[{"data":1,"prerenderedAt":2413},["ShallowReactive",2],{"content:/software-testing/test-automation/what-would-you-stop-doing-when-ui-tests-are-flaky":3,"category:/software-testing/test-automation/what-would-you-stop-doing-when-ui-tests-are-flaky":6,"read-next:/software-testing/test-automation/how-to-handle-failing-tests-caused-by-known-bugs,/software-testing/test-automation/ai-in-testing-2026-state-of-the-industry":405},{"id":4,"title":5,"bmcUsername":6,"body":7,"cover":394,"date":395,"description":396,"draft":397,"extension":398,"features":6,"githubRepo":6,"headline":6,"highlight":6,"icon":6,"meta":399,"navigation":400,"npmPackage":6,"order":6,"path":401,"seo":402,"stem":403,"__hash__":404},"content/software-testing/test-automation/what-would-you-stop-doing-when-ui-tests-are-flaky.md","What Would You Stop Doing When UI Tests Are Flaky?",null,{"type":8,"value":9,"toc":378},"minimark",[10,19,25,36,39,42,45,50,54,69,76,78,82,103,118,121,128,131,133,137,140,169,172,179,181,185,192,195,200,203,206,210,213,228,239,244,249,260,263,267,270,273,276,279,282,285,289,292,299,302,317,320,324,327,330,333,335,339,342,352,362,368,374],[11,12,13,14,18],"p",{},"This question ",[15,16,17],"em",{},"about"," an interview question was recently posted in a QA forum, and the discussion it generated is more interesting than the question itself:",[20,21,22],"blockquote",{},[11,23,24],{},"\"What would you stop doing when UI tests are flaky?\"",[11,26,27,28,31,32,35],{},"The phrasing trips people up. Most interview questions ask what you ",[15,29,30],{},"would do",", essentially what's your process, how do you handle it, what tools do you reach for. This one inverts it. It's asking about habits to ",[15,33,34],{},"eliminate",", which implies the interviewer already assumes you have them. 
It's also, perhaps intentionally, phrased awkwardly.",[11,37,38],{},"I've spent over 20 years in software testing across fintech, SaaS HCM, and insurtech and currently serve as the Director of Quality Engineering at my current employer. I haven't been asked this question in exactly this phrasing, but I've used similar ones from the other side of the table. I know what this type of question is designed to surface.",[11,40,41],{},"Before we get to the answer, let's look at what the QA community said. See if you can guess or click reveal to see all the survey responses.",[43,44],"hr",{},[46,47,49],"h2",{"id":48},"survey-says-what-the-qa-community-answered-this-interview-question","Survey Says — What the QA Community Answered This Interview Question",[51,52],"flaky-test-survey",{":answers":53},"[{\"text\":\"Stop using sleep() / fix timing and waits\",\"votes\":25,\"keywords\":[\"sleep\",\"pause\",\"timing\",\"wait\",\"thread.sleep\",\"time.sleep\"]},{\"text\":\"Investigate root cause first\",\"votes\":11,\"keywords\":[\"investigate\",\"root cause\",\"diagnose\",\"why\",\"cause\",\"reason\"]},{\"text\":\"Quarantine tests from CI\",\"votes\":3,\"keywords\":[\"quarantine\",\"mute\",\"skip\",\"disable\",\"isolate\"]},{\"text\":\"Stop automating an unstable UI\",\"votes\":2,\"keywords\":[\"unstable\",\"automat\",\"flaky ui\",\"not ready\"]},{\"text\":\"Stop adding more tests\",\"votes\":2,\"keywords\":[\"adding\",\"add test\",\"more test\",\"new test\",\"expand\"]},{\"text\":\"Stop running tests in parallel\",\"votes\":1,\"keywords\":[\"parallel\",\"concurrent\",\"simultaneously\"]}]",[11,55,56,57,60,61,64,65,68],{},"The most popular community answers were technical and relatable — ",[15,58,59],{},"stop using sleep()",", ",[15,62,63],{},"fix timing and waits"," — the instinctive responses from anyone who has spent time debugging intermittent failures. 
",[15,66,67],{},"Investigate root cause first"," ranked lower by sheer volume but drew the most endorsement from people who paused to think about what was actually being asked.",[11,70,71,72,75],{},"I also ran a LinkedIn poll with the same question. It had 357 impressions and only 5 votes — low participation — but those 5 voters unanimously chose ",[15,73,74],{},"investigate root cause first",". The gap between the free-comment community vote pattern and the forced-choice poll result is itself telling: when people had to commit to one answer, they chose the diagnostic approach. When free-commenting, they led with the most relatable war story.",[43,77],{},[46,79,81],{"id":80},"why-most-candidates-answer-the-wrong-question","Why Most Candidates Answer the Wrong Question",[11,83,84,85,60,88,60,91,94,95,98,99,102],{},"Here's what's worth pausing on: many of the most popular community answers — including ",[15,86,87],{},"quarantine tests from CI",[15,89,90],{},"add retry logic",[15,92,93],{},"report flakiness to the dev team"," — are valid responses to \"what would you ",[15,96,97],{},"do"," about flaky tests.\" They are not answers to \"what would you ",[15,100,101],{},"stop"," doing.\"",[11,104,105,106,109,110,113,114,117],{},"Quarantining is an action you ",[15,107,108],{},"add"," to your process. Retries are something you ",[15,111,112],{},"implement",". Reporting is something you ",[15,115,116],{},"start"," doing. None of these are things you stop.",[11,119,120],{},"The community's own discussion demonstrated the exact failure mode the question is designed to surface: answering a different question than the one being asked.",[11,122,123,124,127],{},"This is worth a conscious moment when you're in an interview seat. 
Before diving in, restate the question: ",[15,125,126],{},"\"So you're asking what habits I'd stop — not what I'd add to my process?\""," That one sentence signals precision under pressure, and precision matters.",[11,129,130],{},"When I'm conducting an interview, if a candidate is giving an answer that feels off, I'll ask them to repeat back their understanding of the question. Sometimes they're just wrong, but more often they didn't fully process it in the moment due to nerves, language barrier, or, in the case of remote interviews, dropped audio packets. The candidates who handle interviews best are the ones who preemptively restate their understanding before answering. It reads as both confident and careful (good qualities for testers and quality engineers).",[43,132],{},[46,134,136],{"id":135},"what-this-flaky-test-interview-question-is-actually-testing","What This Flaky Test Interview Question Is Actually Testing",[11,138,139],{},"This question tests at least four things at once:",[141,142,143,151,157,163],"ol",{},[144,145,146,150],"li",{},[147,148,149],"strong",{},"Technical knowledge"," — Do you know the common anti-patterns that cause flaky UI tests?",[144,152,153,156],{},[147,154,155],{},"Diagnostic thinking"," — Can you reason about root causes rather than recite a fix list?",[144,158,159,162],{},[147,160,161],{},"Listening comprehension"," — Did you actually process what was asked?",[144,164,165,168],{},[147,166,167],{},"Confidence to challenge ambiguity"," — Will the candidate accept the awkwardly worded question or point that out and ask for clarification?",[11,170,171],{},"A junior answer names tactics: stop using sleep, fix your waits, add retries. Not wrong, but symptom-level.",[11,173,174,175,178],{},"An experienced answer narrates a ",[15,176,177],{},"thought process"," — how you'd identify what's causing the flakiness before deciding what to change. The \"stop doing\" framing is a clue. 
It's asking which habits you've already had to unlearn, implying you've operated at enough scale to have learned them the hard way.",[43,180],{},[46,182,184],{"id":183},"what-to-stop-doing-when-ui-tests-are-flaky-the-full-answer","What to Stop Doing When UI Tests Are Flaky: The Full Answer",[11,186,187,188,191],{},"If asked this question in an interview, I'd clarify the framing first: ",[15,189,190],{},"\"Are you asking about common anti-patterns that lead to flakiness, or more about how I'd approach the investigation?\""," That distinction matters, and asking it signals diagnostic thinking before the answer even starts.",[11,193,194],{},"If they want the approach angle, this is how I'd answer.",[196,197,199],"h3",{"id":198},"stop-adding-tests-to-an-unstable-suite","Stop Adding Tests to an Unstable Suite",[11,201,202],{},"This would be my first answer, and I'd lead with it.",[11,204,205],{},"Adding tests to a flaky suite compounds the problem. Every new test inherits the instability of the environment it runs in. Before expanding coverage, you need to stop the bleeding and understand whether the flakiness lives in the test code, the application behavior, or the infrastructure. That distinction determines the shape of your fix.",[196,207,209],{"id":208},"stop-using-sleep-and-pause-statements","Stop Using sleep() and pause Statements",[11,211,212],{},"This is the answer that generates the most community agreement, and for good reason — it's the most widespread bad habit in UI test automation.",[11,214,215,219,220,223,224,227],{},[216,217,218],"code",{},"sleep()"," and ",[216,221,222],{},"pause"," are blunt instruments. They wait a fixed amount of time regardless of whether the condition they're waiting for became true a second in or never became true at all. 
They're slow, brittle, and mask the real problem: the test doesn't know what it's waiting ",[15,225,226],{},"for",".",[11,229,230,231,234,235,238],{},"This is so well understood that Playwright formally marks ",[216,232,233],{},"page.waitForTimeout()"," as ",[15,236,237],{},"Discouraged"," in their own API docs:",[20,240,241],{},[11,242,243],{},"\"Never wait for timeout in production. Tests that wait for time are inherently flaky. Use Locator actions and web assertions that wait automatically.\"",[245,246],"external-link",{"href":247,"text":248},"https://playwright.dev/docs/api/class-page#page-wait-for-timeout","Playwright docs — page.waitForTimeout()",[11,250,251,252,255,256,259],{},"I've mandated the removal of pause statements from test suites I've managed and replaced them with explicit wait patterns — ",[216,253,254],{},"waitForElementPresent",", custom polling waits — anything that returns as soon as the condition is true rather than waiting out a fixed interval. I've added lint rules to prevent ",[216,257,258],{},".pause"," commands from being checked in at all. On one large serial suite, removing sleep and pause statements alone saved over an hour off the total test run time.",[11,261,262],{},"One practical detail: when setting a max wait timeout, I set it to roughly twice what I'd expect the worst case to be. CI environments consistently run slower than local development in ways that aren't always predictable. A wait that looks generous locally can time out under CI load.",[196,264,266],{"id":265},"stop-assuming-the-problem-is-in-the-test-code","Stop Assuming the Problem Is in the Test Code",[11,268,269],{},"Some flakiness isn't in the test at all.",[11,271,272],{},"I had a test that failed intermittently depending on what time of day the build kicked off. After investigation, the root cause was a timezone mismatch between the server under test and the system running the tests. 
A validation rule in the application behaved differently at a specific hour because of this offset — the test was faithfully catching real behavior, but it looked like random flakiness until you looked closely enough. The initial investigation was tricky because it would pass during normal business hours when we tried to reproduce the failure in the first place!",[11,274,275],{},"The fix was a conditional branch in the test to account for the business rule at that magic hour. I generally avoid conditional branched logic in tests — it adds complexity and makes tests harder to reason about. But we couldn't time-travel or alter system clocks, and the conditional was the honest solution.",[11,277,278],{},"The point: before assuming the test is broken, determine whether you're dealing with test code, an application bug, or an infrastructure mismatch. The investigation approach is different for each.",[11,280,281],{},"It's also worth noting that some intermittent failures aren't flakiness at all — they're the test catching a real intermittent bug in the application. A test that fails once and passes on the next re-run looks identical to a flaky test on the surface. One is noise; the other is a signal you're about to dismiss. This is why every failure deserves investigation before it gets written off.",[11,283,284],{},"The goal is a suite trustworthy enough that the team's first instinct when a test fails is \"it found something\" — not \"ugh, it's flaky, just re-run it.\" The moment re-running becomes the default response, it becomes an annoying car alarm at 3 AM instead of a useful tool.",[196,286,288],{"id":287},"stop-running-tests-in-parallel-without-isolating-shared-state","Stop Running Tests in Parallel Without Isolating Shared State",[11,290,291],{},"Parallelism is worth pursuing — the time savings on a large suite are significant, and it's one of the highest-leverage improvements you can make to CI feedback time. 
The problem isn't parallelism itself; it's running tests in parallel that were never designed for it.",[11,293,294,295,298],{},"Tests that share data, database state, or external resources become order-dependent and environment-dependent the moment you parallelize them. A suite that runs cleanly in serial can look deeply flaky in parallel for no obvious reason — because the flakiness is in the ",[15,296,297],{},"interaction"," between tests, not in any individual test.",[11,300,301],{},"The practical solution is to stop treating your suite as a single homogeneous run and start thinking in terms of what can safely run concurrently:",[303,304,305,311],"ul",{},[144,306,307,310],{},[147,308,309],{},"Read-only tests"," — tests that only query state without mutating it — are natural candidates for parallel execution. They can't interfere with each other.",[144,312,313,316],{},[147,314,315],{},"Write operations, state-dependent flows, and anything touching shared fixtures"," are better kept in a serial suite until you've isolated their data properly (unique test data per run, dedicated test accounts, isolated environments).",[11,318,319],{},"A combined approach — a parallel suite for safe tests and a serial suite for the rest — gets you most of the speed benefit while keeping the flakiness surface small. Once the serial tests are properly isolated with their own data, you can graduate them into the parallel suite over time.",[196,321,323],{"id":322},"stop-treating-flakiness-as-normal","Stop Treating Flakiness as Normal",[11,325,326],{},"The most damaging thing a team can do with a flaky test is shrug and accept it.",[11,328,329],{},"Flakiness trains everyone to ignore failures. Once the build becomes a noise generator instead of a signal, real regressions slip through unchallenged. 
A test suite that cries wolf is functionally worse than no test suite, because it creates false confidence.",[11,331,332],{},"I've used flakiness scoring in both BitBucket and BrowserStack Test Analytics to identify and mute the worst offenders. Muting is not the same as deleting: the test still runs, it just doesn't fail the build while it's under investigation. That distinction matters — it preserves your ability to track whether improvements helped without letting the instability contaminate every build in the meantime.",[43,334],{},[46,336,338],{"id":337},"how-to-answer-flaky-ui-test-interview-questions","How to Answer Flaky UI Test Interview Questions",[11,340,341],{},"A few framing notes regardless of how you structure your answer:",[11,343,344,347,348,351],{},[147,345,346],{},"Restate first."," Before diving in, confirm you understood the question. ",[15,349,350],{},"\"So you're asking what habits I'd stop, not what I'd add to my process?\""," One sentence of confirmation demonstrates careful listening — which is arguably what the question is testing most.",[11,353,354,357,358,361],{},[147,355,356],{},"Narrate, don't list."," A list of tactics sounds like you memorized a checklist. A thought process — ",[15,359,360],{},"\"I'd start by determining whether this is test code, application behavior, or environment, because the fix is different for each\""," — sounds like someone who has actually dealt with this at scale.",[11,363,364,367],{},[147,365,366],{},"Distinguish the problem type."," Not all flakiness has the same root cause. Timing issues, shared state, environment inconsistency, and automating an unstable UI are four different problems with four different fixes. Showing you can distinguish them is what separates a good answer from a more experienced one.",[11,369,370,373],{},[147,371,372],{},"Own a specific example."," The most memorable interview answers are concrete. 
If you've refactored a suite full of sleep statements, or tracked down a timezone mismatch that looked like random flakiness for weeks, say so. Specific experience is more credible than correct-sounding generalizations.",[375,376],"read-next",{":items":377},"[\"/software-testing/test-automation/how-to-handle-failing-tests-caused-by-known-bugs\",\"/software-testing/test-automation/ai-in-testing-2026-state-of-the-industry\"]",{"title":379,"searchDepth":380,"depth":380,"links":381},"",2,[382,383,384,385,393],{"id":48,"depth":380,"text":49},{"id":80,"depth":380,"text":81},{"id":135,"depth":380,"text":136},{"id":183,"depth":380,"text":184,"children":386},[387,389,390,391,392],{"id":198,"depth":388,"text":199},3,{"id":208,"depth":388,"text":209},{"id":265,"depth":388,"text":266},{"id":287,"depth":388,"text":288},{"id":322,"depth":388,"text":323},{"id":337,"depth":380,"text":338},"/images/posts/what-would-you-stop-doing-when-ui-tests-are-flaky/what-would-you-stop-doing-when-ui-tests-are-flaky-cover.webp","2026-05-16","Most QA engineers answer this interview question confidently wrong. 
Here's what \"What would you stop doing when UI tests are flaky?\" is actually testing and what an experienced answer sounds like.",false,"md",{},true,"/software-testing/test-automation/what-would-you-stop-doing-when-ui-tests-are-flaky",{"title":5,"description":396},"software-testing/test-automation/what-would-you-stop-doing-when-ui-tests-are-flaky","UQqz7A_Yr_cEqc_30DhsOl8VbWO3BGTgvAn5DQzv65I",[406,2043],{"id":407,"title":408,"bmcUsername":6,"body":409,"cover":2035,"date":2036,"description":2037,"draft":397,"extension":398,"features":6,"githubRepo":6,"headline":6,"highlight":6,"icon":6,"meta":2038,"navigation":400,"npmPackage":6,"order":6,"path":2039,"seo":2040,"stem":2041,"__hash__":2042},"content/software-testing/test-automation/how-to-handle-failing-tests-caused-by-known-bugs.md","How to Handle Failing Tests Caused by a Known Bug",{"type":8,"value":410,"toc":2008},[411,414,419,422,425,427,431,437,440,444,449,452,463,466,469,476,480,483,494,497,501,504,511,514,517,519,523,526,543,641,646,649,657,660,664,700,704,707,718,721,723,727,730,733,737,740,743,850,861,864,875,877,881,885,1038,1048,1060,1064,1266,1275,1281,1285,1368,1374,1377,1464,1470,1476,1480,1558,1564,1568,1603,1609,1613,1639,1645,1649,1698,1704,1708,1799,1947,1953,1955,1959,1989,1991,1995,1998,2001,2004],[11,412,413],{},"A question came up on a developer forum recently for a solution to a problem that occurs in almost every engineering team eventually:",[20,415,416],{},[11,417,418],{},"\"If a test has already found a bug, one option is to comment the test out until the issue is fixed. However, this has to be done manually, and it becomes time-consuming and hard to manage when there are many tests. How do you handle this in your workflow?\"",[11,420,421],{},"The fact that commenting it out is the assumed default is why I wanted to write this article. Commenting out often feels like the obvious move: the test is noisy, you can't fix the bug right now, so you silence it and move on. 
However, those with experience know that decision has consequences that only become visible weeks or months later when you've forgotten the test ever existed.",[11,423,424],{},"There's a better pattern to temporarily skip or disable your tests, and every major test framework already supports it.",[43,426],{},[46,428,430],{"id":429},"the-three-wrong-answers","The Three Wrong Answers",[11,432,433,434,227],{},"As a test engineer, I want all my bugs fixed as soon as I find them, but in a practical sense that isn't always possible. In Kanban iterations and Scrum team sprints there may not be enough capacity in the maintenance or bug-fix bucket to address bugs triaged ",[15,435,436],{},"below the line",[11,438,439],{},"So when a test is failing due to a confirmed bug that won't be fixed this sprint, there are four options: leave it failing, comment it out, delete it, or skip (disable) it. The first three are wrong. Let's explore why.",[196,441,443],{"id":442},"why-leaving-a-failing-test-in-ci-breaks-your-build-signal","Why Leaving a Failing Test in CI Breaks Your Build Signal",[20,445,446],{},[11,447,448],{},"The test documents a real failure, so leave the build failing until it's resolved, since it reflects reality",[11,450,451],{},"While one could argue it makes sense to keep the test failing until the bug it's detecting is resolved, in practice it's a terrible idea to check in a known failing test.",[303,453,454,457,460],{},[144,455,456],{},"It breaks your build pipeline",[144,458,459],{},"The defect may not be fixable for a long time due to priorities or complexity",[144,461,462],{},"An always-red build gets ignored and lets more bugs sneak in",[11,464,465],{},"A red build that everyone knows is \"just that known bug\" trains the team to ignore red builds. It's like your house alarm going off because someone smashed a window. If you leave the alarm going without fixing anything, you won't notice when someone kicks in the backdoor and robs you again. 
A failing test everyone ignores is a disabled alarm, and a low-severity defect in a complex area, for example, may sit unresolved for months given real sprint priorities. The build can't stay red that entire time.",[11,467,468],{},"The skip pattern, which we'll discuss shortly, silences the noise deliberately, with a paper trail, so the alarm means something again.",[11,470,471,472,475],{},"With that said, there are exceptions. For example, if ",[15,473,474],{},"existing tests"," fail due to a code change that breaks functionality the tests are covering, the build should stay red until the change is reverted or the bug it introduced is fixed. This is different from adding a known failing test to an otherwise green build.",[196,477,479],{"id":478},"delete-the-test","Delete the Test",[11,481,482],{},"Another approach would be to delete the failing test, but I've almost never seen this done in practice.",[303,484,485,488,491],{},[144,486,487],{},"You lose coverage",[144,489,490],{},"Someone has to rewrite or restore the test later, which is wasteful and error-prone",[144,492,493],{},"Easy to forget about",[11,495,496],{},"Again, the skip pattern is the better approach for disabling the test.",[196,498,500],{"id":499},"why-commenting-out-a-failing-test-is-worse-than-it-seems","Why Commenting Out a Failing Test Is Worse Than It Seems",[11,502,503],{},"Commenting out the test seems like a natural way of handling this. Teams do it all the time when temporarily disabling code for debugging, so doing it for tests feels natural too. You can just uncomment it later, but those who've worked in legacy codebases know they are graveyards of forgotten commented-out code. Tests can meet the same fate.",[11,505,506,507,510],{},"Commented-out test code is invisible to your tooling, silently rots, and is almost guaranteed to be forgotten. 
Outside of maybe ",[216,508,509],{},"TODO:"," patterns, there is no reminder in your codebase to re-enable them, nor any indication of how many have accumulated.",[11,512,513],{},"I've seen this play out directly: a test was commented out when a bug was discovered, and it stayed that way until a major cleanup initiative was launched specifically to find dead code and commented-out blocks. When the team went to re-enable it, the codebase had drifted so far that the test was no longer compatible. It had to be rewritten from scratch, not simply re-enabled. The original time investment in writing it produced zero long-term value, and there was no way to know how long that coverage gap had existed or what had shipped during it.",[11,515,516],{},"Now, let's discuss the correct way of handling failing tests for bugs that can't be fixed quickly.",[43,518],{},[46,520,522],{"id":521},"how-to-skip-a-failing-test-the-right-way","How to Skip a Failing Test the Right Way",[11,524,525],{},"Every major test framework has a built-in skip mechanism for this very scenario. Use it.",[141,527,528,531,534,537,540],{},[144,529,530],{},"Create a bug ticket for the issue in your team's bug tracking system.",[144,532,533],{},"Note the defect number.",[144,535,536],{},"Use the test.skip syntax for your test framework to disable/skip the test programmatically.",[144,538,539],{},"Include a TODO comment to unskip or re-enable the test once the bug is resolved.",[144,541,542],{},"Note the location of the test in the bug ticket with instructions to enable and run the test to verify the defect is resolved and to check in the test update with the bug fix.",[544,545,550],"pre",{"className":546,"code":547,"filename":548,"language":549,"meta":379,"style":379},"language-typescript shiki shiki-themes material-theme-lighter github-light-high-contrast github-dark-high-contrast","// Don't do this: invisible, rots silently, easy to forget\n// test('user can reset password', async () => { ... 
})\n\n// Do this: explicit, visible in reports, linked to the bug\n\n// TODO: Test finding BUG#4521 - password reset endpoint returns 500, re-enable when fixed\ntest.skip('user can reset password', async () => { ... })\n","skipped-test-example.ts","typescript",[216,551,552,561,566,571,577,582,588],{"__ignoreMap":379},[553,554,557],"span",{"class":555,"line":556},"line",1,[553,558,560],{"class":559},"s_gjE","// Don't do this: invisible, rots silently, easy to forget\n",[553,562,563],{"class":555,"line":380},[553,564,565],{"class":559},"// test('user can reset password', async () => { ... })\n",[553,567,568],{"class":555,"line":388},[553,569,570],{"emptyLinePlaceholder":400},"\n",[553,572,574],{"class":555,"line":573},4,[553,575,576],{"class":559},"// Do this: explicit, visible in reports, linked to the bug\n",[553,578,580],{"class":555,"line":579},5,[553,581,570],{"emptyLinePlaceholder":400},[553,583,585],{"class":555,"line":584},6,[553,586,587],{"class":559},"// TODO: Test finding BUG#4521 - password reset endpoint returns 500, re-enable when fixed\n",[553,589,591,595,598,602,605,609,613,615,618,622,625,628,631,635,638],{"class":555,"line":590},7,[553,592,594],{"class":593},"sZ-rw","test",[553,596,227],{"class":597},"sPJuK",[553,599,601],{"class":600},"sb1SK","skip",[553,603,604],{"class":593},"(",[553,606,608],{"class":607},"sZi47","'",[553,610,612],{"class":611},"srGNg","user can reset password",[553,614,608],{"class":607},[553,616,617],{"class":597},",",[553,619,621],{"class":620},"stWsX"," async",[553,623,624],{"class":597}," ()",[553,626,627],{"class":620}," =>",[553,629,630],{"class":597}," {",[553,632,634],{"class":633},"sE6rD"," ...",[553,636,637],{"class":597}," }",[553,639,640],{"class":593},")\n",[11,642,643],{},[15,644,645],{},"Some test frameworks also allow a reason string as a test.skip or test-disable parameter, alleviating the need for a separate TODO comment line.",[11,647,648],{},"Unlike commented-out code, a skipped test still surfaces in your run 
reports:",[544,650,655],{"className":651,"code":653,"language":654},[652],"language-text","12 passed, 0 failed, 1 skipped\n","text",[216,656,653],{"__ignoreMap":379},[11,658,659],{},"That count is a standing reminder that something needs to come back. It shows up on every run, in every CI report, without anyone having to go looking for it.",[196,661,663],{"id":662},"why-skip-beats-commenting-out","Why Skip Beats Commenting Out",[303,665,666,672,682,688,694],{},[144,667,668,671],{},[147,669,670],{},"Commented-out tests are completely invisible."," No skipped count, no reason string, no indication in test output that anything is missing. The gap is hidden from anyone reviewing CI results.",[144,673,674,677,678,681],{},[147,675,676],{},"Comments don't surface in TODO tracking."," IDEs and code review tools can surface ",[216,679,680],{},"// TODO"," comments as actionable items. A commented-out test block is dead code. It won't appear in any report or task list prompting someone to revisit it.",[144,683,684,687],{},[147,685,686],{},"Commented-out code goes stale silently."," As the codebase evolves, commented-out tests develop broken syntax, outdated method calls, and references to renamed or removed APIs. Nobody notices because the code never has to compile. When someone eventually tries to re-enable it, they're restoring broken code.",[144,689,690,693],{},[147,691,692],{},"Skipped tests still compile."," A skipped test is live code. In typed languages, if a method is renamed or a parameter type changes, the skipped test will surface a compile error immediately. 
The breakage is caught, not hidden.",[144,695,696,699],{},[147,697,698],{},"Skip reasons are searchable."," Searching the codebase for a ticket number instantly finds every test gated on that bug.",[196,701,703],{"id":702},"linking-skipped-tests-to-bug-tickets","Linking Skipped Tests to Bug Tickets",[11,705,706],{},"The skip pattern only closes the loop if both sides reference each other:",[141,708,709,712,715],{},[144,710,711],{},"The skip reason includes the bug ticket number or URL",[144,713,714],{},"The bug ticket description references the test file and test name",[144,716,717],{},"Re-enabling the test is an explicit step in the bug fix, not an afterthought",[11,719,720],{},"When the bug is fixed, the developer checks the ticket, finds the test reference, re-enables it, and verifies it passes before closing. This makes test restoration a first-class step in the fix workflow rather than something that gets remembered, or more often, forgotten.",[43,722],{},[46,724,726],{"id":725},"a-fair-counterpoint","A Fair Counterpoint",[11,728,729],{},"A commenter responding to the forum thread made a fair point: the skip pattern is technically the right answer, but it still requires discipline. Skipped tests are easy to ignore. It takes active effort to monitor the skipped count, prioritize the underlying bugs, and actually re-enable tests when fixes land. Otherwise, skipped tests accumulate and become their own form of technical debt.",[11,731,732],{},"That's true. But the same discipline argument applies even more strongly to commented-out tests. A skipped count is visible in every CI run: it's a number that can be tracked, trended, and reviewed in sprint planning. A commented-out test shows up nowhere. 
If discipline is the concern, the approach that provides the most visibility is the better starting point.",[196,734,736],{"id":735},"using-a-ci-gate-to-enforce-a-skipped-test-threshold","Using a CI Gate to Enforce a Skipped Test Threshold",[11,738,739],{},"If skipped-count drift is a real concern for your team, you can turn discipline into policy with a CI gate that fails the build if the skipped count exceeds a defined threshold.",[11,741,742],{},"To my knowledge, neither Jest nor JUnit has a built-in threshold option for this, but there is a practical, framework-agnostic approach using a two-step GitHub Actions pattern: parse your JUnit XML test output to extract the skipped count, then fail the step if it exceeds your threshold.",[544,744,749],{"className":745,"code":746,"filename":747,"language":748,"meta":379,"style":379},"language-yaml shiki shiki-themes material-theme-lighter github-light-high-contrast github-dark-high-contrast","- uses: mikepenz/action-junit-report@v4\n  id: junit\n  with:\n    report_paths: '**/test-results/*.xml'\n\n- name: Fail if skipped tests exceed threshold\n  if: fromJson(steps.junit.outputs.skipped) > 5\n  run: |\n    echo \"Skipped test count (${{ steps.junit.outputs.skipped }}) exceeds threshold of 5\"\n    exit 1\n",".github/workflows/test.yml","yaml",[216,750,751,766,776,784,800,804,816,826,838,844],{"__ignoreMap":379},[553,752,753,756,760,763],{"class":555,"line":556},[553,754,755],{"class":597},"-",[553,757,759],{"class":758},"saWzx"," uses",[553,761,762],{"class":597},":",[553,764,765],{"class":611}," mikepenz/action-junit-report@v4\n",[553,767,768,771,773],{"class":555,"line":380},[553,769,770],{"class":758},"  id",[553,772,762],{"class":597},[553,774,775],{"class":611}," junit\n",[553,777,778,781],{"class":555,"line":388},[553,779,780],{"class":758},"  with",[553,782,783],{"class":597},":\n",[553,785,786,789,791,794,797],{"class":555,"line":573},[553,787,788],{"class":758},"    report_paths",[553,790,762],{"class":597},[553,792,793],{"class":607}," '",[553,795,796],{"class":611},"**/test-results/*.xml",[553,798,799],{"class":607},"'\n",[553,801,802],{"class":555,"line":579},[553,803,570],{"emptyLinePlaceholder":400},[553,805,806,808,811,813],{"class":555,"line":584},[553,807,755],{"class":597},[553,809,810],{"class":758}," name",[553,812,762],{"class":597},[553,814,815],{"class":611}," Fail if skipped tests exceed threshold\n",[553,817,818,821,823],{"class":555,"line":590},[553,819,820],{"class":758},"  if",[553,822,762],{"class":597},[553,824,825],{"class":611}," fromJson(steps.junit.outputs.skipped) > 5\n",[553,827,829,832,834],{"class":555,"line":828},8,[553,830,831],{"class":758},"  run",[553,833,762],{"class":597},[553,835,837],{"class":836},"sZTni"," |\n",[553,839,841],{"class":555,"line":840},9,[553,842,843],{"class":611},"    echo \"Skipped test count (${{ steps.junit.outputs.skipped }}) exceeds threshold of 5\"\n",[553,845,847],{"class":555,"line":846},10,[553,848,849],{"class":611},"    exit 1\n",[11,851,852,853,856,857,860],{},"This works for any framework that outputs JUnit XML: Jest via ",[216,854,855],{},"jest-junit",", Playwright via its built-in JUnit reporter, pytest via its built-in ",[216,858,859],{},"--junit-xml",", and JUnit 5 natively. The threshold should reflect what's acceptable for your team. Even setting it generously and trending the number over sprints is more actionable than having no visibility at all.",[11,862,863],{},"Critically, this kind of gate is only possible with skips. You cannot gate on commented-out tests because your tooling has no visibility into them.",[11,865,866,867,870,871,874],{},"For teams using Jest, the ",[216,868,869],{},"jest/no-disabled-tests"," rule from eslint-plugin-jest is a useful complement. 
It catches ",[216,872,873],{},"test.skip()"," at code review time, before it reaches CI.",[43,876],{},[46,878,880],{"id":879},"test-skip-syntax-by-framework","Test Skip Syntax by Framework",[196,882,884],{"id":883},"jest-and-vitest","Jest and Vitest",[544,886,889],{"className":546,"code":887,"filename":888,"language":549,"meta":379,"style":379},"// Skip a single test\ntest.skip('user can reset password', () => {\n  // Bug #4521: password reset endpoint returns 500\n})\n\n// Skip a suite\ndescribe.skip('Password Reset', () => { ... })\n\n// Older alias syntax, both work\nxit('user can reset password', () => { ... })\nxdescribe('Password Reset', () => { ... })\n","jest-vitest-test-skip-example.ts",[216,890,891,896,921,926,933,937,942,974,978,983,1010],{"__ignoreMap":379},[553,892,893],{"class":555,"line":556},[553,894,895],{"class":559},"// Skip a single test\n",[553,897,898,900,902,904,906,908,910,912,914,916,918],{"class":555,"line":380},[553,899,594],{"class":593},[553,901,227],{"class":597},[553,903,601],{"class":600},[553,905,604],{"class":593},[553,907,608],{"class":607},[553,909,612],{"class":611},[553,911,608],{"class":607},[553,913,617],{"class":597},[553,915,624],{"class":597},[553,917,627],{"class":620},[553,919,920],{"class":597}," {\n",[553,922,923],{"class":555,"line":388},[553,924,925],{"class":559},"  // Bug #4521: password reset endpoint returns 500\n",[553,927,928,931],{"class":555,"line":573},[553,929,930],{"class":597},"}",[553,932,640],{"class":593},[553,934,935],{"class":555,"line":579},[553,936,570],{"emptyLinePlaceholder":400},[553,938,939],{"class":555,"line":584},[553,940,941],{"class":559},"// Skip a suite\n",[553,943,944,947,949,951,953,955,958,960,962,964,966,968,970,972],{"class":555,"line":590},[553,945,946],{"class":593},"describe",[553,948,227],{"class":597},[553,950,601],{"class":600},[553,952,604],{"class":593},[553,954,608],{"class":607},[553,956,957],{"class":611},"Password 
Reset",[553,959,608],{"class":607},[553,961,617],{"class":597},[553,963,624],{"class":597},[553,965,627],{"class":620},[553,967,630],{"class":597},[553,969,634],{"class":633},[553,971,637],{"class":597},[553,973,640],{"class":593},[553,975,976],{"class":555,"line":828},[553,977,570],{"emptyLinePlaceholder":400},[553,979,980],{"class":555,"line":840},[553,981,982],{"class":559},"// Older alias syntax, both work\n",[553,984,985,988,990,992,994,996,998,1000,1002,1004,1006,1008],{"class":555,"line":846},[553,986,987],{"class":600},"xit",[553,989,604],{"class":593},[553,991,608],{"class":607},[553,993,612],{"class":611},[553,995,608],{"class":607},[553,997,617],{"class":597},[553,999,624],{"class":597},[553,1001,627],{"class":620},[553,1003,630],{"class":597},[553,1005,634],{"class":633},[553,1007,637],{"class":597},[553,1009,640],{"class":593},[553,1011,1013,1016,1018,1020,1022,1024,1026,1028,1030,1032,1034,1036],{"class":555,"line":1012},11,[553,1014,1015],{"class":600},"xdescribe",[553,1017,604],{"class":593},[553,1019,608],{"class":607},[553,1021,957],{"class":611},[553,1023,608],{"class":607},[553,1025,617],{"class":597},[553,1027,624],{"class":597},[553,1029,627],{"class":620},[553,1031,630],{"class":597},[553,1033,634],{"class":633},[553,1035,637],{"class":597},[553,1037,640],{"class":593},[11,1039,1040,1041,219,1044,1047],{},"Vitest uses identical syntax to Jest. 
",[216,1042,1043],{},"test.skip",[216,1045,1046],{},"describe.skip"," work the same way.",[11,1049,1050,1051,1055,1056],{},"Docs: ",[245,1052],{"href":1053,"text":1054},"https://jestjs.io/docs/api#describeskipname-fn","Jest skip"," · ",[245,1057],{"href":1058,"text":1059},"https://vitest.dev/api/test.html#test-skip","Vitest skip",[196,1061,1063],{"id":1062},"playwright","Playwright",[544,1065,1068],{"className":546,"code":1066,"filename":1067,"language":549,"meta":379,"style":379},"// Skip unconditionally\ntest.skip('user can reset password', async ({ page }) => {\n  // Bug #4521: password reset endpoint returns 500\n})\n\n// Skip conditionally, useful for browser-specific bugs\ntest('user can reset password', async ({ page, browserName }) => {\n  test.skip(browserName === 'webkit', 'Bug #4521: fails on Safari only')\n  // ...\n})\n\n// test.fixme: skips the test but signals it urgently needs attention\n// Shows up differently in the Playwright HTML report\ntest.fixme('user can reset password', async ({ page }) => {\n  // Bug #4521: password reset endpoint returns 500\n})\n","playwright-test-skip-disable-example.ts",[216,1069,1070,1075,1109,1113,1119,1123,1128,1159,1195,1200,1206,1210,1216,1222,1254,1259],{"__ignoreMap":379},[553,1071,1072],{"class":555,"line":556},[553,1073,1074],{"class":559},"// Skip unconditionally\n",[553,1076,1077,1079,1081,1083,1085,1087,1089,1091,1093,1095,1098,1102,1105,1107],{"class":555,"line":380},[553,1078,594],{"class":593},[553,1080,227],{"class":597},[553,1082,601],{"class":600},[553,1084,604],{"class":593},[553,1086,608],{"class":607},[553,1088,612],{"class":611},[553,1090,608],{"class":607},[553,1092,617],{"class":597},[553,1094,621],{"class":620},[553,1096,1097],{"class":597}," ({",[553,1099,1101],{"class":1100},"s2xgV"," page",[553,1103,1104],{"class":597}," 
})",[553,1106,627],{"class":620},[553,1108,920],{"class":597},[553,1110,1111],{"class":555,"line":388},[553,1112,925],{"class":559},[553,1114,1115,1117],{"class":555,"line":573},[553,1116,930],{"class":597},[553,1118,640],{"class":593},[553,1120,1121],{"class":555,"line":579},[553,1122,570],{"emptyLinePlaceholder":400},[553,1124,1125],{"class":555,"line":584},[553,1126,1127],{"class":559},"// Skip conditionally, useful for browser-specific bugs\n",[553,1129,1130,1132,1134,1136,1138,1140,1142,1144,1146,1148,1150,1153,1155,1157],{"class":555,"line":590},[553,1131,594],{"class":600},[553,1133,604],{"class":593},[553,1135,608],{"class":607},[553,1137,612],{"class":611},[553,1139,608],{"class":607},[553,1141,617],{"class":597},[553,1143,621],{"class":620},[553,1145,1097],{"class":597},[553,1147,1101],{"class":1100},[553,1149,617],{"class":597},[553,1151,1152],{"class":1100}," browserName",[553,1154,1104],{"class":597},[553,1156,627],{"class":620},[553,1158,920],{"class":597},[553,1160,1161,1164,1166,1168,1171,1174,1177,1179,1182,1184,1186,1188,1191,1193],{"class":555,"line":828},[553,1162,1163],{"class":593},"  test",[553,1165,227],{"class":597},[553,1167,601],{"class":600},[553,1169,604],{"class":1170},"sq0XF",[553,1172,1173],{"class":593},"browserName",[553,1175,1176],{"class":633}," ===",[553,1178,793],{"class":607},[553,1180,1181],{"class":611},"webkit",[553,1183,608],{"class":607},[553,1185,617],{"class":597},[553,1187,793],{"class":607},[553,1189,1190],{"class":611},"Bug #4521: fails on Safari only",[553,1192,608],{"class":607},[553,1194,640],{"class":1170},[553,1196,1197],{"class":555,"line":840},[553,1198,1199],{"class":559},"  // ...\n",[553,1201,1202,1204],{"class":555,"line":846},[553,1203,930],{"class":597},[553,1205,640],{"class":593},[553,1207,1208],{"class":555,"line":1012},[553,1209,570],{"emptyLinePlaceholder":400},[553,1211,1213],{"class":555,"line":1212},12,[553,1214,1215],{"class":559},"// test.fixme: skips the test but signals it urgently needs 
attention\n",[553,1217,1219],{"class":555,"line":1218},13,[553,1220,1221],{"class":559},"// Shows up differently in the Playwright HTML report\n",[553,1223,1225,1227,1229,1232,1234,1236,1238,1240,1242,1244,1246,1248,1250,1252],{"class":555,"line":1224},14,[553,1226,594],{"class":593},[553,1228,227],{"class":597},[553,1230,1231],{"class":600},"fixme",[553,1233,604],{"class":593},[553,1235,608],{"class":607},[553,1237,612],{"class":611},[553,1239,608],{"class":607},[553,1241,617],{"class":597},[553,1243,621],{"class":620},[553,1245,1097],{"class":597},[553,1247,1101],{"class":1100},[553,1249,1104],{"class":597},[553,1251,627],{"class":620},[553,1253,920],{"class":597},[553,1255,1257],{"class":555,"line":1256},15,[553,1258,925],{"class":559},[553,1260,1262,1264],{"class":555,"line":1261},16,[553,1263,930],{"class":597},[553,1265,640],{"class":593},[11,1267,1268,1271,1272,1274],{},[216,1269,1270],{},"test.fixme"," behaves like ",[216,1273,1043],{}," but communicates more urgency. Use it when the test needs to come back soon rather than being parked indefinitely.",[11,1276,1050,1277],{},[245,1278],{"href":1279,"text":1280},"https://playwright.dev/docs/test-annotations#skip-a-test","Playwright test annotations",[196,1282,1284],{"id":1283},"cypress","Cypress",[544,1286,1289],{"className":546,"code":1287,"filename":1288,"language":549,"meta":379,"style":379},"// Skip a single test\nit.skip('user can reset password', () => {\n  // Bug #4521: password reset endpoint returns 500\n})\n\n// Skip a suite\ndescribe.skip('Password Reset', () => { ... 
})\n","cypress-test-skip-example.ts",[216,1290,1291,1295,1320,1324,1330,1334,1338],{"__ignoreMap":379},[553,1292,1293],{"class":555,"line":556},[553,1294,895],{"class":559},[553,1296,1297,1300,1302,1304,1306,1308,1310,1312,1314,1316,1318],{"class":555,"line":380},[553,1298,1299],{"class":593},"it",[553,1301,227],{"class":597},[553,1303,601],{"class":600},[553,1305,604],{"class":593},[553,1307,608],{"class":607},[553,1309,612],{"class":611},[553,1311,608],{"class":607},[553,1313,617],{"class":597},[553,1315,624],{"class":597},[553,1317,627],{"class":620},[553,1319,920],{"class":597},[553,1321,1322],{"class":555,"line":388},[553,1323,925],{"class":559},[553,1325,1326,1328],{"class":555,"line":573},[553,1327,930],{"class":597},[553,1329,640],{"class":593},[553,1331,1332],{"class":555,"line":579},[553,1333,570],{"emptyLinePlaceholder":400},[553,1335,1336],{"class":555,"line":584},[553,1337,941],{"class":559},[553,1339,1340,1342,1344,1346,1348,1350,1352,1354,1356,1358,1360,1362,1364,1366],{"class":555,"line":590},[553,1341,946],{"class":593},[553,1343,227],{"class":597},[553,1345,601],{"class":600},[553,1347,604],{"class":593},[553,1349,608],{"class":607},[553,1351,957],{"class":611},[553,1353,608],{"class":607},[553,1355,617],{"class":597},[553,1357,624],{"class":597},[553,1359,627],{"class":620},[553,1361,630],{"class":597},[553,1363,634],{"class":633},[553,1365,637],{"class":597},[553,1367,640],{"class":593},[11,1369,1050,1370],{},[245,1371],{"href":1372,"text":1373},"https://docs.cypress.io/app/guides/migration/playwright-to-cypress#Test-structure-and-syntax-migration","Cypress test structure",[196,1375,1376],{"id":1376},"pytest",[544,1378,1383],{"className":1379,"code":1380,"filename":1381,"language":1382,"meta":379,"style":379},"language-python shiki shiki-themes material-theme-lighter github-light-high-contrast github-dark-high-contrast","import os\nimport pytest\n\n# Skip unconditionally with reason\n@pytest.mark.skip(reason=\"Bug #4521: password reset endpoint returns 
500\")\ndef test_user_can_reset_password():\n    ...\n\n# Skip conditionally, useful for environment-specific bugs\n@pytest.mark.skipif(os.getenv(\"ENV\") == \"staging\", reason=\"Bug #4521: only affects staging\")\ndef test_user_can_reset_password():\n    ...\n\n# xfail: marks as expected failure, test still runs\n# Use when you want the test to run but not break the build\n@pytest.mark.xfail(reason=\"Bug #4521: known failure, fix in progress\")\ndef test_user_can_reset_password():\n    ...\n","pytest-skip-test-example.py","python",[216,1384,1385,1390,1394,1399,1404,1409,1414,1418,1423,1428,1432,1436,1440,1445,1450,1455,1459],{"__ignoreMap":379},[553,1386,1387],{"class":555,"line":556},[553,1388,1389],{},"import os\nimport pytest\n",[553,1391,1392],{"class":555,"line":380},[553,1393,570],{"emptyLinePlaceholder":400},[553,1395,1396],{"class":555,"line":388},[553,1397,1398],{},"# Skip unconditionally with reason\n",[553,1400,1401],{"class":555,"line":573},[553,1402,1403],{},"@pytest.mark.skip(reason=\"Bug #4521: password reset endpoint returns 500\")\n",[553,1405,1406],{"class":555,"line":579},[553,1407,1408],{},"def test_user_can_reset_password():\n",[553,1410,1411],{"class":555,"line":584},[553,1412,1413],{},"    ...\n",[553,1415,1416],{"class":555,"line":590},[553,1417,570],{"emptyLinePlaceholder":400},[553,1419,1420],{"class":555,"line":828},[553,1421,1422],{},"# Skip conditionally, useful for environment-specific bugs\n",[553,1424,1425],{"class":555,"line":840},[553,1426,1427],{},"@pytest.mark.skipif(os.getenv(\"ENV\") == \"staging\", reason=\"Bug #4521: only affects staging\")\n",[553,1429,1430],{"class":555,"line":846},[553,1431,1408],{},[553,1433,1434],{"class":555,"line":1012},[553,1435,1413],{},[553,1437,1438],{"class":555,"line":1212},[553,1439,570],{"emptyLinePlaceholder":400},[553,1441,1442],{"class":555,"line":1218},[553,1443,1444],{},"# xfail: marks as expected failure, test still runs\n",[553,1446,1447],{"class":555,"line":1224},[553,1448,1449],{},"# Use when 
you want the test to run but not break the build\n",[553,1451,1452],{"class":555,"line":1256},[553,1453,1454],{},"@pytest.mark.xfail(reason=\"Bug #4521: known failure, fix in progress\")\n",[553,1456,1457],{"class":555,"line":1261},[553,1458,1408],{},[553,1460,1462],{"class":555,"line":1461},17,[553,1463,1413],{},[11,1465,1466,1469],{},[216,1467,1468],{},"pytest.mark.xfail"," is a useful middle ground. The test still runs, but a failure is expected and won't break the build. Use it when you want visibility that the test is currently broken without silencing it entirely.",[11,1471,1050,1472],{},[245,1473],{"href":1474,"text":1475},"https://docs.pytest.org/en/stable/reference/reference.html#pytest.skip","pytest skip reference",[196,1477,1479],{"id":1478},"junit-5","JUnit 5",[544,1481,1486],{"className":1482,"code":1483,"filename":1484,"language":1485,"meta":379,"style":379},"language-java shiki shiki-themes material-theme-lighter github-light-high-contrast github-dark-high-contrast","import org.junit.jupiter.api.Disabled;\nimport org.junit.jupiter.api.Test;\n\n// Skip a single test\n@Disabled(\"Bug #4521: password reset endpoint returns 500, fix pending\")\n@Test\nvoid userCanResetPassword() {\n    // ...\n}\n\n// Skip an entire test class\n@Disabled(\"Bug #4521: all password reset tests affected\")\nclass PasswordResetTests {\n    // ...\n}\n","junit-test-disable.spec.java","java",[216,1487,1488,1493,1498,1502,1506,1511,1516,1521,1526,1531,1535,1540,1545,1550,1554],{"__ignoreMap":379},[553,1489,1490],{"class":555,"line":556},[553,1491,1492],{},"import org.junit.jupiter.api.Disabled;\n",[553,1494,1495],{"class":555,"line":380},[553,1496,1497],{},"import org.junit.jupiter.api.Test;\n",[553,1499,1500],{"class":555,"line":388},[553,1501,570],{"emptyLinePlaceholder":400},[553,1503,1504],{"class":555,"line":573},[553,1505,895],{},[553,1507,1508],{"class":555,"line":579},[553,1509,1510],{},"@Disabled(\"Bug #4521: password reset endpoint returns 500, fix 
pending\")\n",[553,1512,1513],{"class":555,"line":584},[553,1514,1515],{},"@Test\n",[553,1517,1518],{"class":555,"line":590},[553,1519,1520],{},"void userCanResetPassword() {\n",[553,1522,1523],{"class":555,"line":828},[553,1524,1525],{},"    // ...\n",[553,1527,1528],{"class":555,"line":840},[553,1529,1530],{},"}\n",[553,1532,1533],{"class":555,"line":846},[553,1534,570],{"emptyLinePlaceholder":400},[553,1536,1537],{"class":555,"line":1012},[553,1538,1539],{},"// Skip an entire test class\n",[553,1541,1542],{"class":555,"line":1212},[553,1543,1544],{},"@Disabled(\"Bug #4521: all password reset tests affected\")\n",[553,1546,1547],{"class":555,"line":1218},[553,1548,1549],{},"class PasswordResetTests {\n",[553,1551,1552],{"class":555,"line":1224},[553,1553,1525],{},[553,1555,1556],{"class":555,"line":1256},[553,1557,1530],{},[11,1559,1050,1560],{},[245,1561],{"href":1562,"text":1563},"https://docs.junit.org/6.0.3/writing-tests/disabling-tests.html","JUnit disabling tests",[196,1565,1567],{"id":1566},"nunit-net","NUnit (.NET)",[544,1569,1573],{"className":1570,"code":1571,"language":1572,"meta":379,"style":379},"language-csharp shiki shiki-themes material-theme-lighter github-light-high-contrast github-dark-high-contrast","[Test]\n[Ignore(\"Bug #4521: password reset endpoint returns 500, fix pending\")]\npublic void UserCanResetPassword()\n{\n    // ...\n}\n","csharp",[216,1574,1575,1580,1585,1590,1595,1599],{"__ignoreMap":379},[553,1576,1577],{"class":555,"line":556},[553,1578,1579],{},"[Test]\n",[553,1581,1582],{"class":555,"line":380},[553,1583,1584],{},"[Ignore(\"Bug #4521: password reset endpoint returns 500, fix pending\")]\n",[553,1586,1587],{"class":555,"line":388},[553,1588,1589],{},"public void 
UserCanResetPassword()\n",[553,1591,1592],{"class":555,"line":573},[553,1593,1594],{},"{\n",[553,1596,1597],{"class":555,"line":579},[553,1598,1525],{},[553,1600,1601],{"class":555,"line":584},[553,1602,1530],{},[11,1604,1050,1605],{},[245,1606],{"href":1607,"text":1608},"https://docs.nunit.org/articles/nunit/writing-tests/attributes/ignore.html","NUnit Ignore attribute",[196,1610,1612],{"id":1611},"xunit-net","xUnit (.NET)",[544,1614,1616],{"className":1570,"code":1615,"language":1572,"meta":379,"style":379},"[Fact(Skip = \"Bug #4521: password reset endpoint returns 500, fix pending\")]\npublic void UserCanResetPassword()\n{\n    // ...\n}\n",[216,1617,1618,1623,1627,1631,1635],{"__ignoreMap":379},[553,1619,1620],{"class":555,"line":556},[553,1621,1622],{},"[Fact(Skip = \"Bug #4521: password reset endpoint returns 500, fix pending\")]\n",[553,1624,1625],{"class":555,"line":380},[553,1626,1589],{},[553,1628,1629],{"class":555,"line":388},[553,1630,1594],{},[553,1632,1633],{"class":555,"line":573},[553,1634,1525],{},[553,1636,1637],{"class":555,"line":579},[553,1638,1530],{},[11,1640,1050,1641],{},[245,1642],{"href":1643,"text":1644},"https://api.xunit.net/v3/3.2.2/v3.3.2.2-Xunit.Assert.Skip.html","xUnit Skip",[196,1646,1648],{"id":1647},"rspec-ruby","RSpec (Ruby)",[544,1650,1654],{"className":1651,"code":1652,"language":1653,"meta":379,"style":379},"language-ruby shiki shiki-themes material-theme-lighter github-light-high-contrast github-dark-high-contrast","# Skip with reason\nit 'allows user to reset password', :skip => 'Bug #4521: password reset returns 500' do\n  # ...\nend\n\n# pending: similar to xfail, marks as pending, body is not executed\npending 'Bug #4521: password reset returns 500' do\n  # ...\nend\n","ruby",[216,1655,1656,1661,1666,1671,1676,1680,1685,1690,1694],{"__ignoreMap":379},[553,1657,1658],{"class":555,"line":556},[553,1659,1660],{},"# Skip with reason\n",[553,1662,1663],{"class":555,"line":380},[553,1664,1665],{},"it 'allows user to reset 
password', :skip => 'Bug #4521: password reset returns 500' do\n  # ...\nend\n\n# pending: similar to xfail, the body still runs and is expected to fail\npending 'Bug #4521: password reset returns 500' do\n  # ...\nend\n","ruby",[216,1655,1656,1661,1666,1671,1676,1680,1685,1690,1694],{"__ignoreMap":379},[553,1657,1658],{"class":555,"line":556},[553,1659,1660],{},"# Skip with reason\n",[553,1662,1663],{"class":555,"line":380},[553,1664,1665],{},"it 'allows user to reset password', :skip => 'Bug #4521: password reset returns 500' do\n",[553,1667,1668],{"class":555,"line":388},[553,1669,1670],{},"  # ...\n",[553,1672,1673],{"class":555,"line":573},[553,1674,1675],{},"end\n",[553,1677,1678],{"class":555,"line":579},[553,1679,570],{"emptyLinePlaceholder":400},[553,1681,1682],{"class":555,"line":584},[553,1683,1684],{},"# pending: similar to xfail, the body still runs and is expected to fail\n",[553,1686,1687],{"class":555,"line":590},[553,1688,1689],{},"pending 'Bug #4521: password reset returns 500' do\n",[553,1691,1692],{"class":555,"line":828},[553,1693,1670],{},[553,1695,1696],{"class":555,"line":840},[553,1697,1675],{},[11,1699,1050,1700],{},[245,1701],{"href":1702,"text":1703},"https://rspec.info/features/3-12/rspec-core/pending-and-skipped-examples/","RSpec pending and skipped examples",[196,1705,1707],{"id":1706},"nightwatch","Nightwatch",[544,1709,1714],{"className":1710,"code":1711,"filename":1712,"language":1713,"meta":379,"style":379},"language-javascript shiki shiki-themes material-theme-lighter github-light-high-contrast github-dark-high-contrast","module.exports = {\n  '@disabled': true, // This will prevent the test module from running.\n  \n  'sample test': function (browser) {\n    // test code\n  }\n};\n","nightwatch-skip-pattern.js",[216,1715,1716,1732,1754,1759,1784,1789,1794],{"__ignoreMap":379},[553,1717,1718,1722,1724,1727,1730],{"class":555,"line":556},[553,1719,1721],{"class":1720},"sPxkN","module",[553,1723,227],{"class":597},[553,1725,1726],{"class":1720},"exports",[553,1728,1729],{"class":633}," =",[553,1731,920],{"class":597},[553,1733,1734,1737,1741,1743,1745,1749,1751],{"class":555,"line":380},[553,1735,1736],{"class":607},"  '",[553,1738,1740],{"class":1739},"sqmHM","@disabled",[553,1742,608],{"class":607},[553,1744,762],{"class":597},[553,1746,1748],{"class":1747},"sTqCK"," true",[553,1750,617],{"class":597},[553,1752,1753],{"class":559}," // This will prevent the test module from 
running.\n",[553,1755,1756],{"class":555,"line":388},[553,1757,1758],{"class":593},"  \n",[553,1760,1761,1763,1766,1768,1770,1773,1776,1779,1782],{"class":555,"line":573},[553,1762,1736],{"class":607},[553,1764,1765],{"class":1739},"sample test",[553,1767,608],{"class":607},[553,1769,762],{"class":597},[553,1771,1772],{"class":620}," function",[553,1774,1775],{"class":597}," (",[553,1777,1778],{"class":1100},"browser",[553,1780,1781],{"class":597},")",[553,1783,920],{"class":597},[553,1785,1786],{"class":555,"line":579},[553,1787,1788],{"class":559},"    // test code\n",[553,1790,1791],{"class":555,"line":584},[553,1792,1793],{"class":597},"  }\n",[553,1795,1796],{"class":555,"line":590},[553,1797,1798],{"class":597},"};\n",[544,1800,1803],{"className":1710,"code":1801,"filename":1802,"language":1713,"meta":379,"style":379},"describe('homepage test with describe', function() {\n  \n  // skipped testcase: equivalent to: test.skip(), it.skip(), and xit()\n  it.skip('async testcase', async browser => {\n    const result = await browser.getText('#navigation');\n    console.log('result', result.value)\n  });\n});\n","nightwatch-skip-describe-style.js",[216,1804,1805,1827,1831,1836,1865,1900,1930,1939],{"__ignoreMap":379},[553,1806,1807,1809,1811,1813,1816,1818,1820,1822,1825],{"class":555,"line":556},[553,1808,946],{"class":600},[553,1810,604],{"class":593},[553,1812,608],{"class":607},[553,1814,1815],{"class":611},"homepage test with describe",[553,1817,608],{"class":607},[553,1819,617],{"class":597},[553,1821,1772],{"class":620},[553,1823,1824],{"class":597},"()",[553,1826,920],{"class":597},[553,1828,1829],{"class":555,"line":380},[553,1830,1758],{"class":1170},[553,1832,1833],{"class":555,"line":388},[553,1834,1835],{"class":559},"  // skipped testcase: equivalent to: test.skip(), it.skip(), and xit()\n",[553,1837,1838,1841,1843,1845,1847,1849,1852,1854,1856,1858,1861,1863],{"class":555,"line":573},[553,1839,1840],{"class":593},"  
it",[553,1842,227],{"class":597},[553,1844,601],{"class":600},[553,1846,604],{"class":1170},[553,1848,608],{"class":607},[553,1850,1851],{"class":611},"async testcase",[553,1853,608],{"class":607},[553,1855,617],{"class":597},[553,1857,621],{"class":620},[553,1859,1860],{"class":1100}," browser",[553,1862,627],{"class":620},[553,1864,920],{"class":597},[553,1866,1867,1870,1874,1876,1879,1881,1883,1886,1888,1890,1893,1895,1897],{"class":555,"line":579},[553,1868,1869],{"class":620},"    const",[553,1871,1873],{"class":1872},"sQ79N"," result",[553,1875,1729],{"class":633},[553,1877,1878],{"class":836}," await",[553,1880,1860],{"class":593},[553,1882,227],{"class":597},[553,1884,1885],{"class":600},"getText",[553,1887,604],{"class":1170},[553,1889,608],{"class":607},[553,1891,1892],{"class":611},"#navigation",[553,1894,608],{"class":607},[553,1896,1781],{"class":1170},[553,1898,1899],{"class":597},";\n",[553,1901,1902,1905,1907,1910,1912,1914,1917,1919,1921,1923,1925,1928],{"class":555,"line":584},[553,1903,1904],{"class":593},"    console",[553,1906,227],{"class":597},[553,1908,1909],{"class":600},"log",[553,1911,604],{"class":1170},[553,1913,608],{"class":607},[553,1915,1916],{"class":611},"result",[553,1918,608],{"class":607},[553,1920,617],{"class":597},[553,1922,1873],{"class":593},[553,1924,227],{"class":597},[553,1926,1927],{"class":593},"value",[553,1929,640],{"class":1170},[553,1931,1932,1935,1937],{"class":555,"line":590},[553,1933,1934],{"class":597},"  }",[553,1936,1781],{"class":1170},[553,1938,1899],{"class":597},[553,1940,1941,1943,1945],{"class":555,"line":828},[553,1942,930],{"class":597},[553,1944,1781],{"class":593},[553,1946,1899],{"class":597},[11,1948,1050,1949],{},[245,1950],{"href":1951,"text":1952},"https://nightwatchjs.org/guide/running-tests/skipping-disabling-tests.html","Nightwatch skipping and disabling tests",[43,1954],{},[46,1956,1958],{"id":1957},"common-mistakes-when-disabling-tests-for-known-bugs","Common Mistakes When Disabling 
Tests for Known Bugs",[303,1960,1961,1967,1973,1979],{},[144,1962,1963,1966],{},[147,1964,1965],{},"Don't comment out."," Invisible in reports, won't surface in any tracking system, and goes stale silently as the codebase changes around it.",[144,1968,1969,1972],{},[147,1970,1971],{},"Don't delete."," The coverage is gone permanently. Someone has to rewrite the test from scratch when the bug is fixed, assuming anyone remembers it existed.",[144,1974,1975,1978],{},[147,1976,1977],{},"Don't leave it failing."," A red build everyone ignores is a disabled alarm. When a real regression slips through, nobody notices.",[144,1980,1981,1984,1985,1988],{},[147,1982,1983],{},"Don't skip without a reason."," A bare ",[216,1986,1987],{},"test.skip('user can reset password')"," with no context is almost as bad as commenting the test out. There's no ticket reference, no way to know why it was skipped, and no path back to re-enabling it.
With it, restoring the test becomes a natural last step in fixing the bug rather than something that has to be remembered.",[375,2002],{":items":2003},"[\"/software-testing/test-automation/playwright-accessibility-testing-axe-lighthouse-limitations\",\"/software-testing/test-automation/best-websites-for-practicing-test-automation\"]",[2005,2006,2007],"style",{},"html pre.shiki code .s_gjE, html code.shiki .s_gjE{--shiki-light:#90A4AE;--shiki-light-font-style:italic;--shiki-default:#66707B;--shiki-default-font-style:inherit;--shiki-dark:#BDC4CC;--shiki-dark-font-style:inherit}html pre.shiki code .sZ-rw, html code.shiki .sZ-rw{--shiki-light:#90A4AE;--shiki-default:#0E1116;--shiki-dark:#F0F3F6}html pre.shiki code .sPJuK, html code.shiki .sPJuK{--shiki-light:#39ADB5;--shiki-default:#0E1116;--shiki-dark:#F0F3F6}html pre.shiki code .sb1SK, html code.shiki .sb1SK{--shiki-light:#6182B8;--shiki-default:#622CBC;--shiki-dark:#DBB7FF}html pre.shiki code .sZi47, html code.shiki .sZi47{--shiki-light:#39ADB5;--shiki-default:#032563;--shiki-dark:#ADDCFF}html pre.shiki code .srGNg, html code.shiki .srGNg{--shiki-light:#91B859;--shiki-default:#032563;--shiki-dark:#ADDCFF}html pre.shiki code .stWsX, html code.shiki .stWsX{--shiki-light:#9C3EDA;--shiki-default:#A0111F;--shiki-dark:#FF9492}html pre.shiki code .sE6rD, html code.shiki .sE6rD{--shiki-light:#39ADB5;--shiki-default:#A0111F;--shiki-dark:#FF9492}html .light .shiki span {color: var(--shiki-light);background: var(--shiki-light-bg);font-style: var(--shiki-light-font-style);font-weight: var(--shiki-light-font-weight);text-decoration: var(--shiki-light-text-decoration);}html.light .shiki span {color: var(--shiki-light);background: var(--shiki-light-bg);font-style: var(--shiki-light-font-style);font-weight: var(--shiki-light-font-weight);text-decoration: var(--shiki-light-text-decoration);}html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: 
var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html pre.shiki code .s2xgV, html code.shiki .s2xgV{--shiki-light:#90A4AE;--shiki-light-font-style:italic;--shiki-default:#702C00;--shiki-default-font-style:inherit;--shiki-dark:#FFB757;--shiki-dark-font-style:inherit}html pre.shiki code .sq0XF, html code.shiki .sq0XF{--shiki-light:#E53935;--shiki-default:#0E1116;--shiki-dark:#F0F3F6}html pre.shiki code .sPxkN, html code.shiki .sPxkN{--shiki-light:#39ADB5;--shiki-default:#023B95;--shiki-dark:#91CBFF}html pre.shiki code .sqmHM, html code.shiki .sqmHM{--shiki-light:#E53935;--shiki-default:#032563;--shiki-dark:#ADDCFF}html pre.shiki code .sTqCK, html code.shiki .sTqCK{--shiki-light:#FF5370;--shiki-default:#023B95;--shiki-dark:#91CBFF}html pre.shiki code .sQ79N, html code.shiki .sQ79N{--shiki-light:#90A4AE;--shiki-default:#023B95;--shiki-dark:#91CBFF}html pre.shiki code .sZTni, html code.shiki .sZTni{--shiki-light:#39ADB5;--shiki-light-font-style:italic;--shiki-default:#A0111F;--shiki-default-font-style:inherit;--shiki-dark:#FF9492;--shiki-dark-font-style:inherit}html pre.shiki code .saWzx, html code.shiki 
.saWzx{--shiki-light:#E53935;--shiki-default:#024C1A;--shiki-dark:#72F088}",{"title":379,"searchDepth":380,"depth":380,"links":2009},[2010,2015,2019,2022,2033,2034],{"id":429,"depth":380,"text":430,"children":2011},[2012,2013,2014],{"id":442,"depth":388,"text":443},{"id":478,"depth":388,"text":479},{"id":499,"depth":388,"text":500},{"id":521,"depth":380,"text":522,"children":2016},[2017,2018],{"id":662,"depth":388,"text":663},{"id":702,"depth":388,"text":703},{"id":725,"depth":380,"text":726,"children":2020},[2021],{"id":735,"depth":388,"text":736},{"id":879,"depth":380,"text":880,"children":2023},[2024,2025,2026,2027,2028,2029,2030,2031,2032],{"id":883,"depth":388,"text":884},{"id":1062,"depth":388,"text":1063},{"id":1283,"depth":388,"text":1284},{"id":1376,"depth":388,"text":1376},{"id":1478,"depth":388,"text":1479},{"id":1566,"depth":388,"text":1567},{"id":1611,"depth":388,"text":1612},{"id":1647,"depth":388,"text":1648},{"id":1706,"depth":388,"text":1707},{"id":1957,"depth":380,"text":1958},{"id":1993,"depth":380,"text":1994},"/images/posts/how-to-handle-failing-tests-caused-by-known-bugs/how-to-handle-failing-tests-caused-by-known-bugs-cover.webp","2026-04-16","When a test fails due to a known bug that can't be fixed immediately, commenting it out is the wrong move. 
Here's the right pattern, with skip syntax for every major test framework.",{},"/software-testing/test-automation/how-to-handle-failing-tests-caused-by-known-bugs",{"title":408,"description":2037},"software-testing/test-automation/how-to-handle-failing-tests-caused-by-known-bugs","IJMbyLVkM-296RYnBQXsC4cH_RdGhwDsXMMNJtdQDfs",{"id":2044,"title":2045,"bmcUsername":6,"body":2046,"cover":2405,"date":2406,"description":2407,"draft":397,"extension":398,"features":6,"githubRepo":6,"headline":6,"highlight":6,"icon":6,"meta":2408,"navigation":400,"npmPackage":6,"order":6,"path":2409,"seo":2410,"stem":2411,"__hash__":2412},"content/software-testing/test-automation/ai-in-testing-2026-state-of-the-industry.md","AI in Testing is Here — But Is Your Test Automation Stuck in Someone Else's Platform?",{"type":8,"value":2047,"toc":2394},[2048,2059,2063,2066,2072,2075,2079,2082,2208,2229,2233,2236,2296,2299,2306,2313,2317,2320,2323,2340,2344,2347,2351,2354,2361,2365,2368,2371,2375,2378,2381,2384,2391],[11,2049,2050,2051,2054,2055,2058],{},"Recently I attended the QA Leadership Summit 2026, ",[15,2052,2053],{},"Leading the AI-Native Transformation",", a vendor session hosted by BrowserStack and focused on the future of AI in testing. I went in with healthy skepticism and patience, knowing I'd be a captive audience for commercials for their AI solutions, but hopeful I'd still come away with vendor-agnostic information as well as knowledge of what BrowserStack is up to in that space. One of the valuable pieces of information they shared was their ",[15,2056,2057],{},"State of AI Testing 2026"," report, which they compiled by surveying engineering teams on how AI is actually being used in their testing workflows today and where they feel it is going tomorrow. We'll dig into it here. 
In addition, they demoed their AI testing suite, which seemed cohesive, but one question nobody in the room asked kept nagging at me: what happens to all that AI-generated test automation if you ever leave BrowserStack?",[46,2060,2062],{"id":2061},"the-state-of-ai-testing-in-2026-by-the-numbers","The State of AI Testing in 2026 — By the Numbers",[11,2064,2065],{},"Their survey underscored that AI adoption in testing is no longer on the horizon. It's here. 61% of the companies they surveyed are already using AI in the majority of their testing workflows. 32% are using it for select testing uses. 5% were curious or still in an exploration phase of AI testing adoption, while only 1% responded that they had no plans to use AI for testing at all. That is a rapid uptake from just the prior year.",[2067,2068],"pie-chart",{"labels":2069,"title":2070,"values":2071},"Used on majority of workflows,Select workflows,Considering use,No plans","Surveyed Company AI Usage","61,32,5,1",[11,2073,2074],{},"So, by the numbers, 93% of companies are actively using AI in many or select workflows in 2026. Based on personal experience, I can see why. When paired with a skilled test engineer, AI can multiply productivity significantly, allowing the tester to spend less time on repetitive work and more on creative work. It's a shift similar to going from manual testing to automated testing, or from doing math by hand to using a calculator.",[46,2076,2078],{"id":2077},"where-teams-are-actually-using-ai-today","Where Teams Are Actually Using AI Today",[11,2080,2081],{},"Among the surveyed teams, BrowserStack found AI was most used for repetitive tasks. 
I find this relatable; I often offload repetitive, time-consuming tasks, freeing me up to get more done or to focus on more creative work.",[2083,2084,2085,2098],"table",{},[2086,2087,2088],"thead",{},[2089,2090,2091,2095],"tr",{},[2092,2093,2094],"th",{},"Percent",[2092,2096,2097],{},"Task",[2099,2100,2101,2110,2121,2129,2137,2145,2153,2161,2169,2177,2185,2192,2200],"tbody",{},[2089,2102,2103,2107],{},[2104,2105,2106],"td",{},"62%",[2104,2108,2109],{},"Generating test cases",[2089,2111,2112,2115],{},[2104,2113,2114],{},"58%",[2104,2116,2117,2118],{},"Generating test ",[15,2119,2120],{},"data",[2089,2122,2123,2126],{},[2104,2124,2125],{},"57%",[2104,2127,2128],{},"Maintaining tests",[2089,2130,2131,2134],{},[2104,2132,2133],{},"51%",[2104,2135,2136],{},"Authoring tests",[2089,2138,2139,2142],{},[2104,2140,2141],{},"49%",[2104,2143,2144],{},"Optimizing test suites",[2089,2146,2147,2150],{},[2104,2148,2149],{},"48%",[2104,2151,2152],{},"Optimizing performance tests",[2089,2154,2155,2158],{},[2104,2156,2157],{},"43%",[2104,2159,2160],{},"Predictive analytics",[2089,2162,2163,2166],{},[2104,2164,2165],{},"42%",[2104,2167,2168],{},"Visual testing",[2089,2170,2171,2174],{},[2104,2172,2173],{},"41%",[2104,2175,2176],{},"Finding and predicting defects",[2089,2178,2179,2182],{},[2104,2180,2181],{},"35%",[2104,2183,2184],{},"Accessibility testing",[2089,2186,2187,2189],{},[2104,2188,2181],{},[2104,2190,2191],{},"Determining test priority / scheduling",[2089,2193,2194,2197],{},[2104,2195,2196],{},"32%",[2104,2198,2199],{},"Cross-browser or mobile app testing",[2089,2201,2202,2205],{},[2104,2203,2204],{},"31%",[2104,2206,2207],{},"Analyzing test results and automated reporting",[11,2209,2210,2211,2214,2215,2218,2219,2222,2223,2228],{},"I'm surprised accessibility testing was only 35%. 
I suspect that might be because companies without public-facing websites tend to have less immediate ",[15,2212,2213],{},"legal"," reason to invest in A11y testing, and others tend to deprioritize it relative to other testing until it becomes a must-have requirement for a customer, a differentiator over a competitor, or a legal concern arises. Regardless, it makes your site more usable for everyone ",[15,2216,2217],{},"and"," makes it more testable, so I feel strongly it should be standard practice, especially since there are already so many tools to catch low-hanging issues. That being said, in my own experience, coupling AI with Playwright's ",[216,2220,2221],{},"@axe-core/playwright"," assertions allowed me to build out some pretty amazing AI workflows, similar in spirit to ",[2224,2225,2227],"a",{"href":2226},"/software-testing/frameworks/nightwatch/implementing-a-minimum-accessibility-test-plan","getting started with accessibility testing",", but now driven by AI rather than manual setup. When combined with context from the Deque website and the ADO MCP skill, I was able to create tickets, with screenshots, impact, and remediation information for developers, that were on par with some of the results we received from paid accessibility audits by LevelAccess.",[46,2230,2232],{"id":2231},"what-ai-means-for-the-qa-role","What AI Means for the QA Role",[11,2234,2235],{},"BrowserStack surveyed companies to see how they think AI will change the role of quality assurance engineers. 
The findings are below:",[2083,2237,2238,2247],{},[2086,2239,2240],{},[2089,2241,2242,2244],{},[2092,2243,2094],{},[2092,2245,2246],{},"Evolution",[2099,2248,2249,2257,2265,2273,2280,2288],{},[2089,2250,2251,2254],{},[2104,2252,2253],{},"50%",[2104,2255,2256],{},"Increased need to upskill around AI and AI tooling",[2089,2258,2259,2262],{},[2104,2260,2261],{},"46%",[2104,2263,2264],{},"More focus on test strategy, planning, and oversight versus hands-on test writing",[2089,2266,2267,2270],{},[2104,2268,2269],{},"44%",[2104,2271,2272],{},"Increased collaboration with development and AI/ML teams",[2089,2274,2275,2277],{},[2104,2276,2165],{},[2104,2278,2279],{},"Shift toward data analysis and interpreting AI output",[2089,2281,2282,2285],{},[2104,2283,2284],{},"36%",[2104,2286,2287],{},"Role will become more quality coaching and advisement",[2089,2289,2290,2293],{},[2104,2291,2292],{},"27%",[2104,2294,2295],{},"Less need for testing roles",[11,2297,2298],{},"To me, this points to fewer QA positions overall. When the role shifts toward strategy, coaching, and AI oversight rather than hands-on test writing, you simply need fewer people to do it: one advisor can serve multiple teams in a way that one test author cannot. The positions that remain will be filled by those with strong QA fundamentals, deep familiarity with AI tooling, and the communication skills to operate at that advisory level.",[11,2300,2301,2302,2305],{},"Some interesting trends from before AI became ubiquitous may temper or shape this. Early in my career there was a higher emphasis on testing methodologies and fundamentals. When open-source automated testing tools came along, it seemed like fundamentals dropped off in favor of knowing how to write tests in Selenium, Playwright, Cypress, Postman, etc., but those same people never learned how to find a bug, pick apart a requirement, or write a ",[15,2303,2304],{},"good"," test case; they just wrote tests of dubious quality. 
Now AI can write those tests itself, so skill gaps in the fundamentals will become apparent very quickly.",[11,2307,2308,2309,2312],{},"On a more positive note, the other recent trend, shift-left, has already reduced ",[15,2310,2311],{},"tester"," counts (in general), but the shift-left benefits aren't always actualized, quality suffers, and there are fewer core testers left to do the same or a greater amount of work. If testing is already understaffed, the productivity gains from AI may help make up the deficit by allowing the existing testers to be more productive.",[46,2314,2316],{"id":2315},"what-a-fully-ai-integrated-testing-workflow-looks-like","What a Fully AI-Integrated Testing Workflow Looks Like",[11,2318,2319],{},"During the keynote, they used the opportunity to show the audience their suite of AI tools in their ideal AI-automated testing workflow.",[11,2321,2322],{},"The demo consisted of:",[141,2324,2325,2328,2331,2334,2337],{},[144,2326,2327],{},"A hypothetical bug ticket ingested by BrowserStack",[144,2329,2330],{},"Their AI generated plain-language test cases in their test case management product",[144,2332,2333],{},"Their AI then turned those into automated tests",[144,2335,2336],{},"The tests were run",[144,2338,2339],{},"The results were summarized in their dashboard",[196,2341,2343],{"id":2342},"the-good","The Good",[11,2345,2346],{},"I saw an earlier demo where the AI stopped at plain-text test case generation; you still had to write the automated tests yourself. This demo was cohesive and end-to-end.",[196,2348,2350],{"id":2349},"healthy-skepticism","Healthy Skepticism",[11,2352,2353],{},"This was a highly curated test site, so it's hard to say how this would work in a real application. Further, no repeat runs were conducted, so for all we know it could be unreliable or edited. There were also no speed comparisons of their AI solution versus, for example, a well-written Playwright test. 
Their automation stack was opaque; I never heard whether it was a closed-source proprietary system or whether it used a standard framework to drive the automation.",[11,2355,2356,2357,2360],{},"Last, the report/analysis ",[15,2358,2359],{},"looks"," like it has a lot of useful information, but it isn't a great report in practice. This is the same complaint I have with the Test Analytics product they currently offer. It's hard to explain, but using it every day is just very limiting, sluggish, and hard to pull useful information out of.",[46,2362,2364],{"id":2363},"the-question-nobody-at-the-summit-asked-data-portability","The Question Nobody at the Summit Asked: Data Portability",[11,2366,2367],{},"My biggest concern, which nobody asked about, was test portability. The plain-text test cases could be exported, but if you need to leave BrowserStack, or they exit the market, what happens to your test automation? You lose the investment? Back to manual testing?",[11,2369,2370],{},"This raises all the same concerns I have with proprietary, vendor-locked solutions (lack of extensibility and customization, cost), but worse. In the BrowserStack AI paradigm, it feels like you are renting your tests.",[46,2372,2374],{"id":2373},"my-honest-take","My Honest Take",[11,2376,2377],{},"It was good to see the AI trend data, and I found it useful, but the summit's first 15-20 minutes were front-loaded with general BrowserStack marketing before that information was shared, followed by what was essentially a commercial for their AI testing product.",[11,2379,2380],{},"AI testing isn't coming. It's here. I don't see the value in the curated BrowserStack AI suite at this stage. I prefer the flexibility and extensibility of using Claude, for example, to develop our own agents, combining them with other open-source frameworks to create test cases we own. 
From there we can execute locally or on BrowserStack's TurboScale or Automate infrastructure, with results feeding into their Test Analytics product for historical trend tracking and failure triage.",[11,2382,2383],{},"For less technical companies with simpler web properties that want a walled-garden ecosystem, the BrowserStack suite may be worth evaluating if they are OK with the limitations.",[11,2385,2386,2387,2390],{},"No doubt, we will see a necessary upskilling in our profession, similar to when manual testing was ",[15,2388,2389],{},"replaced"," by automated testing. Developers are using AI to generate code at a pace that needs to be matched by testing. Quality assurance will return to core fundamentals, leveraging AI to write good test cases that ensure solid coverage and freeing QA to spend more time on the creative bugs found by really understanding the business domain, systems, and trends.",[375,2392],{":items":2393},"[\"/software-testing/test-automation/ai-test-automation-pitfalls-vs-user-error\",\"/software-testing/test-automation/automated-api-testing-with-schemathesis\"]",{"title":379,"searchDepth":380,"depth":380,"links":2395},[2396,2397,2398,2399,2403,2404],{"id":2061,"depth":380,"text":2062},{"id":2077,"depth":380,"text":2078},{"id":2231,"depth":380,"text":2232},{"id":2315,"depth":380,"text":2316,"children":2400},[2401,2402],{"id":2342,"depth":388,"text":2343},{"id":2349,"depth":388,"text":2350},{"id":2363,"depth":380,"text":2364},{"id":2373,"depth":380,"text":2374},"/images/posts/ai-in-testing-2026-state-of-the-industry/state-of-ai-testing-2026-cover.webp","2026-03-11","BrowserStack's State of AI Testing 2026 report shows 93% of companies using AI in testing workflows. 
Here's what the data means for QA engineers and vendor lock-in.",{},"/software-testing/test-automation/ai-in-testing-2026-state-of-the-industry",{"title":2045,"description":2407},"software-testing/test-automation/ai-in-testing-2026-state-of-the-industry","-j4sfXyxIAdukWzHKBg-FdYu_Rgufze6qbXZDuXitLA",1778979576143]