LOL at an observation from yesterday's #TestingDozen: "We just used an hour in Jira documenting testing *without doing any testing*."
Well, a decent sample of how the kind of "testing" I dislike gets done: without a care for where the time goes or whether it actually drives the testing forward.
Session 20 of #TestingDozen done, and one more session to go. Today we talked about Jira and Jira Xray.
A few of the group have found jobs, and a few are still looking - I'm happy to make connections for the people I have spent my Wednesday evenings with, both the aspiring testers and the programmers in the group.
All the things we've worked our way through are summed up on our pages: https://testing-dozen.github.io
My 6 months of weekly sessions with #TestingDozen are approaching their end, and I am supposed to make decisions on what comes next.
Would I run a second #TestingDozen, this time mentoring people who already work in testing but lack mentorship that stretches their skills?
Would I run just standalone sessions on various topics instead of a focused program?
Would I stop this form of socialising and write more?
Would I work more and stop this volunteering?
Would I just stop figuring out how we grow testing?
If you look at the time you spend on learning, what is the distribution between time spent struggling and time spent making progress? I have, with #TestingDozen, become an increasingly big believer in the idea that struggle is better when shared, and that particularly for beginners some of the things we struggle with aren't worth the struggle, because a guide (another person) could significantly lower it.
#TestingDozen today was on CI pipelines, and even after 18 sessions there is still more content to teach. I have come to appreciate the group for allowing me to test my group mentoring and activity-based learning approaches on them, and they make me proud - some by showing clear growth throughout the program, and some by getting employed as testers already during the program. There are 12 of them, so in case you know of beginner testing positions, I would be happy to introduce these folks.
#TestingDozen session 13 and we named features, because seeing it precedes testing it. There was so much more depth in the testing when the test target was not target-rich (read: buggy), and I am of the opinion that better software makes a better teaching instrument, while worse software makes a better motivational instrument for #testing.
We also took a tally of the skills we have been learning. My tally has no mention of the #python libraries we have been using along the way, because tools are secondary.
#Python #testing #testingdozen
I will, however, again be away from "invoiceable work". There's #FutureCreators23, #7NFreelancerNetworking, #ScanAgile, #CraftConf and #EuroSTAR and my own #TestingDozen on my list until June.
Learning a lot while speaking and teaching is not time away from the project work. Managers could do better at enabling this.
#testingdozen #eurostar #craftconf #scanagile #7nfreelancernetworking #futurecreators23
While I recognise I have teaching value with the #TestingDozen, I am also realizing I have value in holding space for pairing. There was so much positive energy yesterday in people getting to pair with people that I think we just need more of that. The #TestingTour idea of initiating pairing through social media connections is a much harder route. Thoughts on scaling as a goal.
Also happy #TestingDozen news: one of the twelve has a 2nd-round interview for a testing position tomorrow. We are all rooting for our colleague and wishing she lands the job, to get space to practice all the skills she has been tuning in the last months.
Post-session #TestingDozen "fix my environment", and a lovely sample of Windows going wild with forked processes. No wonder there is no CPU or memory left when there are tens of browsers of each type, tens of Discords and tens of Dockers. What caused that - no clue yet. But it clearly blocks running the test target when you have no spare resources. :D
#TestingDozen session 11 was on REST POST and PUT with authentication in pytest fixtures, and on reading Java tests to collect ideas of what to test in Python tests. First steps towards polyglot taken.
https://testing-dozen.github.io
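For a flavour of the session's pattern, here is a minimal sketch: an authenticated session in a pytest fixture, reused by POST and PUT tests. The URL, credentials, endpoints and payloads are made-up placeholders, not our actual test target.

```python
import pytest
import requests

BASE_URL = "http://localhost:8080/api"  # hypothetical test target


@pytest.fixture
def api_session():
    # One authenticated session, shared by the tests that request this fixture.
    session = requests.Session()
    session.auth = ("demo-user", "demo-password")  # placeholder credentials
    yield session
    session.close()


def test_post_creates_item(api_session):
    response = api_session.post(f"{BASE_URL}/items", json={"name": "first item"})
    assert response.status_code == 201


def test_put_updates_item(api_session):
    created = api_session.post(f"{BASE_URL}/items", json={"name": "to be updated"})
    item_id = created.json()["id"]
    response = api_session.put(f"{BASE_URL}/items/{item_id}", json={"name": "updated"})
    assert response.status_code == 200
```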
Next up is:
- adding coverage
- reproducing, with API automation, bugs we originally found without automation
- structure, structure, structure
- contributing to multiple people shared test repo
And we probably need to make our tests fail more gracefully when no system is available.
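One way to do that, as a sketch (the URL is again a placeholder): a session-scoped pytest fixture that probes the test target once and skips the tests with a readable message when it is down, instead of letting every test fail with a connection error stack trace.

```python
import pytest
import requests

BASE_URL = "http://localhost:8080/api"  # placeholder for the test target


@pytest.fixture(scope="session", autouse=True)
def require_system_under_test():
    # Probe the target once per test session; skip with a clear message if it is down.
    try:
        requests.get(BASE_URL, timeout=2)
    except requests.exceptions.ConnectionError:
        pytest.skip(f"Test target not responding at {BASE_URL} - start the system first.")
```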
Also two pieces of happy news with #TestingDozen.
One person in the group landed an interview. Already a step forward, and I am making every superstitious good-luck gesture I can. I would be happy to speak for the group's motivation and progress, having watched them do this for 10 weeks.
Another person in the group, who is already working, is now running a junior support group in their organization.
We are all better together.
#TestingDozen completed the Roman Numerals test target. In the process of doing so, I learned that 4 => IIII for clocks, 5 => IIIII and 50 => XXXXX for tombstones, and 4000 => M(V) because there is no real reason to cap it at 3999. The asserts, approvals and properties are now available with the exercise sample project. https://github.com/exploratory-testing-academy/do-a-thing-and-call-it-foo-solution
Creative Commons Zero license for exercises is intentional.
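For a flavour of the "properties" style, a minimal property-based sketch with hypothesis; the roman module and the to_roman/from_roman names are assumptions for illustration, not the exact names in the solution repo.

```python
from hypothesis import given, strategies as st

# Hypothetical module and function names standing in for the exercise solution.
from roman import from_roman, to_roman


@given(st.integers(min_value=1, max_value=3999))
def test_roundtrip(number):
    # Converting to roman numerals and back should give the original number.
    assert from_roman(to_roman(number)) == number


@given(st.integers(min_value=1, max_value=3999))
def test_only_roman_characters_used(number):
    assert set(to_roman(number)) <= set("IVXLCDM")
```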
#TestingDozen teaches ME roman numerals on MY exercise. 😂
I did not know that in the domain of *high-end watches* 4 => IIII instead of IV. Now I know.
Testers are emerging, great work.
#TestingDozen worked today on basic tools for exploring APIs, starting from a GET with a 200 response and looking at the contents (and headers), discussing what stays the same and what changes between calls. Introducing this in Python as a note-taking tool is just as straightforward as introducing it in Postman or something else with a UI.
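As a rough sketch of that Python-as-notebook style of exploring, assuming a local test target at a placeholder URL:

```python
import requests

# Exploratory note taking: one GET, then read what the response tells us.
response = requests.get("http://localhost:8080/api/items")  # placeholder target

print(response.status_code)                   # expecting 200
print(response.headers.get("Content-Type"))   # what format did we get back?
print(response.json())                        # the body itself

# Call it again and compare: what stays the same, what changes between calls?
second = requests.get("http://localhost:8080/api/items")
print(response.json() == second.json())
```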
#TestingDozen is 2 hours weekly for 6 months, which is the equivalent of 50 hours of training, or 8.5 training days. Training design that increases participant success with the time invested is a puzzle. I've chosen for us to learn hands-on, and to pull in concepts as we experience them.
Ensembling and pairing with everyone in the #TestingDozen has given me a sense of the trainees' levels of digital literacy. Having read an article suggesting that the *same level of digital literacy* in a learning group may be critical to the group's success, I find this fascinating to observe some more.
While teaching #TestingDozen and trying to guide them towards discovering functionality, they ended up overwhelmed with bugs. The whole 'testing is not about hunting bugs' idea might be a reminder that we need to work on the skills to appreciate features, so we can see where the system could even be relevantly buggy. See it work before seeing it fail.
Telling my #TestingDozen that their results with their second-ever application show "good oracle skills" is a great compliment to their work.