Test Coast 2024
Meet the speakers
SESSIONS 2024
Kohsuke Kawaguchi
📝 Bio
Kohsuke is co-founder and co-CEO at Launchable. Famous for creating Jenkins, he is passionate about solving problems developers face every day. As CTO of CloudBees, he worked with Harpreet to create the Jenkins business and grew the team to over 400 people. Before joining CloudBees, Kawaguchi was with Sun Microsystems and Oracle, where he worked on a variety of projects and initiated the open-source work that led to Jenkins. He is an O’Reilly Open Source Award recipient, JavaOne Rockstar, Japan OSS Contributor Award recipient, and a Rakuten Technology Award recipient.
Test Coast 2024 focuses on the craft of testing and on AI and testing, and this year we have also added a new focus area, testing and DevOps; Kohsuke ties these three topics together very nicely. He is a well-respected developer and a popular speaker at industry and Jenkins community events. Kawaguchi’s experience creating Jenkins and his deep understanding of the challenges faced by software development teams everywhere have now led him to take on new challenges: using AI/ML to intelligently select test cases for more efficient execution, monitor test case health over time, and analyze the large amounts of test data and results generated.
🎤 Session: “Future of CI/CD: Testing.next”
The last decade has seen a relentless push to deliver software faster. Automated testing has emerged as one of the most important technologies for scaling software delivery. In this presentation, Kohsuke Kawaguchi, creator of Jenkins and co-founder of Launchable, shares insights into emerging trends and practices in the testing space, including where AI/ML is getting deployed. This session will highlight innovations in testing and future approaches that are emerging for those looking to nurture a continuous quality culture.
Jakub Piasek & Katja Meyer
📝 Bio
A control engineering student with a passion for automation and precision, Jakub began his career in the software industry as a software tester. His initial role allowed him to develop a keen eye for detail and a deep understanding of software quality.
Over time, Jakub recognized the increasing demand for bridging the gap between development and operations. Driven by the desire to streamline the software development process, he transitioned into the dynamic world of DevOps.
As a DevOps engineer of around five years, Jakub now plays a pivotal role in optimizing the software development lifecycle. He facilitates seamless collaboration between development and operations teams, implements automation solutions, and orchestrates the delivery pipeline. Jakub’s background in control engineering enables him to leverage his skills in a new context, ensuring that the software delivery process is both efficient and reliable.
Katja, originally holding a PhD in molecular biology, changed career paths a couple of years ago and started her journey into the testing world. She is currently working as a test consultant and testing team lead at QualityMinds. In addition to digging her way deeper into the realm of test automation, she is also passionate about learning new testing techniques and onboarding new testers.
🎤 Session: “Journey through DevOps and Testing Synergy”
In this presentation, we’ll embark on a journey to explore the synergy between DevOps and testing, shedding light on their intertwined roles in modern software development. Whether you’re new to DevOps or a seasoned professional, we’ll break down key concepts, address the linkage with Agile development, and delve into the benefits, real-world applications, and lessons learned.
Lena "pejgan" nyström
📝 Bio
Lena has been building software since 1999, starting out as a developer. She found her passion in testing a decade later and has focused on quality in software ever since. In the last few years her focus has shifted to building organizations and growing people rather than the software itself, but she is still an active voice, and force, in the testing community.
Her core drive is continuous improvement, and she strongly believes we all should strive to challenge ourselves, our assumptions and the way things are done.
She is the author and creator of “Would Heu-risk it?” (card deck and book), an avid blogger, and an international keynote speaker and workshop facilitator. Oh, and her day job is as an Engineering Manager, where this combination of skills is put to work on her teams and across the engineering department.
🎤 Session: “Delivering Fast and Slow”
Delivering something new, better, and faster than our competition can mean an incredible payoff, and we are constantly being asked to cut costs and deliver more, faster, cheaper. But then suddenly you fall off the other side of the edge and wake up to 189 dead in a plane crash, or to having to take down and redesign your entire banking service because the architecture didn’t hold up to the load. It probably wasn’t your decision to push that to production, but one can imagine that a long chain of people must have made a number of small (or huge) decisions that led up to that result. So, where do we draw the line? Do we let a potential risk slip by even if we know it might cause someone to lose time, money, or even health? What are we, as individuals, responsible for, and how much can we hide behind the chain of command?
We will explore the ethics of software development, focusing on quality in general and testing in particular. We will look at the costs that context switching between solving a problem and finding the gaps in the solution adds to software development, and why testing is so much more than automation and scripts.
We will talk about the relationship between tester and developer, how delivering feedback and embracing critique can strengthen that bond and how the right question in the right room at the right time might save you from drowning in angry customer calls further down the line.
We will delve into a number of interesting bugs and loopholes to discuss what can be learned from them and how to make sure that at the end of the day, we will sleep soundly, knowing we made our choices not because they were easy but because we believe them to be right.
Qunying Song
📝 Bio
Qunying Song is a PhD student in Computer Science at Lund University, Sweden. His research focuses on identifying critical scenarios for testing autonomous driving systems using simulation and optimization techniques.
Qunying received his bachelor’s and master’s degrees in Computer Software Development from Kristianstad University in 2012 and 2013, respectively. Before starting his PhD studies, he worked in the industry as a software developer for six years.
🎤 Session: “Critical scenario identification for testing autonomous driving systems”
Autonomous driving systems have to be tested thoroughly and rigorously to validate their functionalities and safety, particularly in hazardous situations known as critical scenarios. However, identifying these critical scenarios for testing remains a significant challenge. In this presentation, I will introduce an integrated toolchain that we have developed for identifying critical scenarios and its application for testing realistic autonomous driving systems in collaboration with Volvo Cars. Additionally, I will present contemporary practices in using critical scenarios for testing autonomous driving systems, synthesized from interviews with 13 domain experts from 7 autonomous driving companies in Sweden. Current industrial practices suggest that critical scenario identification, as an essential aspect of testing autonomous driving systems, is still in its early stages and requires improvement. Progress in this area relies on combining available approaches and fostering increased collaboration among various stakeholders from both industry and academia.
Jonathon Wright
📝 Bio
Jonathon Wright is a strategic thought leader and distinguished technology evangelist. He specializes in emerging technologies, innovation, and automation, and has more than 25 years of international commercial experience within global organizations. Jonathon combines his extensive practical experience and leadership with insights into real-world adoption of Cognitive Engineering (Generative AI). In his spare time he is a member of the Harvard Business Council and the A.I. Alliance for the European Commission, and chairs the review committee for ISO-IEC 29119 part 8, “Model-Based Testing”, and part 11, “Testing of A.I. based systems”, for the British Computer Society (BCS SIGiST). Jonathon also hosts The QA Lead (based in Canada) and is the author of several award-winning books (2010–2022), the latest with Rex Black on ‘AI for Testing’.
🎤 Session: “AI-Augmented Testing: How Generative AI and Prompt Engineering Turn Testers into Superheroes, Not Replace Them”
Envision a testing realm infused with AI-driven superpowers, where a reliable AI companion revolutionizes the way we approach testing. This companion doesn’t just assist; it transforms the adventure of creating test cases that span the full spectrum of possibilities and of pinpointing bugs in the most unexpected places. This is the essence of AI-augmented testing, a paradigm where AI elevates testers to maestros, orchestrating a symphony of enhanced augmented test intelligence. By turbocharging the testing process to be quicker, smarter, and more efficient, this approach doesn’t merely impart insights but equips you with actionable strategies to bring these innovative concepts to life, making your work not only more productive but also genuinely enjoyable and engaging.
But this session goes beyond merely discussing the technological underpinnings (e.g., GNN, RAG, RGA, NLU). It’s a rallying cry for test professionals everywhere to embrace an era where their skills are amplified to superhero proportions. Through a blend of real-world anecdotes and hands-on demonstrations, participants will gain firsthand experience in leveraging generative AI for crafting exhaustive manual test cases, automating intricate testing scenarios, and achieving bug detection with unprecedented accuracy. We’ll delve deep into the art of prompt engineering and fine-tuning, demonstrating how to design precise prompts that guide Generative AI in executing highly specialized testing tasks, thus opening new horizons in test engineering prowess.
Get ready to be motivated, to absorb knowledge, and to witness the future of testing—a future where you’re not merely adapting but thriving, propelled by the avant-garde wave of AI-Augmented Testing. This session is your gateway to joining an elite community at the forefront of the GAI revolution, setting new standards for what it means to be a tester in the digital age. Embark on this voyage to the cutting edge of testing and seize the opportunity to define the next era of test assurance!
WORKSHOPS 2024
Jonathon Wright
📝 Bio
Jonathon Wright is a strategic thought leader and distinguished technology evangelist. He specializes in emerging technologies, innovation, and automation, and has more than 25 years of international commercial experience within global organizations. Jonathon combines his extensive practical experience and leadership with insights into real-world adoption of Cognitive Engineering (Generative AI). In his spare time he is a member of the Harvard Business Council and the A.I. Alliance for the European Commission, and chairs the review committee for ISO-IEC 29119 part 8, “Model-Based Testing”, and part 11, “Testing of A.I. based systems”, for the British Computer Society (BCS SIGiST). Jonathon also hosts The QA Lead (based in Canada) and is the author of several award-winning books (2010–2022), the latest with Rex Black on ‘AI for Testing’.
🛠️ Workshop: “AI-Augmented Testing: A Hands-On Workshop on Generative AI, Prompt Engineering Tuning, and Beyond RAG”
This immersive workshop is designed for QA professionals, testers, and developers who are eager to leverage the cutting-edge capabilities of AI to enhance their testing strategies, automate processes, and uncover bugs with unparalleled accuracy. By blending theoretical knowledge with practical exercises, participants will gain firsthand experience in harnessing the power of AI-Augmented Testing to transform their approach to testing, making it more efficient, effective, and innovative.
Throughout the workshop, attendees will explore the core concepts of generative AI and its application in testing, including how to generate comprehensive test cases that cover every conceivable scenario and how to use AI to identify bugs in the most unexpected places. The workshop will also delve into the nuanced art of prompt engineering tuning, teaching participants how to craft effective prompts that guide AI in performing highly specialized testing tasks. Through a series of hands-on exercises, interactive sessions, and live demonstrations, attendees will learn how to integrate these AI-powered tools and techniques into their daily workflows, elevating the quality of their testing and the products they help to create.
Robert Hennersten-Manley & Pierre Alenbrink
📝 Bio
Rob is an experienced tester with nearly two decades in roles varying from Test Analyst to Programme Test Manager. Rob’s career started in the UK Finance Sector, but since moving to Sweden in 2022 he has been working in the Railway Industry. Rob is passionate and enthusiastic about the end customer; having spent time in Customer Services getting to know the real users has shaped his outlook on testing. His experience across a range of roles gives him a multidimensional perspective: he sees the big picture, from intricate technical details to overarching project goals.
Pierre is a passionate and committed tester with over ten years’ experience in many different roles, such as test engineer, test coordinator, and test strategist. The majority of his career has been spent testing safety-critical systems within the Railway Industry, along with a few assignments in the MedTech business. Pierre is a context-driven type of tester who usually finds himself asking questions like “who?”, “why?”, and “what?” a lot. That is probably why you often find him testing the requirements at an early stage, trying to identify the missing, unclear, or vague parts.
🛠️ Workshop: “How do I test this? – Dealing with confusing requirements”
“They want to build what?”, “These requirements make no sense!”, “We can’t test this!”
We often encounter requirements that are unclear, confusing, or even contradictory. Through the exploration of an experimental new product, this workshop helps you develop your skills for dealing with such challenges. The teams will tackle a common task and share their approaches. The intended outcome is for participants to:
• Build confidence and learn new methods for dealing with challenging situations.
• Understand when testing can start adding value.
• Learn insightful questions to ask at the start of a project.
• Broaden perspectives by learning how others approach the same task.
• Strengthen your sense of empowerment as a tester.
Rich Jordan & George Blundell
📝 Bio
Rich Jordan has over 20 years’ experience in testing, mainly within complex financial services environments. Coming from a more technical testing background, he has been heavily involved in many transformation initiatives, focusing on automation, performance, test data, and AI. Rich’s experience ranges from setting up teams and frameworks to crafting enterprise strategies for making test teams DevOps-ready. Rich’s team leadership has been recognised with five industry awards in a single year, as his teams have won in both testing and DevOps categories. Rich has learned many lessons along the way about what does and does not work, and has shared these insights globally at testing and DevOps conferences.
George Blundell is a Solutions Engineer at Curiosity, where he collaborates closely with organizations on high-priority test automation and requirements engineering initiatives. George implements and supports model-based, AI-assisted approaches to generating, optimizing, and maintaining rigorous automated tests and accurate requirements. His work supports close collaboration between stakeholders from across the development lifecycle, emphasising test coverage, automation, and collaboration between humans “in the loop”.
🛠️ Workshop: “From Zero to Hero in Model-Based Testing”
This practice-based workshop will take you from understanding the value and applications of model-based testing, to auto-generating coverage-optimised tests for different system types. Join us to discover why model-based testing is the test design technique of choice for fast-changing, complex systems, and equip your CV with skills to test tomorrow’s vastly complicated systems. You will learn how to use collaborative, visual diagramming to remove requirements defects before they are written into code, understand how mathematical coverage techniques create the smallest set of test cases needed to “cover” fast-changing requirements, and discover how updating central models automatically refactors automation test scripts and data as systems and requirements change. Through case studies and hands-on exercises, you will learn to apply MBT to UIs and APIs, gaining practical experience to realise the value of model-based testing at your organisation.
Test Coast 2025 – Call for papers
Interested in becoming a speaker at our next conference? Send us your information and we will get back to you soon. We look forward to receiving your submission for talks, workshops, and hands-on sessions.