TestBash Brighton 2024
Organized by:

Ministry of Testing
> 100k followersVenue:
Brighton Dome
Dates:
Thu, 12 Sep 2024
Fri, 13 Sep 2024
We’re shaking things up and bringing TestBash back to Brighton on September 12th and 13th, 2024.
Not only that, we’re seeing that AI is changing the way we think and approach our testing.
We will have one day dedicated to all things AI and Testing.
- TestBash Day 1: Software Testing + AI
- TestBash Day 2: Other hot software testing topics
TestBash Brighton will have the extra special homecoming vibes, taking us back to where our testing magic flourished. We’re using it as an opportunity to refresh how we approach our conferences.
Over the years conferences have intensified and focused efforts on speakers as the main contributors. We’re changing this and recognising that it takes many people to make a conference experience feel worthwhile and one to remember.
We no longer solely have a call to speak, we have a call to contribute. There are many ways to contribute to TestBash and we massively appreciate them all.
_______________________
Before, during and after TestBash, there will be a lot of opportunities to network and socialise with the community.
What's happening:
Pre-TestBash Meetup
- When: Wednesday, the 11th of September, from 6 PM to 9 PM
- Where: Bison Beach Bar - Address: 300 Madeira Dr, Brighton BN2 1BX (Google Maps link)
- What: This event is kindly sponsored by our fantastic sponsors Keysight.
- Important: Registrations are free and limited to 130, so make sure you REGISTER HERE as soon as possible before they all go! đïž
TestBash Social
- When: Thursday, the 12th of September, from 7 PM onwards
- Where: In Brighton, venue soon to be confirmed
- What: At this social event, there will be plenty of networking opportunities, games, music and more! More details coming soon.
Post-TestBash Social
- When: Friday, the 13th of September, from 6:30 PM onwards
- Where: In Brighton, venue soon to be confirmed
- What: It's a Friday after all! Make sure to book your travel back home for Saturday and join other TestBashers in a post-TestBash social. Share what you've learned, your key takeaways, have a laugh and say your farewells or until the next one!
-
09:50
How Generative AI Works (a Very Rough Guide)
with Jarsto van Santen
Let's build a language model together! Think it's impossible in the timeframe? Well, it's not going to be a general purpose Large Language Model. In fact, it's going to be a highly limited, incredibly Tiny Language Model. It's not about what the Language Model can do in this case, it's about what building it will show us.
I believe that to make truly effective use of a tool we need to have a rough understanding of how it works. But for many testers and automators LLMs seem to be a black box: we don't know how they work, so we're not sure when to use them or not to use them.Making a TLM based on audience input is just one of the ways in which this talk aims to give a rough guide to how Generative AI works. So that we don't end up doing the equivalent of trying to push a nail into a wall with a screwdriver, or using a chainsaw instead of sandpaper.
What youâll learn
- How to effectively use generative AI
- Why generative AI struggles with some tasks
- What we can and can't fix with better prompting
-
10:40
Quality Statements for LLMs: The Good, The Bad and The Ugly
with Bastian Knerr and Dr. Niels Heller
AI as a buzzword is everywhere. It will steal our jobs, make us all obsolete and in the end: It will rule the world. We've been experiencing a shift in paradigms for two years and, most prominently, Large Language Models like LLaMA, ChatGPT or BARD are re-shaping industries and our everyday lives.
Using a Co-Pilot for Coding or Testing is seen as enhancing production and lowering barriers to entry.
But now that the uses of these LLMs are increasing rapidly:- Who is testing them?
- And what actually is Quality in the age of AI?
In this talk, I want to provide results from my experience in projects of testing Large Language Models and regressive AI. I will explain the high-level function of a Large Language Model.I will translate the components of a Copilot onto a newly thought testing pyramid from the component level to the system level. Now that we have a sort of framework to test LLMs, I will outline the metrics used and why testers will still be needed in the age of AI - maybe even more than ever.
What youâll learn
- Learn how a Large Language Model works on a high level and possible pitfalls for testing
- Discover a high-level standardized approach to testing Large Language Models
- Understand a new testing pyramid: What's the component level in LLM systems?
- What is quality in the age of AI? What metrics can we use - and how contextual are they?
- Understand the importance of a tester's perspective and why testers will still be important going forward
-
11:55
Enhancing Test Automation with Playwright and AI: A Journey of Innovation
with Christine Pinto
In an era where test automation and AI are reshaping the software development landscape, my talk, "Enhancing Test Automation with Playwright and AI: A Journey of Innovation," offers a dive into my personal evolution within this dynamic space. As an Automation Engineer with over 15 years in the tech industry, I've witnessed first-hand the transformative power of cutting-edge technologies. My recent foray into using Playwright, coupled with AI, marked a pivotal shift in my approach to test automation, programming language transitions, and debugging methodologies.
This presentation will chart the course of my journey from initial experimentation with Playwright for Chrome extension testing to the strategic integration of AI tools like ChatGPT. I'll share how these technologies revolutionized my workflow, from effortlessly transitioning code snippets between JavaScript and TypeScript to generating robust test case scenarios and navigating complex debugging scenarios with unprecedented ease. The focus will not only be on the technicalities but also on the practical implementation and the tangible benefits observed in real-world projects.
Beyond personal insights, the session aims to equip participants with actionable strategies and innovative approaches to leverage Playwright and AI in their test automation endeavours. By dissecting challenges, solutions, and learnings, the talk will inspire attendees to explore new horizons in automation, enhancing their skill set and propelling their projects forward in the rapidly evolving digital landscape.
What youâll learn
- Gain insights into the transition process from traditional automation frameworks to Playwright, including the challenges and benefits encountered
- Learn how AI tools, particularly ChatGPT, can be utilized for enhancing test case generation, facilitating language transitions, and streamlining debugging processes in test automation
- Discover actionable strategies for integrating Playwright and AI into your testing workflows, enhancing efficiency, and effectiveness based on real-world project examples
- Understand the role of AI in easing the transition between programming languages within the context of test automation, making your scripts more robust and maintainable
- Explore innovative approaches to debugging test scripts with the aid of AI, reducing time and effort in identifying and resolving issues
- Be inspired to stay adaptive and forward-thinking in your approach to test automation, embracing new tools and methodologies to stay competitive and innovative
-
12:45
PâAIâr With Peers: Optimise Your pAIr Testing Game
with Ashutosh Mishra
Have you ever been intimidated by any of the below - joining a new team or organization; working on a totally different Technology Stack; or working in an organization where testers are working in silos oblivious of the bigger picture impacting overall quality.
Well, the answer to all these can be in the practice of pair testing and doing it right when practising it with peers.
When I moved to a new continent, to join new teammates working in completely different Technologies - the introduction of pairing made my life easier. So, how did I do it?
Introduced the pairing practice to the culture of the Quality team.This helped me to:
- understand Tech stack, work culture, product and processes
- grasp the length and breadth of the Tech stack by daring to pair with everyone: Engineering Manager, Product Owner, Developers and of course Testers in other teams
But, naturally, there was resistance initially.
In this talk, I share ways of introducing pair testing to teams and organizations. For example, start with scheduling a Pair testing session with peers at least once a week, which can be open for anyone to join in the organization. In my experience, it has been so impactful that it gets people curious to join initially and eventually makes them comfortable with the idea.
Everything sounds easier said than done until now; what if your peers have tight deadlines and developers are busy? In that case, let's turn to AI tools. These can be a game changer in pAIring and assisting. In my talk I will also discuss how these tools can be leveraged and speed up testing in multiple ways.
Join me in this talk as I share my experiences of pair testing with my peers as well as AI. And let's delve into the idea if the future of pair testing is only with AI.What youâll learn
- The different ways in which Pair testing can be included in your Test Strategy
- The knowledge of the tools which can be good pair testing companions with AI
- The optimal ways of Pair testing with Testers, Developers, peers in the team and AI
-
14:30
Responsible AI: Opportunities and Challenges for Testers
with Bill Matthews
As the use of AI in decision-making systems skyrockets, companies are under mounting pressure to ensure their AI deployments uphold ethical standards and inspire trust. Recent moves by many countries to introduce regulation targeting the use of AI is adding further pressure and the need for companies to show compliance.
Responsible AI is an emerging framework of principles and practices aimed at fostering the development of ethical, human-centered AI systems that prioritize robustness, reliability and trustworthiness.
This talk unpacks the essence of Responsible AI, shedding light on its core practices and highlighting the pivotal role testers can play building responsible and trustworthy AI systems.
While this is a great opportunity for testers it is not without its challenges and we will dive the obstacles and mindset shifts testers must navigate to collaborate in building Responsible AI systems.
We will end this talk with a recommended roadmap for testers who want to contribute to responsible AI systems.
What youâll learn
- Appreciate what Responsible AI is and isnât
- Understand the role of testing and testers with Responsible AI
- How to develop your own roadmap to get involved
-
15:20
Beyond Creation: Leveraging AI in Test Automation to Solve the Right Problems
with Titus Fortner
Successful test automation implementations are notoriously difficult, and likely even the majority of teams are not getting the minimum necessary value from their efforts. Companies are increasingly turning to Artificial Intelligence (AI) as the answer, but so far the focus has been on making it easier to create the tests themselves. This can be useful, but it does not address the actual bottleneck: the requirement to maintain accurate test results over time. We need to focus on how AI can help testers rather than replace them.
In this talk, Titus will discuss the limitations of AI in its current form, and highlight data from multiple studies and surveys relating to how developers are actively using Large Language Models (LLMs) to identify their strengths and weaknesses. It is commonly known that LLMs hallucinate, so, similar to how testers are responsible for verifying the quality of the application they are testing, testers also need to verify the quality of the AI output in their workflows.
This talk will use the Selenium repository as an example to show how ChatGPT was used to automate the complicated Selenium build and release process. It will show how LLM tooling can provide value for both code generation and code management
The bottom line â to harness the full potential of AI in test automation, we must shift our focus from generating tests to empowering testers. By doing so, we can address the real problems facing test automation today, ensuring more sustainable and effective outcomes.
ÂWhat youâll learn
- Understanding the limitations of current AI implementations in test automation and the need to shift focus towards empowering testers
- Exploring the role of Large Language Models (LLMs) in identifying strengths and weaknesses in test automation processes
- Learning how AI, particularly LLMs, can be leveraged for both code generation and code management in test automation workflows, using the Selenium repository as a case study
-
16:35
The Essential Human Element in AI-Integrated Quality Assurance
with Natalia Petrovskaia
In the dynamic landscape of software testing, AI and automation have become increasingly prominent. However, this presentation delves into the critical issue of the industry's growing over-dependence on these technologies. It brings into focus the indispensable value of human testers, shedding light on their crucial role in deciphering complex systems and enhancing user experiences. Human testers' ability to pinpoint subtle, context-specific bugs and their comprehensive approach to ensuring software quality are often beyond the reach of AI.
Throughout the talk, we will unravel this theme by showcasing real-world case studies. These examples will vividly illustrate the limitations of AI in testing scenarios, particularly where intricate understanding and adaptability are required. The discussion will emphasize how AI, while beneficial in managing repetitive and straightforward tasks, lacks the nuanced judgment and creative problem-solving that human testers inherently possess.
By advocating for a balanced approach, the talk will argue for the integration of AI tools to augment efficiency but not at the expense of human expertise. The goal is to demonstrate that AI computational power is not going to replace human insights and a wise approach is needed to uphold the highest standards in software testing. This perspective aims to guide industry professionals in remembering that setting right tasks for AI is still human work.What youâll learn
- Understand AI's limitations in software testing, emphasizing the irreplaceable role of human intuition and expertise
- Learn the importance of human-led testing in identifying complex, nuanced issues that AI may overlook
- Discover strategies for balancing AI tools with human testing to maximize efficiency and effectiveness
-
17:25
Panel Discussion
More details about this Panel Discussion will be coming soon.
-
18:15
99-Second Talks
It's not a TestBash without 99 Second Talks!
The 99 Second Talks is the attendee's stage, an opportunity for you to come on stage and talk for, that's right, 99 seconds.
You can talk about anything, a testing topic you want to share, a personal experience, or an idea sparked by all the amazing talks, workshops, activities and conversations you've had for the past two days... the stage is yours, for 99 seconds!
This is also a great opportunity for you to kick-start your public speaking experience and/or give it a boost!
Our host will introduce you on stage and start the clock. As soon as the time's up, a noise will be heard and that's it: time's up!
What youâll learn
- Contribute with your knowledge
- Share your testing stories
- Practise public speaking
- Learn directly from your peers
-
09:50
Testing and Evaluating LLM Applications
with Bill Matthews
The introduction of Large Language Models (LLMs) such as Chat-GPT continues to have a significant impact in many domains and enables companies to build applications that use LLMs and are part of the core decision-making process.
In this practical workshop, we will dive into the challenging world of testing and evaluating such LLM Apps.
The session will start with a short introduction to what LLM Applications are and how they are built before diving into a risk-based approach to testing and evaluating LLMs through a set of hands-on exercises and discussions.
- General and Domain/Task specific Risk Identification for LLM Apps
- The relationship between Evaluation and Testing<
- Practical approaches and methods for Evaluating and Testing LLMs
- Building test approaches for LLMs
By the end of the session, you will have a framework for thinking about and building test approaches for LLM Applications.
The session will include a demo LLM application to explore so access to an internet-connected device is recommended but not essential.
Beyond that, just bring your curiosity and critical thinking skills.
What youâll learn
- Understand the different types of technical risks that impact LLM applications
- Learn how to design effective evaluations for LLM applications
- Experience building a Risk-Based test approaches for LLMs
-
11:35
Use Generative AI To Collate and Visualise Multiple Sources
with Katy Bradshaw and Scott Hackeson
This workshop will look to work through different scenarios, gathering text and images from websites and collating them into one source. An example test case will exist and with the use of GitHub Co-pilot, you will learn how to aid the creation of additional scenarios.
Alongside your preferred IDE, GitHub CoPilot and Microsoft Fabric (Processing, masking sense and augmenting generative AI), there will be two different technology options to choose from on the day:
Option A- OpenAI / Azure ML studio /Azure tools
- https://learn.microsoft.com/en-us/azure/ai-services/openai/dall-e-quickstart?tabs=dalle3%2Ccommand-line&pivots=programming-language-studio
Option B- Google Cloud Platform tools
- https://cloud.google.com/vertex-ai/docs/generative-ai/image/generate-images
The workshop will be split into 3 parts:- Scraping and collating the data
- Creating prompts to produce images based on the collated data
- Testing the implementation and increasing coverage
What youâll learn
- Be familiar with Azure/GCP terms
- Understand how existing data can be transformed into meaningful content
- Be more confident in working with AI and cloud data
- Ability to determine how the solution(s) could be generated in a repeatable mannor
- Understand of how to provide test coverage in the AI space
-
14:05
AI As Your Pythonic Testing Assistant
with Michal Pilarski and Mateusz Adamczak
Modern software testing demands creativity. In this context, fusing the power of Artificial Intelligence (AI) with Python versatility could really improve software quality. Automated software testing, just like another form of writing programs, very often still is just typing text into an IDE. Recently, with the development of LLMs (Large Language Models), new AI tools arrived which could help us with the process. Integrating OpenAI ChatGPT or Google Bard (Gemini) with Python (PyTest) would inspire testers and finally result in added value in the testing process.
Together with workshop participants, teachers would like to go through:- Static testing of software requirements in PyTest using ChatGPT and Bard (Gemini)
- Dynamic unit testing of simple Python function in PyTest using ChatGPT and Bard (Gemini)
- Simple functional testing of web application in PyTest using ChatGPT and Bard (Gemini)
Letâs evaluate if AI is really an effective bug hunter. Does it accelerate the testing process or introduce an element of creativity, which improves conventional methods? Or maybe it is not a revolution and just only another testing tool? The workshop will provide the answers.
What youâll learn
- Gather knowledge of testing automation in Python (PyTest)
- Getting familiar with AI plugins in Python IDLE (Code Editor)
- Recognise pros and cons of ChatGPT and Bard (Gemini) in testing process
-
16:35
Security with Generative AI
with Jack Harris and Ryan Lobo
Generative AI is transforming the way we interact with technology, but with great power comes great responsibility - ensuring the security and robustness of your generative AI applications is paramount.Â
Join Jack and Ryan from Google in this interactive workshop where they'll be exploring the critical security threats to your generative AI-powered solutions, as well as the defence mechanisms to use to mitigate those threats. They'll be walking through prompt injection attacks, sensitive information disclosure, insecure output handling and excessive agency, and touching on risks such as training data poisoning and model theft.
Get hands-on with some of the latest technology from Google, but no prior experience in software engineering, cloud computing, security or generative AI is required as we'll primarily be using natural language to compromise some running applications. Â
What youâll learn
- Identify the threats to your generative AI application
- Experience practical examples of how to test for vulnerabilities in generative AI applications
- Understand defence mechanisms and mitigations to those threats and vulnerabilities
-
11:25
RiskStorming
with Beren Van Daele
The RiskStorming session format is a wonderful way of generating a visible Test Strategy as a team that focuses your strategy to answer the following questions:
-
What is important to our product?
-
What risks could impact these important aspects?
-
What can we do, as a team, to make sure they don't happen?
Instructions:
-
Understand the product under test well enough
-
Take the 25 Quality Aspect TestsSphere cards and pick the 6 most important ones within 10 minutes
-
Take sticky notes and add 2-3 risks per chosen TestSphere card within 10 minutes
-
Take the other TestSphere cards and match risk-mitigating activities for each Risk within 10 minutes
Wrap-up:
By the end of the activity, youâll have a much better understanding of what quality actually means for your product or epic as well as what could potentially harm it. Not just you, but everyone on the team understands how they can prepare, defend and explore against risks which could severely impact the project. Quality becomes clear to everyone and everyoneâs responsibility.
Resources:
-
-
11:25
API Collaboration and Testing with Postman
with Danny Dainton
In this session, you will learn the different ways you could collaborate on an API in Postman with your team or users. Weâll walk you through a suite of features that improve team productivity, reduce onboarding time, and make your API more discoverable and easy to collaborate on. Weâll also cover API testing in Postman, demonstrating how you can build a robust test suite for your APIs. You'll author some post-response scripts, automate tests, and dynamically control workflows using the Collection Runner.
Instructions:
-
Login to your Postman account at go.postman.co
-
Go to the Test Bash workspace at go.pstmn.io/testbash-2024
-
Fork the âAPI Collaboration and Testingâ collection to your Workspace
-
Follow the instructions in the Collection documentation and let us know if you have any questions.
Key Takeaways:
-
Enhanced API collaboration capabilities for API producers and consumers.
-
Strategies to improve team productivity and reduce onboarding time.
-
Techniques for making APIs more discoverable and easier to work with.
Resources:
-
Postman Learning Center helps you learn more about specific Postman features.
-
The Postman Community Forum is a great place to connect with Postman's community and seek help when you need it.
-
We have tons of video resources on YouTube.
-
Learn about our latest feature releases in this blog post by Postmanâs CEO, Abhinav Asthana.
-
Weâre always listening! If you want to request a new feature or report a bug, please submit an issue here. Our Community Forum remains the best place to share general feedback with us.
-
-
13:30
RiskStorming
with Beren Van Daele
The RiskStorming session format is a wonderful way of generating a visible Test Strategy as a team that focuses your strategy to answer the following questions:
-
What is important to our product?
-
What risks could impact these important aspects?
-
What can we do, as a team, to make sure they don't happen?
Instructions:
-
Understand the product under test well enough
-
Take the 25 Quality Aspect TestsSphere cards and pick the 6 most important ones within 10 minutes
-
Take sticky notes and add 2-3 risks per chosen TestSphere card within 10 minutes
-
Take the other TestSphere cards and match risk-mitigating activities for each Risk within 10 minutes
Wrap-up:
By the end of the activity, youâll have a much better understanding of what quality actually means for your product or epic as well as what could potentially harm it. Not just you, but everyone on the team understands how they can prepare, defend and explore against risks which could severely impact the project. Quality becomes clear to everyone and everyoneâs responsibility.
Resources:
-
-
14:00
API Collaboration and Testing with Postman
with Danny Dainton
In this session, you will learn the different ways you could collaborate on an API in Postman with your team or users. Weâll walk you through a suite of features that improve team productivity, reduce onboarding time, and make your API more discoverable and easy to collaborate on. Weâll also cover API testing in Postman, demonstrating how you can build a robust test suite for your APIs. You'll author some post-response scripts, automate tests, and dynamically control workflows using the Collection Runner.
Instructions:
-
Login to your Postman account at go.postman.co
-
Go to the Test Bash workspace at go.pstmn.io/testbash-2024
-
Fork the âAPI Collaboration and Testingâ collection to your Workspace
-
Follow the instructions in the Collection documentation and let us know if you have any questions.
Key Takeaways:
-
Enhanced API collaboration capabilities for API producers and consumers.
-
Strategies to improve team productivity and reduce onboarding time.
-
Techniques for making APIs more discoverable and easier to work with.
Resources:
-
Postman Learning Center helps you learn more about specific Postman features.
-
The Postman Community Forum is a great place to connect with Postman's community and seek help when you need it.
-
We have tons of video resources on YouTube.
-
Learn about our latest feature releases in this blog post by Postmanâs CEO, Abhinav Asthana.
-
Weâre always listening! If you want to request a new feature or report a bug, please submit an issue here. Our Community Forum remains the best place to share general feedback with us.
-
-
16:05
API Collaboration and Testing with Postman
with Danny Dainton
In this session, you will learn the different ways you could collaborate on an API in Postman with your team or users. Weâll walk you through a suite of features that improve team productivity, reduce onboarding time, and make your API more discoverable and easy to collaborate on. Weâll also cover API testing in Postman, demonstrating how you can build a robust test suite for your APIs. You'll author some post-response scripts, automate tests, and dynamically control workflows using the Collection Runner.
Instructions:
-
Login to your Postman account at go.postman.co
-
Go to the Test Bash workspace at go.pstmn.io/testbash-2024
-
Fork the âAPI Collaboration and Testingâ collection to your Workspace
-
Follow the instructions in the Collection documentation and let us know if you have any questions.
Key Takeaways:
-
Enhanced API collaboration capabilities for API producers and consumers.
-
Strategies to improve team productivity and reduce onboarding time.
-
Techniques for making APIs more discoverable and easier to work with.
Resources:
-
Postman Learning Center helps you learn more about specific Postman features.
-
The Postman Community Forum is a great place to connect with Postman's community and seek help when you need it.
-
We have tons of video resources on YouTube.
-
Learn about our latest feature releases in this blog post by Postmanâs CEO, Abhinav Asthana.
-
Weâre always listening! If you want to request a new feature or report a bug, please submit an issue here. Our Community Forum remains the best place to share general feedback with us.
-
-
16:05
RiskStorming
with Beren Van Daele
The RiskStorming session format is a wonderful way of generating a visible Test Strategy as a team that focuses your strategy to answer the following questions:
-
What is important to our product?
-
What risks could impact these important aspects?
-
What can we do, as a team, to make sure they don't happen?
Instructions:
-
Understand the product under test well enough
-
Take the 25 Quality Aspect TestsSphere cards and pick the 6 most important ones within 10 minutes
-
Take sticky notes and add 2-3 risks per chosen TestSphere card within 10 minutes
-
Take the other TestSphere cards and match risk-mitigating activities for each Risk within 10 minutes
Wrap-up:
By the end of the activity, youâll have a much better understanding of what quality actually means for your product or epic as well as what could potentially harm it. Not just you, but everyone on the team understands how they can prepare, defend and explore against risks which could severely impact the project. Quality becomes clear to everyone and everyoneâs responsibility.
Resources:
-
-
09:15
Getting To Know, the Unknown, Unknowns of Quality Coaching
with Emna Ayadi
Although I was extremely motivated while moving from a test lead to a quality coach role, I thought that my experience in testing and my skills as a facilitator and mentor were sufficient to embark on this role, but it was much more challenging than I had expected.
As a quality coach for many teams, I find myself dealing with agile-related problems, testing issues, and DevOps obstacles that were not visible to me when I was a member of one team. I failed for example to clarify and challenge management expectations while improving test coverage for the products under test. I learned that understanding the desired level of detail and making it clear to the team Iâm supporting is essential before starting coaching. Besides that, I found it hard to master my new role and was it hard to provide visible results in a short period of time.
However, I learnt how to make team members aware of why we need these quality improvement actions and I helped them prioritize their backlog...
In a few months, I learned a lot in my new role. In this session, I want to share what I am doing as a quality coach, the struggles that I faced and what I will do differently next time.I will inform you about my strategies to avoid those difficulties before they occur. Besides that, I still see new challenges in my role coming up such as dealing with remote teams that have different backgrounds, coaching in the age of AI, etc.. all kinds of topics that could be a trigger for more discussion while leaving the room.
What youâll learn
- Understand what the quality coach is doing within teams and organisation
- Uncover the unknowns unknowns of quality coaching and learn how to avoid possible difficulties in this role
- Be prepared for future challenges in quality coaching
-
10:05
Fostering Professional Excellence: Self-awareness as a Moral Responsibility
with Barry Ehigiator
We all, often, think of ourselves as great people, unique, and maybe even the best in some sense (for better or worse). However, all these are usually until we need to interact with others.
How often do we get into a new team, or a new organisation and the happiness we usually feel at the start of the adventure start to dwindle over time?
There may be several reasons why this may happen. However, in a professional context, it can be due to the work culture, the leadership structure, the nature of work, and even our fellow colleagues. Yes, our colleagues, because it is always about them and not us, right?
The behavioural and cognitive attributes of professionals are usually not easy to control when recruiting for a position, or even when you are joining a new team. Yet, in today's dynamic professional landscape, achieving excellence and finding joy in our work is not only dependent on technical skills or qualifications; rather it hinges ever more on a deeper understanding of oneself.
In many years of working in various software development teams, I have experienced firsthand, the paramount importance of self-awareness in cultivating a productive and harmonious work environment. Even, some research has indicated that individuals with high levels of self-awareness are better equipped to navigate complex interpersonal dynamics, make informed decisions, and adapt to challenging circumstances. In the context of the workplace, self-awareness enables employees to recognise their unique talents and limitations, leading to improved collaboration, communication, and conflict resolution. Moreover, self-aware professionals are more adept at managing stress, fostering resilience, and maintaining a healthy work-life balance.
In light of the impact our actions and ways of being can have on us, and others, being self-aware becomes an important skill to cultivate as individuals. It is for this reason I say that "self-awareness is a moral responsibility at the workplace". Because, the more self-aware you are, the better colleague you will be to your team members.
The difficult question usually is: how do we cultivate self-awareness?
And, what can you do as someone in a position of leadership to foster a culture that values introspection and self-reflection?
I believe these are difficult, but important questions that need to be further discussed in the software development space. So, in this talk, I wish to share stories from my work-life experience to buttress the transformative power of self-awareness in shaping individuals into skilled professionals, great collaborators, effective leaders, and contributors to a healthy work environment.What youâll learn
- Learn how to cultivate self-awareness
- Respond to individuals that displays a lower level of self-awareness
- Learn how to promote a culture that favours introspection and reflection
-
11:20
Creating Dashboards To Drive Team Conversations
with Melissa Fisher
We often talk about metrics and wonderful dashboards to showcase to stakeholders whatâs going on and provide updates on progress.
Let me provide an alternative view on this. How about creating dashboards that help you as a team have conversations?
I fundamentally believe that metrics are powerful when they can help guide us. Metrics are about data.
- What is the data telling us?
- Do we need to look at different data here?
- Why are we even looking at this at all?
Through this power lens, we can start to think about the questions. What do we need as a team? What do our stakeholders want?
I have been experimenting with these thoughts. One example is tracking quality criteria (functional and non-functional bugs) to see where we are finding bugs, then digesting it to understand what we can learn from this.
I want to share these examples, so you can go away and think about what your own examples could be. Helping you and your team have conversations.
What youâll learn
- Understand how metrics and dashboards can help your team have conversations
- Apply good question asking to uncover what the data is telling you
- Review the data to see if you need a different data set
- Evaluate the current use of metrics and dashboards to see if it adds value
-
12:10
Productising Yourself: Building Portfolio as a Tester
with Rahul Parwal
We all have our own unique testing journeys. Some parts are similar, while others are totally different. Unfortunately, not everyoneâs journey is visible to the outside world.
In today's world, where everything is increasingly visible on digital platforms, a fundamental question emerges: "Is our testing journey visible too?" Surprisingly, only a negligible fraction of testers have an online portfolio today to showcase their skills, capabilities, and contributions. For both novice testers and seasoned professionals, this represents a significant opportunity to define their presence in the professional arena.
During this conference session, I aim to share my personal insights, experiences, and anecdotes acquired through the process of constructing my testing portfolio. I will talk about the art of crafting a compelling testing portfolio that not only distinguishes you but also substantiates your proficiency and trustworthiness.
What youâll learn
- Explore various dimensions and possibilities of testing portfolios
- Leveraging a Portfolio as use that as a stepping stone for success in your career
- Portfolios might be the new work resumes in the times to come. How to take the leap advantage from today
-
14:00
Debugging the Mind: Teaching Developers to Think Like Testers
with Kat Obring
For years now, we've been hearing a lot about the tester's mindset. While it remains as important as ever, I believe it has evolved. It has shifted from being the role of one person in the team who explores the application to find edge cases, thinking like a slightly deranged user, to a role that involves teaching this mindset to others. This includes developers, to help them write better unit and integration tests; product owners, to consider the unintended consequences of insufficiently bold ideas; and delivery leads, who may be hesitant to allocate time for proper test setup in the CI/CD pipeline.
I'll discuss how the tester role has evolved based on my experience and what I believe a modern tester needs to keep in mind to inspire their team to prioritise quality in all development activities. These days, critical thinking alone is no longer sufficient. Helping the team to move quickly while embedding quality into our processes has become more crucial than ever.What youâll learn
- Gain a comprehensive understanding of how the role of testers has evolved to focus on business improvement
- Explore the importance of viewing quality through the lens of the customer, acknowledging that customer feedback is the ultimate measure of product quality
- Get equipped with strategies to nurture and lead a mature quality culture within their teams
-
14:50
Embracing Empathy: A Framework for Modern Test Leadership
with Eirini Kefala
In the complex landscape of software testing, a people-first test manager stands out as a catalyst for positive team dynamics. More than overseeing test procedures, this role is about recognizing and valuing the people within the testing team. An inclusive Test Manager strives to create an environment where team members feel seen, heard, and appreciated. By fostering an inclusive culture that values each team member's unique strengths, a supportive Test Manager not only ensures the effectiveness of testing processes but also contributes to the overall well-being and job satisfaction of the team.
In this presentation, we're going to explore how Test Leaders are not just ensuring the functionality of software but are shaping the culture and success of the entire organization.
What youâll learn
- What are the qualities of a motivational Test Lead / Test Manager
- How a leader balances delivery and culture in the team and the organisation
- Tools for mindful Leadership
-
16:00
How Do You Think Outside of the Black Box? A Celebration of the Creativity of Testing and Testers
with Lina Deatherage
The newest development in technology always affects testing: first, it was the drive to âreplaceâ testing with automation, and now with Artificial Intelligence. Why is the human contribution to testing consistently doubted and misunderstood? As the Ministry of Testing describes, testing is unpredictable and creative in nature. This makes it a uniquely human task, which can leverage but not be replaced by technology.
This talk will celebrate the creative problem-solving that is software testing. I will reflect on my academic and professional experiences in fine art and software testing, highlighting how you can apply the guidance given to creatives to your software testing problems. Weâll discuss how to think out of the box through the âCandle Problemâ, a classic creative problem-solving experiment designed to highlight the cognitive bias of functional fixedness.
I will first highlight the principles shared by testing and traditionally artistic pursuits. This includes verifying that a piece (of software or art) offers its intended value to different stakeholders, and iteratively finding and solving its flaws. I will also discuss how testing and art both explore a product or idea in all its nooks and crannies and how this exploration samples from an infinite range of parameters and values. Art is never finished, only paused; like in testing, you must decide when a piece has achieved its goals.
I will then offer 4 problem-solving practices taught to art students that Iâve found invaluable in my day-to-day work in QA:
- Turn it upside down. Does the problem appear differently from another perspective? What can you see that you couldnât before?
- Look at the bigger picture. Work on a problem, then step back. If our goal is a positive user experience, weâll need to take a step back from tests and sprints to look at the applicationâs broader quality goals.
- Practice telling a story. At art school, we would do workshops where we had 5 minutes to create an impromptu story about an item. The stories did not reflect our technical skills, but rather our ability to visualize and use imagination. My favourite part was seeing the different stories that each person would tell from the same item. This is why itâs important for testers to collaborate within and outside their teams when testing a final product.
- Structure up. If the foundation of your drawing isnât structurally sound, the end result will be poor, no matter how many details you add. The same goes for software applications; it doesnât matter what the latest and greatest feature is if the entire application doesnât work well.
What youâll learn
- An understanding and pride in the creative process that is software testing, equipping you to communicate the unique value of software testing to non-testing stakeholders
- Four techniques taught to artists that you can apply to your software testing practices, with guidance on how to put them into practice
- How both testing and art call us to understand problems from a range of perspectives and angles, emphasing collaboration, story-telling and âbig pictureâ thinking alongside our technical skills
-
16:50
How Applying Critical Thinking Saved My Mental Health
with Antonella Scaravilli
Testing can be a powerful tool to discover information to empower stakeholders to make better, more informed decisions. But what if we need information about our mental health and weâre the stakeholders of our bodies and minds?
My mental health has been stable for so long that I started wondering if I still needed my medication and if it was safe to quit it. I thought that, by experimenting, Iâd find answers to my questions. Can a healthy lifestyle and the tools learned through therapy be enough for me to live a happy life?Designing an experiment
Â
My approach was semi-scientific. I had a lot of questions (my hypothesis) and the purpose of my experiment was to gather information. I made assumptions regarding the outcome and I made a plan: Iâll start training to do the âCamino de Santiagoâ (a pilgrimage) and by working out, my body will get the happy hormones it needs to survive. Little did I know things werenât going to go according to plan.
The tools that helped me
Some testers keep testing notes to not rely on our memory and for more effective debriefing, so I used my bullet journal to keep track of my mood, feelings, and my workout routine. I donât know if Iâd be here today if it wasnât for my mood tracker, which became a solid piece of evidence.
What youâll learn
- Learn how to test different real-life situations that might be affecting your mental health
- Build a support network so they can help you in case you cannot be there for yourself
- How to talk about mental health with your teammates to make life easier for you and them
- Learn how to use a journal to learn more about yourself and your feelings
- Use your role at work to support a colleague in need
-
17:35
99-Second Talks
It's not a TestBash without 99 Second Talks!
The 99 Second Talks is the attendee's stage, an opportunity for you to come on stage and talk for, that's right, 99 seconds.
You can talk about anything, a testing topic you want to share, a personal experience, or an idea sparked by all the amazing talks, workshops, activities and conversations you've had for the past two days... the stage is yours, for 99 seconds!
This is also a great opportunity for you to kick-start your public speaking experience and/or give it a boost!
Our host will introduce you on stage and start the clock. As soon as the time's up, a noise will be heard and that's it: time's up!
What youâll learn
- Contribute with your knowledge
- Share your testing stories
- Practise Public Speaking
- Learn directly from your peers
-
09:15
Navigating the Tool Acquisition Adventure
with Jenna Charlton
Testers often find it challenging to determine if a tool is reputable and communicate with vendors and open-source creators in a way all parties can understand. Even more challenging still is identifying the difference between reality and magical thinking when it comes to what the tool can provide and what can be delivered. Often testers will follow a defined process and complete a proof of concept to help guide their decision making.
But deciding which tools to complete a proof of concept with is just the first step. During the proof of concept documenting your thoughts and leveraging an evaluation matrix is crucial to making that final critical decision of what tool to select. And after selection, the hardest task is left to come, developing your internal champions, and influencing the use of the tool throughout your organization.
Join me as we develop artefacts to take back to your team to put to use on your next tool acquisition. You'll gain valuable insights, best practices, and create your own customized acquisition, evaluation, and integration documentation so your next tool decision will be your best tool decision!
What youâll learn
- Identify use cases and acceptable trade-offs for your team
- Understand evaluation and analysis for decision making
- Develop artefacts and strategies for decision making
-
11:20
Letâs Make Cross-Functional Requirements Inclusive!
with Parveen Khan
Cross-functional requirements (CFRs), most commonly referred to as Non-functional Requirements (NFRs), form an integral part of software quality. And testing for them and making them as part of the team's process is an absolute necessity for any team that promises to deliver high-quality software to their users. Often, the emphasis that is placed on the functional requirements is not equally placed on the cross-functional requirements in software delivery teams and by the business stakeholders. There could be multiple reasons for teams to make this decision but one of the reasons I have seen working on different teams is the lack of awareness on how to approach CFRs testing as they come across as really vague, like say testability or maintainability, for example.
In this workshop, I would like to elaborate on why it is essential to consider testing for cross-functional requirements and introduce an approach to continuous testing of all cross-functional requirements and make it a part of the process. I will cover how to uncover different CFRs, including those really vague ones, and therefore aid in continuous testing and building quality into the software.
This workshop provides the tools and techniques to introduce and implement CFRs within your teams or organizations. I will share how to create your own CFR template and I will share the knowledge you need to facilitate your own sessions with your teams
What youâll learn
- Discover a range of CFRâs and categorize those into groups
- Planning and expanding the CFRâs to think of different ways to question the implementation
- Create your own template
-
14:00
Your Name Here: Starting Your Public Speaking Journey
with Vernon Richards and Karen Tests Stuff
Giving a presentation at a conference is an exciting journey that begins with an idea. This idea shapes an abstract containing teachable messages. Final drafts get sent to conferences near and far - and each stage includes multiple cycles of gathering feedback from others. If youâve been inspired with an idea for a conference talk, but youâre not sure where to begin - this workshop is for you.
In this introductory session, attendees will simulate the process of creating and submitting an abstract paper to appropriate organizations. In groups, the attendees will practice giving and utilizing feedback to improve those papers. The presenters will share insight from two perspectives. One will present as a new speaker navigating the challenges of getting started in public speaking. The other presenter will share more advanced strategies to take your creation to the next level, and tips for avoiding common pitfalls from their wealth of personal experience.
Activities will include:
- Brainstorm topics to write about, then form one idea into a short "elevator pitch" that solves a problem for someone.
- The presenters will share steps to turn that elevator pitch into a written abstract paper.
- The attendees will work in groups to review each others' abstract papers and provide that feedback to each other in a constructive way.
- Workshop topics will include structures for giving and using feedback gathered.
Â
What youâll learn
- Focus a concept into actionable teaching material, within the scope of an abstract paper
- Work together with your peers to gather feedback for improvement along the way
- Discuss how to submit your content to the appropriate organisations
-
16:00
Start Creating Tests Without Waiting When You Understand Everything
with Natalia Petrovskaia
Test engineers often encounter imperfect requirements or their absence, which poses challenges in their testing and makes it difficult to formulate appropriate test cases. In this interactive workshop, we will delve into the realm of various complex requirements and explore how to deal with them using an object-oriented approach.
Through these exercises, participants will sharpen their skills in the creation tests lacking âidealâ requirements, also, we will check why modern AI tools are not replacing humans for really complex work. Additionally, we will discuss how this approach (and its artefacts) can help with test planning and estimation, as well as reporting. We will discover how mind mapping can streamline requirements analysis and test design process, enabling participants to overcome challenges posed by complex requirements and enhance overall test effectiveness.
By the end of the workshop, participants will gain practical experience in creating tests in situations of the absence of requirements, or lack of context and understanding of the system.
What youâll learn
- Discover the power of object-oriented mind maps to efficiently handle complex testing scenarios
- Gain hands-on exercises and participate in interactive activities suitable for both beginners and experienced test engineers
- Learn through real-world examples, gaining skills in test design and requirements analysis that are applicable in various testing challenges
- Master the art of dealing with imperfect requirements
- Improve your abilities in test design and planning equipped to implement these techniques in diverse professional settings
-
10:50
API Collaboration and Testing with Postman
with Danny Dainton
In this session, you will learn the different ways you could collaborate on an API in Postman with your team or users. Weâll walk you through a suite of features that improve team productivity, reduce onboarding time, and make your API more discoverable and easy to collaborate on. Weâll also cover API testing in Postman, demonstrating how you can build a robust test suite for your APIs. You'll author some post-response scripts, automate tests, and dynamically control workflows using the Collection Runner.
Instructions:
-
Login to your Postman account at go.postman.co
-
Go to the Test Bash workspace at go.pstmn.io/testbash-2024
-
Fork the âAPI Collaboration and Testingâ collection to your Workspace
-
Follow the instructions in the Collection documentation and let us know if you have any questions.
Key Takeaways:
-
Enhanced API collaboration capabilities for API producers and consumers.
-
Strategies to improve team productivity and reduce onboarding time.
-
Techniques for making APIs more discoverable and easier to work with.
Resources:
-
Postman Learning Center helps you learn more about specific Postman features.
-
The Postman Community Forum is a great place to connect with Postman's community and seek help when you need it.
-
We have tons of video resources on YouTube.
-
Learn about our latest feature releases in this blog post by Postmanâs CEO, Abhinav Asthana.
-
Weâre always listening! If you want to request a new feature or report a bug, please submit an issue here. Our Community Forum remains the best place to share general feedback with us.
-
-
13:00
API Collaboration and Testing with Postman
with Danny Dainton
In this session, you will learn the different ways you could collaborate on an API in Postman with your team or users. Weâll walk you through a suite of features that improve team productivity, reduce onboarding time, and make your API more discoverable and easy to collaborate on. Weâll also cover API testing in Postman, demonstrating how you can build a robust test suite for your APIs. You'll author some post-response scripts, automate tests, and dynamically control workflows using the Collection Runner.
Instructions:
-
Login to your Postman account at go.postman.co
-
Go to the Test Bash workspace at go.pstmn.io/testbash-2024
-
Fork the âAPI Collaboration and Testingâ collection to your Workspace
-
Follow the instructions in the Collection documentation and let us know if you have any questions.
Key Takeaways:
-
Enhanced API collaboration capabilities for API producers and consumers.
-
Strategies to improve team productivity and reduce onboarding time.
-
Techniques for making APIs more discoverable and easier to work with.
Resources:
-
Postman Learning Center helps you learn more about specific Postman features.
-
The Postman Community Forum is a great place to connect with Postman's community and seek help when you need it.
-
We have tons of video resources on YouTube.
-
Learn about our latest feature releases in this blog post by Postmanâs CEO, Abhinav Asthana.
-
Weâre always listening! If you want to request a new feature or report a bug, please submit an issue here. Our Community Forum remains the best place to share general feedback with us.
-
-
15:30
API Collaboration and Testing with Postman
with Danny Dainton
In this session, you will learn the different ways you could collaborate on an API in Postman with your team or users. Weâll walk you through a suite of features that improve team productivity, reduce onboarding time, and make your API more discoverable and easy to collaborate on. Weâll also cover API testing in Postman, demonstrating how you can build a robust test suite for your APIs. You'll author some post-response scripts, automate tests, and dynamically control workflows using the Collection Runner.
Instructions:
-
Login to your Postman account at go.postman.co
-
Go to the Test Bash workspace at go.pstmn.io/testbash-2024
-
Fork the âAPI Collaboration and Testingâ collection to your Workspace
-
Follow the instructions in the Collection documentation and let us know if you have any questions.
Key Takeaways:
-
Enhanced API collaboration capabilities for API producers and consumers.
-
Strategies to improve team productivity and reduce onboarding time.
-
Techniques for making APIs more discoverable and easier to work with.
Resources:
-
Postman Learning Center helps you learn more about specific Postman features.
-
The Postman Community Forum is a great place to connect with Postman's community and seek help when you need it.
-
We have tons of video resources on YouTube.
-
Learn about our latest feature releases in this blog post by Postmanâs CEO, Abhinav Asthana.
-
Weâre always listening! If you want to request a new feature or report a bug, please submit an issue here. Our Community Forum remains the best place to share general feedback with us.
-
Xray
With Xray, managers can enhance Agile boards by tracking requirement status and test execution progress in real-time. It can also generate advanced reporting that can be exported to docx, xlsx or pdf using Xporter.
Sauce Labs
Keysight Technologies
Postman
Al Goodall
Quality ManagerGianni Au
Senior SDETScout Burghardt
QA LeadGabbi Trotter
Software Testing RecruiterChristopher Chant
Business Growth & Agility GuideKirsten Reher
Senior Product EngineerHeather C
Quality Assurance LeadLouise Gibbs
Senior Automation TesterNurseda Balcioglu
Quality EngineerNicola Lindgren
Quality Engineering ManagerTina Gohil
Senior QA ConsultantTeresa Reynolds
Staff Test EngineerBen Dowen
Senior Quality EngineerEmily O'Connor
Principal Test Engineer, ConsultancyNikoleta Koumpouzi
QA EngineerAntonella Scaravilli
TesterTitus Fortner
Sr Developer AdvocateBarry Ehigiator
Software Test EngineerMelissa Fisher
Quality Engineering ManagerEirini Kefala
Test ManagerChristine Pinto
Automation EngineerLina Deatherage
Technical AnalystBill Matthews
Bill Matthews has been a freelance test consultant for over 20 years working mainly on complex integration and migration as a Test Architect and as a Technical Lead. He champions the use of modern and effective approaches to development and testing.
He is a regular contributor to the testing community at both local and international levels through conference speaking, coaching/mentoring and delivering workshops and training focusing on automation, performance, reliability, security testing and more recently artificial intelligence.
Ashutosh Mishra
TesterKat Obring
Founder, DirectorNatalia Petrovskaia
Engineering ManagerBastian Knerr
Teamlead TestingJarsto van Santen
Specialist Test EngineerJarsto has been playing with code for a quarter of a century now, having started at 11 years old. Despite studying law he eventually saw sense and turned his hobby into a career. He now works as a Specialist Test Engineer at DUO (part of the Netherlands' ministry of Education).
At work he is active in the Automation Serviceteam - developing internal tools, helping and teaching colleagues, and looking critically at what should and should not be automated - as well as taking on testing-related matters that affect the entire organization, coordinating with specialists from other DevOps disciplines, and advising management as needed.
In his free time Jarsto is mildly obsessed with world history, science fiction and fantasy, and all sorts of technology he doesn't get to play around with at work (yet). Somehow he never has the time to watch as quite many movies/series, read quite as many books, or play quite as many games as he thinks he should.
Dr. Niels Heller
Machine Learning and Data EngineerEmna Ayadi
Quality CoachRahul Parwal
SpecialistParveen Khan
Senior QA Consultant @ ThoughtworksMateusz Adamczak
Software EngineerVernon Richards
Quality Coach / Senior Quality EngineerIâm a Quality Coach & Tester that helps orgs, teams & individuals understand the relationship between quality & testing to help them build better products & deliver more effective services.
Iâve been testing since 2002 starting with video games on PS2, Xbox & PC. It may not sound like a real job but itâs the truth!
By day Iâm a Senior Quality Engineer at a health tech night I run Abode of Quality. Here is one of my core values:
âThe problem is not the problem. The problem is your attitude (& how youâre thinking) about the problem.â - Captain Jack Sparrow (the part in brackets was my addition though!)
I believe that quality & testing problems are actually people problems in disguise! Often the cause of these problems is misaligned goals, different perspectives, low empathy for colleagues, and the list goes on.
Using my coaching skills in the quality & testing space, I believe the most effective way to serve the business, is to help teams & individuals gain new perspectives about themselves & their teammates.
Natalia Petrovskaia
Engineering ManagerRyan Lobo
Network & Security SpecialistJenna Charlton
Developer Advocate at QaseScott Hackeson
Test Architect & Principal EngineerKaty Bradshaw
Test & Assurance Capability LeadJack Harris
Customer EngineerKaren Tests Stuff
Content CreatorBill Matthews
Bill Matthews has been a freelance test consultant for over 20 years working mainly on complex integration and migration as a Test Architect and as a Technical Lead. He champions the use of modern and effective approaches to development and testing.
He is a regular contributor to the testing community at both local and international levels through conference speaking, coaching/mentoring and delivering workshops and training focusing on automation, performance, reliability, security testing and more recently artificial intelligence.
Michal Pilarski
Data Architect, ETL Tester, GIS EngineerCan I attend for one day only?
Yes! We've recently introduced daily tickets for those who can't attend both days. Just head over to the ticketing page and select the day you prefer.
I have an Unlimited Membership, how can I get my ticket?
Get in touch with one of our team members using the chatbot or emailing testbash@ministryoftesting.com and we will send you a ticket link.
Are there any discounts?
Yes! There are several ways you can get discounts:
- All Professional Members enjoy an incredible 50% DISCOUNT on tickets. To access your discount, simply log in with your Professional account, click the green 'Buy Ticket' button, and the discounted prices will be displayed. Please note that tickets purchased with the Professional 50% discount are exclusively for use by the Professional Members and are non-transferable.
- Group discounts available for non-Professional Teams:
- 5 to 10 tickets: 5% Discount
- 10 to 15 tickets: 10% Discount
- 15+ tickets: 20% Discount
Will my Call for Contribution be reviewed by the community?
The MoT Team and a few selected TestBash Ambassadors have reviewed all submissions.
Where in Brighton will TestBash take place?
We will be returning to the stunning and newly renovated Brighton Dome.
Where can I stay overnight?
We have secured fixed rates at a few hotels in Brighton you can call to book your overnight accommodation:
Queens Hotel Brighton - £145 (11th & 12th) & £175 (13th)  B&B
- Distance: 0.4 miles / 9 minutes walking
- Available until the 13th of August 2024
- 10 rooms available
-
Instructions: Full payment, non-refundable within 7 days of booking. To book call reservations: +44 1273 321 222 and quote: Ministry of Testing
Bookings can only be made Tuesday to Friday 9.00 am to 5.00 pm (UK Time)
Staybridge Suites - £180 B&B
- Distance: 0.5 miles / 11 minutes walking
- Available until the 28th of August 2024
- 30 rooms available
- Instructions: Full payment at the time of booking. To book call reservations: +44 1273 468 805 option 1 and quote: MOT
Old Ship Hotel - £180 B&B
- Distance: 0.4 miles / 9 minutes walking
- Available until the 11th of June 2024
- 30 rooms available
- Instructions: Full payment at the time of booking, non-refundable. To book call reservations: +44 1273 329 001 and quote: MINI110924
Holiday Inn Brighton Seafront - £189.00 B&B
- Distance: 0.9 miles / 20 minutes walking
- Available until the 31st of July 2024
- 20 rooms available
- Instructions: Visit their website, enter dates, search and finally the Group Code BE1 (at the top of the search options). Alternatively, contact the central reservations team at +44 333 320 9324 (option 1) and quote the group code BE1.
Mercure Brighton Seafront - £189.00 B&B
- Distance: 1 mile / 23 minutes walking
- Available until the 13th of August 2024
- 10 rooms available
- Instructions: Full payment at the time of booking. To book call reservations: +44 1273 351 012Â (option 1) and quote: Ministry of Testing - TestBash
Leonardo Brighton Station - £230 B&B
- Distance: 0.7 miles / 15 minutes walking
- Available until the 13th of August 2024
- 30 rooms available
- Instructions: Visit their website, input âLeonardo Brighton Stationâ & the date range, and add the promotion code: LHMINI110924
These were the only local hotels that agreed to give us a fixed rate, but there are also other hotels in the area you can consider having a look at, such as:
- Travelodge Brighton Seafront
- ibis Brighton City Centre - Station
- Premier City Centre (North Street)
- And more...
Is the venue fully accessible?
The Brighton Dome's venues are fully accessible: the building is equipped with lifts to every floor and wide doorways, allowing for easy entry and exit. Additionally, the conference rooms are designed to provide ample space for manoeuvring, with seating areas that are adjustable and easily customizable.
We take pride in making sure that all of our attendees can enjoy the conference in comfort, regardless of any accessibility requirements.
How are the Contributors chosen?
The MoT Team together with selected TestBash Ambassadors have reviewed all submissions.