TOEFL Speaking (for the AI Era)
Get the inside track on all things TOEFL® Speaking—from expert breakdowns of the test’s scoring rubrics to cutting-edge research on how AI like SpeechRater™ evaluates your performance.
Whether it's leveraging movie-based learning techniques or diving into the psychology behind language assessment, each episode gives you a front-row seat to the latest strategies, tips, and tools to help you master the Speaking section.
We don’t just stop at exam prep. We explore the bigger picture of how the TOEFL shapes language learning, how automated scoring impacts your results, and what really goes on behind the scenes at ETS. If you want to understand the nuances of TOEFL Speaking and learn how to make your test performance stand out, this podcast is for you.
This podcast is made possible through a blend of innovative AI solutions, including NotebookLM, ElevenLabs, ChatGPT, Suno, and Buzzsprout.
Visit My Speaking Score: https://www.myspeakingscore.com/
Does Your Accent Impact Your TOEFL Speaking Score?
In this episode, we explore the fascinating research on accent familiarity bias and its potential impact on your TOEFL Speaking score. Learn how our brains process unfamiliar accents, how nonverbal cues can improve comprehension, and why audio-only assessments may pose unique challenges for test-takers.
We also dive into the role of AI tools like MySpeakingScore, powered by SpeechRater, in leveling the playing field for students worldwide. Discover how personalized feedback can refine your pronunciation, boost your confidence, and help you excel on test day—without erasing your unique voice.
Join us as we uncover how technology and research are reshaping language assessments to make them more inclusive and equitable for all learners.
My Speaking Score serves thousands of users across the globe by helping them data-power their TOEFL Speaking prep.
(0:02 - 0:18)
It seems like seeing things like facial expressions, lip movements, and body language helped the raters understand the speakers better, even when their accents were unfamiliar. It's almost like those visual cues gave them extra information to help their brains figure out what was being said.
That's so cool.
(0:21 - 0:48)
This is the TOEFL Speaking Prep Podcast for the AI era. All right, so let's dive into some research here about accent familiarity and how it can really affect speaking tests, especially those really important high-stakes tests like the TOEFL and IELTS. I think we're going to uncover some pretty surprising stuff here about how things like your accent and even how the test is structured—like the format of the test—can actually influence what score you get.
(0:48 - 0:58)
And a lot of this has to do with pronunciation.
Yeah, it's super interesting. The research is really showing that it's not like you have a "good" accent or "bad" accent.
(0:58 - 1:27)
It really dives into how our brains are wired, how they process sounds. You know, familiar sounds are easier to process, and that can have a big impact on how your speaking skills are evaluated.
Okay, so let me see if I got this straight. You're saying that even if your English is perfectly clear, someone grading your speaking test might unconsciously give you a lower score just because they aren't used to your accent?
That's exactly what the research is indicating. This is what we call accent familiarity bias.
(1:28 - 1:49)
Think about it. When you hear an accent you've never heard before, it takes more effort to really understand. Now, imagine that in a test environment with all that pressure, where every single point counts.
Yeah, that makes a lot of sense. It's like your brain has to work a lot harder to decode what's being said, and that could totally affect how a rater perceives your pronunciation skills.
(1:50 - 2:19)
And this is especially relevant to tests like the TOEFL, which are used by universities worldwide to evaluate how well non-native speakers can communicate in English.
Absolutely. Especially the TOEFL iBT, which is graded using audio only. The people grading—the raters—don’t have any visual cues to help them understand the speaker. All those subtle pronunciation nuances that might be easier to figure out with visual cues become so much more important when you're only listening to audio.
(2:19 - 2:36)
So someone with an accent that's unfamiliar to the raters could really be at a disadvantage, right?
Exactly. That seems like a huge obstacle for students, especially those trying to get into top schools where TOEFL scores are so important.
It is.
(2:36 - 2:52)
It's definitely a big challenge, and it highlights the global nature of the TOEFL and the pressure students face trying to get the scores they need. Imagine you're a student in a remote village, like in rural China. You're not only mastering grammar and vocabulary but also trying to pronounce things in a way that people from completely different language backgrounds will understand.
(2:52 - 3:15)
Yeah, it's like they're dealing with an extra layer of difficulty on top of an already challenging exam. For many students, getting a high TOEFL score is the key to getting into a good university. It can change their whole lives.
Exactly. That's why this research is so important. It shows us that potential bias could be holding students back.
(3:16 - 3:34)
But the good news is, there are tools and technologies available to tackle this challenge and make things fairer for students, no matter their accent.
That's great. What kind of tools are we talking about?
So, one example is MySpeakingScore. It's a TOEFL prep tool powered by AI.
(3:34 - 3:50)
It uses something called SpeechRater and has already been used by over 100,000 students. It's become a really valuable resource.
Oh yeah, I've heard of SpeechRater. It's really good at recognizing speech.
(3:50 - 4:00)
So MySpeakingScore basically uses AI to help students improve their pronunciation specifically for the TOEFL.
That's right.
(4:00 - 4:16)
The cool thing is it gives personalized feedback on pronunciation no matter what accent you have. It's not trying to eliminate your accent or make you sound like a native speaker. It's about helping you refine your pronunciation so you're clear and easy to understand, which really matters for TOEFL scoring.
(4:17 - 4:26)
That's a really smart approach. It addresses the real problem by focusing on clarity and making sure people can understand you, instead of forcing everyone to have the same pronunciation.
Exactly.
(4:27 - 4:41)
This is where AI is transforming education. It uses technology to give personalized feedback, making high-quality pronunciation training available to anyone, anywhere. It doesn’t matter where you are or what resources you have.
(4:41 - 4:55)
It's like having a pronunciation coach available to you 24-7.
Wow, that's incredible. It's like AI is breaking down all those barriers and giving students, even in remote areas, access to the same level of support that you might have only been able to find in big cities before.
(4:56 - 5:06)
Or by paying for expensive tutors. It's making education so much more accessible.
It really is. And the best part is the technology behind SpeechRater is getting better all the time.
(5:06 - 5:26)
It's been trained using this massive dataset of all different kinds of speech, and it can accurately assess all sorts of things about your pronunciation—like how fluent you are, your intonation, and even how you're producing individual sounds.
So it's not just picking up on your accent? It's actually looking at the specifics of how you're pronouncing things and giving you feedback on how to improve them.
Exactly.
(5:26 - 5:38)
And that's why it's so effective for students. They're not just getting a score; they're getting specific insights that help them fine-tune their pronunciation and feel more confident speaking.
This is all so fascinating.
(5:38 - 5:58)
I'm really starting to see how AI is playing this crucial role in overcoming accent familiarity bias and helping people succeed on tests like the TOEFL. It's like technology is stepping in to level the playing field for students all over the world.
That's one of the most exciting aspects of this research and the advancements we're seeing in AI.
(5:59 - 6:13)
It's creating opportunities for students who might have faced really difficult challenges because of their accent. And as this technology keeps developing, we can expect even more innovative solutions that help learners reach their full potential.
(6:13 - 6:30)
In the next part of our deep dive, we're going to get into some fascinating details about a study that specifically looked at accent familiarity bias. We'll see how the researchers set up the study, the specific accents they chose to look at, and what the results tell us about the impact of accent familiarity on scoring.
So stay tuned.
(6:30 - 6:45)
Okay, so picking up where we left off, let's get into that study we mentioned, the one that directly tested this accent familiarity bias. It has some really fascinating findings about how our perceptions of accents can totally change how we evaluate someone's speaking skills.
(6:46 - 6:54)
Okay. Yeah, I'm curious. How did they actually test for this bias?
Well, they got a group of over a hundred experienced raters who were mostly native English speakers from Australia.
(6:55 - 7:17)
And then they chose two accents that would probably sound pretty different to these raters: Brazilian-accented English and Papua New Guinean Tok Pisin-accented English.
Oh wow. Interesting choices. I can see how those accents would have all sorts of sounds unfamiliar to Australian English speakers. It's like they were trying to choose accents that would really push the boundaries of accent familiarity.
(7:17 - 7:27)
Exactly. That was a key part of their plan. They wanted to see if how familiar the raters were with these accents—or how unfamiliar—would change how they scored the speakers.
(7:27 - 7:48)
So they had these raters listen to speech samples from speakers with these accents, and then they had to rate how easy it was to understand them.
Okay, so did they just use audio recordings for that, or did they have any visuals too?
That's a great question. They actually used both audio-only and audiovisual formats, which makes the results even more interesting.
(7:49 - 8:06)
Remember how we talked about the TOEFL iBT being graded using only audio? Well, this study lets us directly compare how accent familiarity affects things both when you can only hear the person and when you can see and hear them.
I see what you mean. By having both formats, they could see if those visual cues actually made a difference.
(8:06 - 8:19)
Exactly. And the results were really revealing. When they only used audio, there was a clear connection between how familiar the raters were with an accent and how they scored the speakers’ comprehensibility.
(8:19 - 8:32)
Meaning the more familiar the accent was to them, the higher they scored the speaker?
Yeah, pretty much. So even though everyone was speaking English, the raters thought some people were easier to understand simply because they were used to that accent.
(8:32 - 8:44)
Yeah, that's what the data is telling us. But here's the really interesting part. When the raters could both see and hear the speaker—so in the audiovisual condition—that accent familiarity thing basically went away.
(8:44 - 9:01)
Wow, hold on. So are you saying that just by adding a video, the raters weren't as affected by an unfamiliar accent?
Yep, that's right. It seems like seeing things like facial expressions, lip movements, and body language help the raters understand the speakers better, even if their accents were unfamiliar.
(9:01 - 9:14)
It's almost like those visual cues gave them extra information to help their brains figure out what was being said.
That's so cool. It really shows how much we use nonverbal communication to understand each other, especially when there are different accents involved.
(9:15 - 9:23)
Like our brains are wired to use every single clue we can get to make sense of what we're hearing.
Exactly. And this finding is pretty important.
(9:23 - 9:52)
When we think about speaking tests and things like that, if visuals can actually help reduce this accent familiarity bias, it makes you wonder about how fair tests are when they're only using audio.
That's a really good point. If we really want to know how well someone can speak, shouldn't we be using everything we can to make sure our evaluation is fair and accurate? Especially since these evaluations can have such a big impact on students getting into university or getting a job.
I totally agree.
(9:53 - 10:11)
And while it might be hard to add video to big tests like the TOEFL, this research definitely shows us that we need to keep exploring new ways to do things. It's like we're at this point where technology is advancing, and we're learning more about how the brain understands language. It’s making us rethink all these old ways of doing language assessment.
That's a great way to put it.
(10:11 - 10:39)
Even though we don’t know exactly what the future holds for speaking tests, there are already efforts to address accent familiarity bias and ensure students are judged on their actual speaking skills.
Speaking of which, let’s go back to MySpeakingScore for a minute and how it fits into all of this.
In the first part of our deep dive, we talked about how AI tools like that are using technology to give personalized pronunciation feedback, no matter what accent you have.
(10:39 - 10:56)
So, knowing what we know now about visual cues, how does that change how we think about developing and using tools like that?
That’s a really good question. It leads us to think about how AI and visual cues can work together to create even better language learning experiences that are fair to everyone.
(10:57 - 11:10)
I’m definitely interested to hear more. It sounds like there are some exciting possibilities for how technology can support language learners worldwide.
But I guess we’ll have to save that conversation for the final part of our deep dive.
(11:10 - 11:19)
All right, let’s wrap up our deep dive here. We’ve learned so much about accent familiarity bias and how AI is shaping the future of language learning.
(11:19 - 11:27)
It seems like we’re really on the verge of some pretty big changes.
Yeah, definitely. And, like with any new technology, there’s a ton of potential, but we also have to think about how to use it responsibly.
(11:27 - 11:35)
Okay. So let’s talk about the good stuff first. We’ve already discussed how AI tools like MySpeakingScore give personalized feedback on pronunciation, regardless of accent.
(11:36 - 11:57)
But how do you see that evolving in the future?
Well, imagine AI that doesn’t just assess your pronunciation but creates customized exercises and learning plans tailored to your specific needs and accent. It’s like having your own personal language program.
That’s so cool. It’s like AI is becoming a personal tutor, guiding you through every step.
(11:57 - 12:05)
It would be huge for learners who don’t have access to regular tutoring or struggle with specific aspects of pronunciation.
Exactly.
(12:05 - 12:28)
And with all the advances in VR and AR—virtual reality and augmented reality—we might even see immersive language learning environments. Imagine AI coaching you through simulations of real-world situations, helping you practice speaking in a safe and fun way.
(12:28 - 12:44)
Like practicing ordering food in a café in France or giving a presentation in English to a virtual audience and getting personalized feedback from your AI tutor?
Exactly! It’s like stepping into a different world and practicing your language skills without fear of making mistakes.
(12:45 - 12:59)
Right. And for students preparing for tests like the TOEFL, AI could even simulate the test environment, providing feedback and teaching strategies to help them feel more confident and comfortable on test day.
Yeah, it’s like taking away some of the pressure by letting you practice in a setting that feels real.
(13:00 - 13:12)
Exactly. And as AI gets even better, we can expect more advanced tools that personalize the learning experience, track your progress, and identify areas where you need extra help.
That’s really exciting.
(13:12 - 13:36)
But, as you said earlier, we also need to think about the potential downsides. What are some of the challenges or concerns we need to keep in mind as we use more AI in language learning?
One of the most important things is ethics—ensuring these AI tools are developed and used responsibly. We need to be really careful about biases hidden in the algorithms and the data used to train these systems.
(13:37 - 13:56)
So even though the goal is to make language learning more accessible, there’s a chance the AI itself could accidentally make things less fair.
That’s right. For example, if an AI system is trained mainly on data from one specific accent or dialect, it might penalize speakers with other accents, even if their pronunciation is clear and understandable.
(13:57 - 14:11)
That’s why it’s crucial to have linguists, speech scientists, and people from diverse cultures involved in developing and testing these AI tools.
So we can’t just assume AI will automatically be fair. We need to look at it critically and ensure it’s genuinely helping all learners.
(14:11 - 14:25)
Exactly. And another key point is that human interaction is still vital for learning a language. AI can provide feedback and guidance, but it can’t replace the richness of real human communication.
(14:25 - 14:35)
Yeah, I agree. There’s something special about practicing with another person—having real-time conversations and learning about different cultures—that you just can’t get from a machine.
Right.
(14:35 - 14:52)
So as we explore all these new possibilities with AI, it’s important to strike a balance between technology and maintaining valuable human interactions.
It sounds like the future of language learning will be a collaboration between humans and machines.
Exactly.
(14:53 - 15:03)
Each bringing their own strengths to the table.
Well, this deep dive has been eye-opening. I’ve learned so much about accent familiarity bias and how AI is shaping the future of language learning.
(15:03 - 15:19)
It’s clear that language is a powerful tool for connecting with people and understanding each other. By using technology thoughtfully, we can create a world where everyone communicates confidently and effectively, regardless of their accent or background.