The Startups Roundtable series hosted by C2C and Google Cloud Startups continued on Tuesday, Jan. 25 with another session on AI and ML, this one devoted solely to technical questions. These roundtable discussions are designed for startup founders seeking technical and business support as they realize their visions for their products on the Google Cloud Platform. This time, 10 Googlers, including 6 Customer Engineers, led private small-group discussions with more than forty guests from the C2C community. Watch the introduction to the event below:
As in the previous Startups Roundtable, after the introduction, the hosts assigned the attendees to breakout rooms, where they could ask their questions freely with the full attention of the Google staff on the call. The breakout rooms in these sessions are not recorded, but C2C Community Manager Alfons Muñoz shared some highlights afterward.
According to Muñoz, after the time allotted for the discussions in the breakout rooms ended, the conversations kept going. Guests had more questions to ask and more answers to hear from the Google team. The hosts invited all attendees to bring their questions to the C2C platform for the Googlers to answer after the event. Two guests took them up on the offer, and Reddy wrote them both back with detailed advice.
Markus Koy asked:
I am using the word-level confidence feature of the Speech-to-Text API in my proof-of-concept app, https://thefluent.me, which helps users improve their pronunciation skills.
Is there an ETA for when this feature will be rolled out for production applications, and if so, for which languages?
and Reddy wrote back:
It was great chatting with you!
The product team is aiming for the General Availability (GA) stage for Word-Level Confidence by the end of Q2 2022. Regarding languages, it currently supports English, French, and Portuguese; support for additional languages will roll out in phases.
Please stay tuned and check out announcements here: https://cloud.google.com/speech-to-text/docs/languages.
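Once word-level confidence is enabled (via the `enable_word_confidence` flag on the Speech-to-Text `RecognitionConfig`), each recognized word comes back with its own confidence score. A minimal sketch of how a pronunciation app like this might use those scores, assuming the (word, confidence) pairs have already been extracted from the API response; the function name and the 0.75 threshold are illustrative assumptions, not part of the API:

```python
# Sketch: flag low-confidence words from a Speech-to-Text result for
# pronunciation feedback. The (word, confidence) pairs mirror the shape of
# the API's WordInfo entries; the 0.75 threshold is an arbitrary example.

def flag_low_confidence(words, threshold=0.75):
    """Return the words whose recognition confidence falls below threshold.

    words: iterable of (word, confidence) tuples, confidence in [0.0, 1.0].
    """
    return [word for word, confidence in words if confidence < threshold]


# Example pairs, as they might be pulled from
# response.results[i].alternatives[0].words (word.word, word.confidence):
sample = [("the", 0.98), ("quick", 0.62), ("brown", 0.91), ("fox", 0.55)]
print(flag_low_confidence(sample))  # → ['quick', 'fox']
```

Words flagged this way could then be surfaced to the user as candidates for pronunciation practice.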
The next day, Erin Karam asked:
We are looking for guidance on training our Dialogflow CX intent. Our model is limited by the 2,000-phrase cap on training phrases for a single intent.
Our use case is recognizing symptoms reported by the user. There are 26 different symptoms we are trying to recognize, and we have tens of thousands of rows of training data for them. The upper limit of 2,000 is hampering our end performance. Please advise.
and Reddy responded:
Thanks for joining today’s session!
The default limit is 2,000 training phrases per intent. This amount should be enough to describe all possible language variations, and having more phrases can actually make agent performance slower. You can try filtering out identical phrases, or phrases with identical structure.
You don't have to define every possible example, because Dialogflow's built-in machine learning expands on your list with other, similar phrases. However, create at least 10 to 20 training phrases so your agent can recognize a variety of end-user expressions.
Some best practices I would suggest:
- Avoid using similar training phrases in different intents.
- Avoid special characters.
- Do not ignore agent validation.
Let me know if that works.
A startup is a journey, and no startup founder will be able to get all the answers they need in one session. That’s why the Startups Roundtable series is ongoing; more business and technical roundtables will be coming soon. For now, if you are a startup founder looking for more opportunities to learn from the Google Startups Team and connect with other startup founders in the C2C community, register for these events for our startups group: