Metrics to Assess MS Teams User Experience

How Microsoft Defines Quality

Microsoft defines quality as a combination of service metrics and user experience.

This aligns with a quote often attributed to Peter Drucker:

"You can't manage what you can't measure."

Therefore, measuring both service health and user sentiment is crucial for delivering exceptional quality within Microsoft Teams.

This reinforces the importance of the metrics discussed throughout this document, as they provide the necessary data to objectively assess service performance and identify areas for improvement that impact user experience.


Service Metrics

Client-Based Metrics

To proactively identify and address potential issues impacting user experience, Microsoft Teams offers a comprehensive set of client-based metrics collected for each call. These metrics provide insights into the technical health of Teams sessions.

Key examples include:

  • Telemetry: Metrics like jitter, packet loss, and round-trip time (RTT) measuring network performance and its impact on audio, video, and screen sharing quality.
  • Reliability: Dropped calls and multiple attempts to join calls can indicate call stability problems.
  • Endpoint: Information about the headset in use and whether UDP or TCP is chosen as the transport protocol, which helps diagnose endpoint-specific issues.
  • Client: Monitoring client updates and VPN split tunneling functionality ensures optimal client performance and connectivity.

User Experience (UX)

While service metrics provide objective data, user experience (UX) in Teams is subjective.
Users may perceive issues even when the network and service function normally.
This subjectivity can make troubleshooting complex, which is why correlating service metrics with user experience is crucial.

Examples:

  • Meeting/Call Access: Monitoring successful joins to pinpoint permission-related problems.
  • Audio Quality: User feedback during calls can indicate issues with network conditions or audio devices.
  • Video Quality: Blurry or choppy video may stem from insufficient bandwidth, congestion, or client hardware limitations.
  • Screen Sharing: Monitoring screen-sharing performance ensures smooth meeting collaboration.

1. Latency — One-Way, Ping, and Round-Trip

Definition:
Latency is the time a packet takes to travel across the network: one-way latency covers the trip from the user's device (point A) to the Microsoft Teams service (point B), while round-trip time (RTT) covers the trip there and back. It depends on distance, transmission speed, and routing delays.

Impact:
High latency creates unnatural pauses and overlapping speech, breaking communication flow—similar to using a satellite phone.
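As a rough illustration (not how the Teams client itself measures latency), the duration of a TCP handshake to a service endpoint approximates one network round trip:

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Approximate RTT as the time a TCP handshake takes to complete.

    This includes OS and socket overhead, so it slightly overstates the
    pure network round trip; it is a rough probe, not a precise measure.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; handshake time elapsed
    return (time.perf_counter() - start) * 1000.0
```

One-way latency is then roughly half the measured RTT, assuming a symmetric path (often not true in practice).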


2. Packet Loss Rate

Definition:
Indicates the percentage of packets that fail to reach their destination within a set timeframe (e.g., 15 seconds).

Example: If 1000 packets are sent and 50 are lost → 5% packet loss.

Impact:
High packet loss causes dropped audio, silence, or robotic voice effects.

Microsoft recommends keeping packet loss below 1% for optimal performance.

  Packet Loss     | Call Quality
  Less than 3%    | Good quality
  3%–7%           | Noticeable degradation
  Over 7%         | Severe impact
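The loss-rate computation and the quality bands above can be sketched in Python (the function names are illustrative, not part of any Teams API):

```python
def packet_loss_rate(sent: int, lost: int) -> float:
    """Percentage of packets lost out of those sent in the window."""
    return 100.0 * lost / sent

def classify_loss(loss_pct: float) -> str:
    """Map a loss percentage onto the quality bands from the table."""
    if loss_pct < 3.0:
        return "Good quality"
    if loss_pct <= 7.0:
        return "Noticeable degradation"
    return "Severe impact"
```

Using the example from the text: 1000 packets sent, 50 lost gives `packet_loss_rate(1000, 50)` = 5.0%, which falls in the "Noticeable degradation" band.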

3. Packet Reorder Ratio

Definition:
The proportion of packets that arrive out of order compared to how they were sent.

High network traffic increases reordering probability.

Impact:
Out-of-order packets can be misinterpreted as loss or congestion.
This causes delays and distorted audio.
A common alert threshold: 0.05%.
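One simple way to estimate a reorder ratio from RTP-style sequence numbers is to count packets that arrive after a higher-numbered packet has already been seen (a sketch that ignores sequence-number wraparound):

```python
def reorder_ratio(sequence_numbers: list[int]) -> float:
    """Fraction of packets arriving after a higher sequence number.

    A packet counts as reordered if any earlier arrival carried a
    larger sequence number. Wraparound handling is omitted for brevity.
    """
    max_seen = -1
    reordered = 0
    for seq in sequence_numbers:
        if seq < max_seen:
            reordered += 1
        else:
            max_seen = seq
    return reordered / len(sequence_numbers)
```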


4. Jitter (Packet Inter-Arrival Time)

Definition:
Jitter measures variation in the arrival time of packets. Ideally, packets arrive at regular intervals.

When they do not, a buffer stores packets temporarily for proper playback.

Impact:

  • Low jitter: Smooth conversation.
  • High jitter: Audio distortion; voices may speed up or slow down due to congestion.
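RTP stacks commonly estimate jitter with the smoothed formula from RFC 3550, J += (|D| - J) / 16, where D is the change in transit time between consecutive packets. A minimal sketch over per-packet transit times:

```python
def rfc3550_jitter(transit_times_ms: list[float]) -> float:
    """Smoothed inter-arrival jitter estimate per RFC 3550.

    Each packet's transit time is compared to the previous one; the
    absolute difference feeds a 1/16 exponential smoother.
    """
    jitter = 0.0
    prev = None
    for transit in transit_times_ms:
        if prev is not None:
            delta = abs(transit - prev)
            jitter += (delta - jitter) / 16.0
        prev = transit
    return jitter
```

Perfectly regular arrivals (constant transit time) yield zero jitter; varying transit times push the estimate up.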

5. Ratio Concealed Sample Average (Healed Percentage)

Definition (Mitigating Packet Loss):
When packets are lost, the audio stack conceals ("heals") the gap by synthesizing replacement samples to maintain continuity.

Impact:

  • A high healed percentage means audio loss correction was frequently needed → poor audio quality.
  • Recommended ratio: ≤ 2%. Beyond that, quality degrades quickly.
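A hypothetical check against the 2% recommendation (the names are illustrative; Teams reports this metric rather than exposing such a function):

```python
def concealed_ratio(concealed_samples: int, total_samples: int) -> float:
    """Percentage of audio samples that had to be concealed (healed)."""
    return 100.0 * concealed_samples / total_samples

def healed_within_target(ratio_pct: float, target_pct: float = 2.0) -> bool:
    """True while the healed percentage stays at or below the target."""
    return ratio_pct <= target_pct
```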

6. Estimated Bandwidth — Minimum, Maximum, and Average

While Teams can function under varying conditions, bandwidth directly impacts quality.

Below 100 Kbps, noticeable audio degradation occurs. Video is more susceptible to low bandwidth or packet loss.

Guidelines:

  • Refer to Microsoft’s official documentation for bandwidth requirements by call type and video quality.
  • Implement continuous bandwidth monitoring to maintain consistency.
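Continuous monitoring boils down to tracking minimum, maximum, and average over periodic bandwidth estimates; a sketch (the 100 Kbps audio floor comes from the text above, the function names are illustrative):

```python
def bandwidth_stats(samples_kbps: list[float]) -> dict[str, float]:
    """Minimum, maximum, and average of periodic bandwidth estimates."""
    return {
        "min": min(samples_kbps),
        "max": max(samples_kbps),
        "avg": sum(samples_kbps) / len(samples_kbps),
    }

def audio_at_risk(stats: dict[str, float], floor_kbps: float = 100.0) -> bool:
    """Flag sessions whose minimum estimate dips below the audio floor."""
    return stats["min"] < floor_kbps
```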

Optimizing Networks for Quality and Reliability

Eight key configuration settings ensure reliable Teams performance and consistent user experience:

  1. Quality of Service (QoS): Prioritize media traffic on congested networks.
  2. M365 Traffic Bypassing Proxy: Skip web proxies to reduce latency.
  3. Split Tunneling (VPN): Create direct Microsoft Cloud routes for VPN users.
  4. Open Ports and Protocols: Ensure required signaling and media ports are accessible.
  5. Microsoft Certified Devices: Use certified peripherals for stability.
  6. Local DNS Resolution: Use geo-DNS for efficient routing.
  7. Shortest Path to Internet: Route to the Internet as close as possible to the endpoint.
  8. Antivirus/DLP Exclusions: Exclude Teams processes to avoid scanning delays.
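QoS (item 1) depends on DSCP markings surviving from the endpoint through the network. As a sketch, an application can mark a UDP socket with DSCP EF (46), the class commonly used for real-time audio; whether the mark is honored, or even settable, depends on the operating system and network policy:

```python
import socket

# DSCP Expedited Forwarding (46), shifted into the upper six bits of
# the IP TOS byte. EF is the class commonly used for real-time audio.
DSCP_EF_TOS = 46 << 2  # 0xB8

def make_marked_udp_socket() -> socket.socket:
    """UDP socket marked with DSCP EF.

    Some operating systems ignore or restrict IP_TOS for unprivileged
    processes, so treat this as best-effort.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF_TOS)
    return sock
```

In managed environments, Windows endpoints typically apply such markings through QoS group policy rather than per-application socket options.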

Bandwidth Considerations and User Experience

Bandwidth and Call Quality:
Teams dynamically adapts to available bandwidth. If bandwidth is ample, Teams delivers up to 1080p resolution at 30 FPS.

Variable Bandwidth Impact:
Network limits and high latency can cause lag, reducing user satisfaction.

Recommendation:
Follow Microsoft’s bandwidth guidance based on resolution and frame rate needs.


Assessing User Experience — Rate My Call

Feature Overview:
Rate My Call is a built-in feature prompting users to rate call experiences (every 10 calls or 10% of total calls).

Rating Scale:

  • 1–2: Poor
  • 3–4: Good
  • 5: Excellent

Limitations:

  • Not collected after every call.
  • Users often skip the survey, and dissatisfied users are more likely to respond, biasing results toward negative feedback.
  • Should be used with service metrics correlation for accurate analysis.