Like many institutions, we have manual processes that aim to reduce the causes of, and deal with the consequences of, low-quality lecture captures. However, these processes are very resource intensive, so many issues slip through the net and too many students are presented with low-quality captures. We would like the EchoVideo platform to enhance our ability to detect poor-quality captures so we can take action to mitigate the poor student experience that results from viewing them.

Methods for detecting poor-quality captures could include:

* an empty transcript, suggesting no audio was captured
* AI analysis of audio files to detect common quality issues, such as low volume or high levels of background noise
* AI analysis of video output to detect common quality issues, such as no change in the frames throughout the duration of the capture

By providing a high level of granularity in the potential quality issues reported, customers could choose which ones are most relevant in their context and build reports and automations on top of them. We would also like the data to be accessible programmatically, i.e. via the API, so it can drive automated support processes, and to be provided as early as possible to allow timely interventions.

The implementation could be done iteratively: start with the easy checks, such as flagging empty transcripts, and develop more sophisticated detection tools, such as ML algorithms, over time. I'm sure customers would be willing to provide training data to train machine learning algorithms.
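To illustrate the kind of audio check we have in mind, here is a minimal sketch of a low-volume detector using only the Python standard library. The RMS threshold, the function name, and the 16-bit PCM assumption are all placeholders of ours, not anything EchoVideo provides today; a real check inside the platform would need to be calibrated against actual capture data.

```python
import array
import math
import wave


def looks_too_quiet(wav_path, rms_threshold=500):
    """Return True if the capture's audio RMS falls below a threshold,
    suggesting the microphone was off, muted, or far too quiet.

    Assumes 16-bit PCM WAV input; the threshold value is illustrative
    and would need calibration per room and recording device.
    """
    with wave.open(wav_path, "rb") as wf:
        if wf.getsampwidth() != 2:
            raise ValueError("this sketch assumes 16-bit PCM audio")
        samples = array.array("h", wf.readframes(wf.getnframes()))

    if not samples:
        return True  # no audio frames at all

    # Root-mean-square amplitude over the whole file.
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return rms < rms_threshold
```

A per-file pass like this is cheap enough to run at ingest time, which matters for the "as early as possible" requirement above; more sophisticated checks (background-noise profiling, static-frame detection) could then be layered on iteratively.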