EchoVideo

Locked Overlay Toggle as a Fix to Zooming Media Player
Hi Echo,

Just adding a new Canny to continue the discussion commenced in https://echo360.canny.io/echo360-feature-requests/p/new-video-player-zooming-in-and-out, which is marked as closed.

Whilst we appreciate the implementation of the Lock Overlays Toggle as a quick fix for the zooming media player, we see it more as an interim workaround for what is really an accessibility issue. The Lock Overlays Toggle reverts the student experience to one that we all agreed was poor last year, when the new media player introduced the black padding.

We use embedded media players almost exclusively, as our learning designers recommend it. This approach allows students to view recordings within the context of the rest of the learning content on the web page, without needing to navigate away via a link or open the recording in full screen in a different tab. We need our students to be able to see recordings in the maximum available space on that web page, especially recordings with a lot of fine detail, such as anatomical demonstrations. The black padding makes this an issue.

We now have very little use of in-venue recordings and almost no uploaded slide decks, so that argument is really irrelevant for us as an APAC region customer of Echo360, and I suspect we are not alone.

The zooming media player also does not support accessibility: predictability of a media player is considered critical to meeting WCAG guidelines and ensuring an equitable experience for all users, particularly neurodivergent and low vision users.

Our advice will need to be to use the toggle to revert the media player, but it's disappointing that we will need to make this call.
5 · considering

Fix for VTT transcript sentence fragmentation
When the ASR system generates VTTs, they contain numerous instances where sentences are fragmented across lines. Often the last word of a sentence appears on a separate line with a very short duration (often just milliseconds). It's so short that, when published to the caption track, those one-line captions do not appear on the video. Even when they do appear, having the last word of a sentence on its own is disorienting to read. This happens in videos of every audio quality, from high production value to live lecture.

I'd like to request a feature that addresses the sentence fragmentation issue in VTT files: specifically, a post-processing script or built-in functionality that can automatically merge these short, fragmented segments with the preceding segment. Ideally the script would allow a minimum segment duration to be defined; segments shorter than the threshold (300 milliseconds, for example) should be considered candidates for merging. In tandem with that, the script should examine the text content of short segments: if a segment contains a single word with punctuation, it should be merged. When segments are merged, the end timestamp of the combined segment should become the end timestamp of the original last segment. This script could be a toggle in the transcript editor that generates a new "version" of the transcript, or it could be on by default. (A rough sketch of such a script follows the examples below.)

Here's an example of the end of a sentence breaking onto its own line:

-------
00:45:56.979 --> 00:46:00.969
So usually following major extinction events, you have increases of

00:46:00.969 --> 00:46:01.118
diversity.
-------

And here's how the script should make this cue look after merging:

-------
00:45:56.979 --> 00:46:01.118
So usually following major extinction events, you have increases of diversity.
-------
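To make the request concrete, here's a minimal Python sketch of the kind of post-processing pass we have in mind. It is illustrative only and not based on any EchoVideo internals: the 300 ms threshold, the function names, and the simplified VTT parsing (cue identifiers and settings are dropped, and timestamps are assumed to be in HH:MM:SS.mmm form) are all assumptions.

-------
import re
import sys

# Illustrative threshold from the request: cues shorter than this are
# candidates for merging into the preceding cue.
MIN_DURATION_MS = 300

STAMP = re.compile(r"^(\d{2}:\d{2}:\d{2}\.\d{3}) --> (\d{2}:\d{2}:\d{2}\.\d{3})")

def stamp_to_ms(stamp):
    """Convert an HH:MM:SS.mmm timestamp to integer milliseconds."""
    h, m, rest = stamp.split(":")
    s, ms = rest.split(".")
    return ((int(h) * 60 + int(m)) * 60 + int(s)) * 1000 + int(ms)

def parse_vtt(text):
    """Parse a VTT body into [start, end, caption] cues.
    Skips the WEBVTT header; ignores cue identifiers and settings."""
    cues = []
    for block in text.split("\n\n"):
        lines = [ln.strip() for ln in block.splitlines() if ln.strip()]
        for i, ln in enumerate(lines):
            m = STAMP.match(ln)
            if m:
                caption = " ".join(lines[i + 1:])
                if caption:
                    cues.append([m.group(1), m.group(2), caption])
                break
    return cues

def is_fragment(start, end, caption):
    """A cue is a merge candidate if it is shorter than the threshold,
    or if it is a single word carrying punctuation."""
    too_short = stamp_to_ms(end) - stamp_to_ms(start) < MIN_DURATION_MS
    words = caption.split()
    lone_word = len(words) == 1 and words[0][-1] in ".,!?;:"
    return too_short or lone_word

def merge_fragments(cues):
    """Fold fragment cues into the preceding cue, extending its end time."""
    merged = []
    for start, end, caption in cues:
        if merged and is_fragment(start, end, caption):
            merged[-1][1] = end                 # keep the fragment's end stamp
            merged[-1][2] += " " + caption      # append the fragment's text
        else:
            merged.append([start, end, caption])
    return merged

def render_vtt(cues):
    return "WEBVTT\n\n" + "\n\n".join(
        f"{start} --> {end}\n{caption}" for start, end, caption in cues
    ) + "\n"

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as f:
        print(render_vtt(merge_fragments(parse_vtt(f.read()))), end="")
-------

Run against the first example above, the 149 ms "diversity." cue falls under the threshold and is folded into the preceding cue, producing exactly the merged cue shown in the second example.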
1 · considering
