Coforge Delivers Speech Processing and Harmful-Content Flagging at Scale
Automate the manual, labour-intensive process of reviewing content broadcast on TV and radio using AI (computer vision, speech, and text analysis). Enable content moderation teams to detect regulatory breaches proactively, rather than acting only after a customer complaint is received against a service provider.
Challenges in the Current Process
Reviewing video and audio content to locate and flag harmful material requires significant manual effort.
Achieving high accuracy requires multiple rounds of manual review.
Reviewers must be able to read, write, and speak multiple languages.
Reviewers must watch the entire video or listen to the entire audio recording, which is time-consuming.
Speech analysis is itself an active research area. Coforge's in-house accelerator uses multiple cognitive services to pre-process video and audio data (Urdu, Hindi, and other non-English languages) to detect harmful words. It reduces the manual effort required to analyse video and audio content across the country's 40+ TV channels, 5+ radio channels, and on-demand content.
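The detection step described above can be sketched as a simple keyword-spotting pass over a timestamped transcript. This is a minimal illustration only: it assumes an upstream ASR (speech-to-text) service has already transcribed the audio, and the lexicon and transcript data shown are hypothetical placeholders, not the actual word lists used in the solution.

```python
# Minimal keyword-spotting sketch. Assumes an upstream ASR service has
# produced a timestamped transcript; the lexicon and transcript below
# are illustrative placeholders.

HARMFUL_WORDS = {"badword1", "badword2"}  # placeholder per-language lexicon


def flag_harmful_words(transcript, lexicon):
    """Return (timestamp, word) pairs whose word appears in the lexicon.

    transcript: list of (start_seconds, word) tuples, as emitted by ASR.
    """
    return [(t, w) for t, w in transcript if w.lower() in lexicon]


# Hypothetical ASR output for a short audio clip
transcript = [(0.5, "hello"), (2.1, "badword1"), (3.7, "world")]
print(flag_harmful_words(transcript, HARMFUL_WORDS))  # [(2.1, 'badword1')]
```

The timestamps let a reviewer jump straight to the flagged segment instead of watching or listening to the whole recording.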
The Coforge solution automates the manual, labour-intensive process of content moderation and breach detection. Real-time, high-accuracy detection of breaches reduces what was hours of manual review of video and audio content to a few minutes.
We defined our achievement metrics based on true positives and false positives. Testing showed a minimum of 85% keyword-spotting accuracy across all languages considered during the project. These accuracies were measured not on widely supported languages such as English; the focus was on Asian languages such as Urdu, Bengali, and others.
This solution is in production at a European telecom and communications regulator as an AI-based content moderation solution that detects regulatory breaches by media channels.
Significantly reduced manual review effort.
A 24x7 solution for detecting harmful content.
Automated generation of evidence and content reports for the media analysed.
Transformed the process from reactive to proactive.