As the race for dominance of the self-driving car market intensifies, Tesla's claims about the safety of its self-driving feature have come under scrutiny. The company has been accused of presenting manipulated footage during a recent presentation as evidence for Elon Musk's claim that Tesla cars have a strong safety record on Autopilot. Tesla's lawyers have responded to the accusations by suggesting that the footage could be a deepfake: a manipulated video that uses artificial intelligence and machine learning to replace original footage with fabricated content. Deepfakes have become a popular tool for producing fake videos, images, or audio that are difficult to distinguish from genuine recordings. Concerns around Tesla's self-driving feature come amid an increase in fatal accidents involving self-driving cars across the industry. The controversy highlights the current limitations of artificial intelligence and machine learning when it comes to distinguishing authentic media from manipulated media. As self-driving cars become more prevalent, it is crucial to address these risks and ensure that safety standards are maintained at all times.