Artificial Intelligence (AI) is one of the most prominent technological developments of recent history, and its possibilities seem endless. From self-driving cars, dictation tools, translation apps, predictive analytics, and applicant tracking, to retail tools such as smart shelves and shopping carts, to apps that help people with disabilities, AI has become a powerful component of many high-tech products and services. However, it can also be used for malicious purposes, and the ethical considerations surrounding the use of AI are still in their infancy.
In their book Tools and Weapons, Brad Smith and Carol Ann Browne discuss the need for ethics in AI, and for good reason. Many AI services and products have come under scrutiny because they are detrimental to particular populations, for example through racial and gender bias or through false predictions.
AI-powered voice technology has made it possible for anyone to clone a voice. That is exactly what happened to Bill Gates, whose voice was duplicated by a Facebook engineer, possibly without his consent. Voice clones are already being used for fraud. In 2019, scammers duplicated the voice of a parent company’s chief executive and tricked a subsidiary CEO into transferring a large sum of money. Similar crimes are now occurring using the same technique.
AI-enabled deception isn’t limited to voice cloning. The combination of audio duplication and video manipulation produced what are known as deepfakes. With the help of software, anyone with skill and imagination can create compelling, often difficult-to-authenticate images and videos of someone else. This has become a serious concern for cybersecurity professionals: the technology is open source, available to anyone, and still largely unregulated, making it easy to use for malicious purposes.
Similar to the Bill Gates voice clone demonstration, a political group published a deepfake of Belgian Prime Minister Sophie Wilmès appearing to talk about COVID-19. One area of potential harm associated with deepfakes is the dissemination of false information. Another is their power to sway the opinions of a public that trusts and respects public figures. The people who are cloned can also suffer reputational damage, which can lead to loss of income, lost opportunities, and psychological harm.
A recent article raising awareness of deepfake LinkedIn profiles described deepfake accounts that have succeeded in generating hundreds of LinkedIn connections. This is not surprising, as cybersecurity professionals often trust each other when it comes to security recommendations. Once one of these fake accounts is accepted as a LinkedIn connection, the information on the account could be used by a malicious actor to research the person in order to commit fraud.
Once an account has been added by a few cybersecurity professionals, “social proof” lends credibility to the new connection, making it easier for the fraudulent account to connect with similar people. Future phishing attempts can then be more successful because they mimic real relationships and look harmless. A new LinkedIn connection with common interests, skills, or expertise may ask for recommendations, and those recommendations could give a malicious attacker valuable security insights that can be used against an organization.
As humans, we are socialized to trust and help our friends, family, colleagues, and acquaintances. This is part of our social norms and helps us succeed in life. Scammers know this and take advantage of it by staging scams that abuse those norms. Deepfakes make their work much easier.
These new, advanced cyberattacks are worrisome because what we see and hear is usually accepted as evidence. Even deepfake detectors can be bypassed by someone who knows how, so it is very difficult for most people to distinguish a deepfake from real audio or images. In the past, cybercriminals typically remained hidden and eagerly avoided real human touchpoints. Even in telephone scams (voice phishing), the scammers could not easily build credibility because they were strangers. But with the help of deepfakes, scammers can appear to be friends and colleagues, people we know and trust, and stage social engineering attacks whose motives we have no reason to doubt. Producing a high-quality deepfake is not cheap and requires some skill, yet the use of this technology seems to be increasing: more sophisticated scams usually bring greater rewards to cybercriminals, so the return on the initial investment can be quite high.
We need to ask what deepfake abuse means for society. As humans, we act according to social and cultural norms. At work, we are expected to collaborate with and help our colleagues. But how will our norms and expected behavior change in the face of this new threat? Suddenly, every social interaction may need to be handled with the caution we normally reserve for strangers.
If this heightened vigilance against fraud becomes the norm, how will it hinder collaboration, productivity, and friendship in the workplace and beyond? What happens when these techniques become mainstream, and a carefully staged scam, say a spoofed phone number belonging to a family member combined with a deepfaked voice, leaves us afraid to respond because we cannot be sure the caller is genuine? Will this change the way we interact and connect with others? Whom will we trust? Perhaps only time will tell.