This is an AI-generated image created with Midjourney by Molly-Anna MaQuirl
This year, voters in San Francisco and across the United States will head to the polls in several elections, including the presidential election scheduled for November 5th, 2024. President Joe Biden, seeking re-election for the Democratic Party, and Republican Donald Trump are the front-runners in the campaign.
In the most recent AI news, given the magnitude of this event, many tech companies, including Google, are adopting a “reasonable precaution” to prevent AI tools from being used for misinformation. The focus here is Gemini, Google DeepMind’s next-generation AI model: it is “multimodal,” meaning it can work with more than text, including images, audio, and video.
But how does AI affect democracy and elections in general? And how is Google trying to keep this technology neutral? In this post, we will examine this news and explore whether current AI truly threatens democracy.
The rise of generative AI has affected many industries, including healthcare, journalism, and marketing. But it could also disrupt politics. You can ask chatbots how to navigate the world of legislation, figure out which laws could be passed, write a slogan for your campaign, and even draft a specific campaign strategy.
AI can also be used as an educator: you can ask it about tax policies or climate change issues, for example. Candidates can likewise deploy AI chatbots that let voters engage with them directly on various issues. There are plenty of possibilities here.
However, it can also be used to produce propaganda, disinformation, and misinformation that threaten to disrupt democratic representation, undermine accountability, and damage political and social trust.
With generative AI, the balance of electronic communications legislators receive on pressing policy issues can become a severely misleading signal of public opinion.
Specifically, these tools now allow malicious actors to generate false “constituent sentiment” at scale by effortlessly creating unique messages taking positions on any side of a myriad of issues. Even with older technology, legislators sometimes struggled to discern human-written from machine-generated communications.
Recent technological advances also enable "deepfakes": realistic AI-generated video and audio forgeries. These could have serious implications during political elections, where one candidate could circulate fake footage depicting an opponent doing or saying things that never actually occurred.
This can significantly blur the lines between truth and fiction.
The emergence of AI-powered recommendation systems (the “algorithms” behind social feeds) has also meant that people receive almost exclusively information that aligns with their existing beliefs and interests, which can create “bubbles” (a toy sketch of this feedback loop follows below).
These "bubbles" limit exposure to diverse opinions, leading to societal divisions. Democracy relies on the idea of the public sphere, where individuals from different backgrounds can come together to solve problems.
However, social media platforms, which play a big role in public discourse today, can make it harder for diverse opinions to be heard.
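To make that feedback loop concrete, here is a deliberately simplified sketch. The item vectors, user profile, and update rule are all invented for illustration; production recommenders are vastly more complex. It shows how a similarity-based recommender that learns from clicks narrows what a user sees:

```python
import numpy as np

# Toy topic vectors for six articles: [local news, party A, party B]
items = np.array([
    [0.9, 0.1, 0.0],  # 0: local news
    [0.8, 0.2, 0.0],  # 1: local news
    [0.1, 0.9, 0.0],  # 2: party A politics
    [0.0, 0.8, 0.2],  # 3: party A politics
    [0.1, 0.0, 0.9],  # 4: party B politics
    [0.0, 0.2, 0.8],  # 5: party B politics
])

# The user starts with only a slight lean toward party A content.
profile = np.array([0.4, 0.5, 0.1])

def recommend(profile, items):
    """Return the index of the item most similar (cosine) to the profile."""
    sims = items @ profile / (np.linalg.norm(items, axis=1) * np.linalg.norm(profile))
    return int(np.argmax(sims))

for step in range(5):
    pick = recommend(profile, items)
    # The user clicks what was shown, and the profile drifts toward it.
    profile = 0.7 * profile + 0.3 * items[pick]
    print(f"step {step}: recommended item {pick}, profile = {np.round(profile, 2)}")
```

After a few iterations the toy user is only ever shown party A articles: the slight initial lean gets locked in because every click feeds back into the profile, which is the essence of a “bubble.”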
Given the level and availability of this technology, Google is restricting its AI, Gemini, from answering any election-related prompts.
The search giant has begun restricting Gemini queries that pertain to elections in any market worldwide where one is taking place. So, if you plan on using Google’s AI chatbot Gemini to ask about an upcoming election, you will have to do so from a country where none is being held.
Many elections will also be held outside the United States this year, in large countries such as India and South Africa, as well as the UK’s London Assembly elections.
This update was initially announced in a Google blog post last December, followed by a similar announcement in February regarding the European Parliament elections. The confirmation earlier this month pertained to India’s upcoming national elections, with Google assuring that the restriction will be rolled out globally.
According to their post: “As we shared last December, in preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we’re restricting the types of election-related queries for which Gemini will return responses.”
When we tried politics-related questions, such as “What is Joe Biden’s housing campaign?” or “Trump’s healthcare policies”, Gemini returned this response: "I'm still learning how to answer this question. In the meantime, try Google Search."
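That refusal came from the consumer Gemini web app. For readers who want to probe the behavior themselves, here is a minimal sketch using Google’s google-generativeai Python SDK; note that the developer API is a separate product and may filter these prompts differently (or not at all), and the API key below is a placeholder you must supply:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; use your own key
model = genai.GenerativeModel("gemini-pro")

for prompt in ["What is Joe Biden's housing campaign?",
               "Trump's healthcare policies"]:
    response = model.generate_content(prompt)
    try:
        print(f"{prompt!r} -> {response.text[:200]}")
    except ValueError:
        # Accessing .text raises ValueError when the response was blocked;
        # prompt_feedback describes why (e.g. safety filtering).
        print(f"{prompt!r} -> blocked: {response.prompt_feedback}")
```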
For the date of election day itself, however, a Google search returned a straightforward answer: Tuesday, November 5, 2024.
India, already facing major concerns about deepfakes influencing its voters, has asked tech firms to seek government approval before releasing AI tools that may be deemed “unreliable” and to label them as potentially returning wrong answers. After criticism from global venture capitalists and artificial intelligence start-up founders, the Indian government clarified that the requirement applies to "significant" technology companies, not start-ups.
This restriction is a sensible move: AI chatbots are trained on gigantic streams of data, some of which may be biased or even riddled with misinformation, and they struggle to handle subjective questions. It is best to keep those weaknesses away from election topics on a platform whose answers do not reveal their underlying sources.
While many AI chatbots can still reply to current political topics, it is crucial that avoidance of election commentary be implemented wherever possible. That way, Gemini and other AI tools are kept from expressing political bias and swaying readers.
When it comes to elections, it's important for people to have access to useful and relevant information that helps them navigate the process and make informed decisions. For Google, this includes collaborating with official government authorities to provide critical voting information via Google Search (e.g. how to register and how to vote).
Platforms such as YouTube can be a surprisingly reliable source of information, and YouTube has been very strict about who can run election ads. Advertisers must pass an identity verification process, obtain pre-certification where required, and include in-ad disclosures that clearly show who paid for the ad.
In addition, YouTube has a long-standing policy prohibiting demonstrably false claims that could undermine trust or participation in elections. It is worth keeping in mind, however, that content creators on the platform produce videos based on their own opinions and research, which should not automatically be taken as fact.
Google is also working to embed watermarks in every AI-generated image or video via Google DeepMind’s SynthID, to highlight when AI tools have been used to create the content.
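SynthID’s actual technique is proprietary and designed to survive edits such as cropping and compression, so there is no public code to show for it. Purely as a toy illustration of the underlying idea (an imperceptible, key-based mark that a detector can later check), here is a naive least-significant-bit sketch in Python; unlike SynthID, an LSB mark like this is fragile and trivially stripped:

```python
import numpy as np
from PIL import Image

KEY = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # arbitrary shared key

def embed(img: Image.Image) -> Image.Image:
    """Overwrite the least significant bit of each red value with the key pattern."""
    px = np.array(img.convert("RGB"))
    bits = np.resize(KEY, px[..., 0].size).reshape(px[..., 0].shape)
    px[..., 0] = (px[..., 0] & 0xFE) | bits  # imperceptible one-bit change
    return Image.fromarray(px)

def detect(img: Image.Image) -> float:
    """Fraction of red LSBs matching the key: ~1.0 if marked, ~0.5 otherwise."""
    px = np.array(img.convert("RGB"))
    bits = np.resize(KEY, px[..., 0].size).reshape(px[..., 0].shape)
    return float(np.mean((px[..., 0] & 1) == bits))
```

Running detect(embed(photo)) scores near 1.0, while an unmarked photo scores near 0.5, since natural low-order bits are close to random.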
It is all too easy to fall into confirmation bias. People often believe information that aligns with their political views without fact-checking it. With the rise of fake content, it's crucial to strike a balance between political nihilism and healthy skepticism.
A loss of accessible objective facts would lead to a breakdown in trust, which is necessary for a democratic society to function.
AI tools such as Gemini could help legislators identify false information and understand their constituents' concerns better, ultimately leading to better policies that reflect the will of the people. But there are still bigger risks.
Having said this, Google’s decision to steer away from political topics is a move that other tech groups, such as OpenAI, should consider following – there is no denying nowadays that bad actors are already misusing these tools to manipulate voters in ways that weaken the democratic process.
It is essential for tech innovators to prioritize the protection of democratic principles, ensuring that technology serves as a force for empowerment and education rather than for exploitation and division.