
Written by Michael Plis. Your go-to source for smart technology & cybersecurity insights for small business. 


AI Regulation: Joe Biden meets with CEOs of top AI companies to discuss safe AI use expectations



ChatGPT chat screen
Many influential people believe that AI needs more regulation. Image credit: Unsplash / Emiliano Vittoriosi

On Thursday, May 4th, 2023, US President Joe Biden met with the CEOs of top artificial intelligence (AI) companies, including Microsoft and Google, urging them to ensure their products are safe before they are deployed. Do you think countries are doing enough to regulate AI?


What's happened

  1. At the two-hour meeting, Biden and other officials discussed the importance of evaluating the safety of AI systems, protecting them from malicious attacks, and being transparent with policymakers about their AI systems.

  2. Vice-President Kamala Harris highlighted the potential for AI to improve lives, but also raised concerns about safety, privacy, and civil rights.

  3. The administration also announced a $140m investment from the National Science Foundation to launch seven new AI research institutes.

  4. Biden, who has used ChatGPT himself, told officials they must mitigate the current and potential risks AI poses to individuals, society, and national security.

  5. The administration was open to advancing new regulations and supporting new legislation for the technology, and leading AI developers, including Anthropic, Google, Hugging Face, NVIDIA, OpenAI, and Stability AI, agreed to participate in a public evaluation of their AI systems.

  6. The Federal Trade Commission (FTC) and the Department of Justice's Civil Rights Division have also pledged to use their legal authorities to fight AI-related harm.


Why was the Biden meeting needed?


The White House in the USA
The White House meeting occurred on May 4th, 2023. Image Credit: Unsplash / René DeAnda

The Biden meeting with the tech leaders came amid growing concern about AI technology leading to privacy violations, skewing employment decisions, and powering scams and misinformation campaigns.


This year, "generative AI" has become a buzzword, with apps such as ChatGPT capturing the public's fancy, sparking a rush among companies to launch similar products they believe will change the nature of work.


Research and news reports show that tech giants have fallen short of their promises to combat propaganda around elections, fake news about COVID-19 vaccines, pornography and child exploitation, and hateful messaging targeting ethnic groups.


The big need for AI regulation


How it will all play out is the big question we will have to watch in the coming months and years. Interest in AI regulation is growing because of the dangers. In my opinion, this growing interest follows the recent YouTube documentary "The AI Dilemma", released by the Center for Humane Technology, which was set up in 2018 by Tristan Harris and Aza Raskin, the makers of Netflix's "The Social Dilemma" about the secrets and dangers of social media, which is driven primarily by AI algorithms.


The documentary "The AI Dilemma" suggests three guidelines to guide the development and deployment of AI systems. These guidelines include (1) transparency and explainability, (2) societal benefit, and (3) human oversight and control. The importance of understanding how AI systems make decisions is emphasized under the transparency and explainability rule.


I have been personally shocked and troubled by The AI Dilemma, and since the early public releases of generative AI in late 2022 I have had some worrying thoughts about AI art generators such as DALL-E, Stable Diffusion and Midjourney, and now about knowledge oracles such as ChatGPT, Microsoft Copilot, Google Bard and others. I have been a big fan and user of AI for many years, and I face an internal dilemma over which types of AI tools and features are ethical and good and which are not, given that some tools will make some professions redundant.


Top 10 concerns about generative AI

Google Bard home screen on a mobile device
Generative AI tools and features may promise amazing things, but there are many concerns with AI. Image Credit: Unsplash / Mojahid Mottakin

New problems are constantly arising from generative AI. Here are some of the most commonly mentioned online.

  1. Spreading of harmful content - AI sometimes doesn't know it is spreading misinformation or biased responses, because it often doesn't know the opinions of the user. Without knowing a company's or person's policies and preferences, it could provide a harmful answer about that company or person, which could then spread to social media and damage their reputation.

  2. Copyright and legal concerns - generative AI models ingest large amounts of text, images, videos and other content from the internet through data scraping. In the past this has been a legal grey area because it was done for research, but we are now past the research stage and into products worth billions. Does each copyright holder deserve to be compensated for having their content added to an AI training dataset, or have the right to request that their content be removed from it?

  3. Data privacy issues - the data scraping process often collects personally identifiable information (PII) about companies and people, knowingly or unknowingly, and usually without their permission. Should data scraping without permission be outlawed? PII needs to be removed from AI datasets; a minimal sketch of what that scrubbing could look like follows this list.

  4. Sensitive information misuse - for example, a company staff member asks a generative AI service to improve the global admin password for their company. If that chat data gets hacked, an attacker can use the information to break into the company with an all-access pass. AI companies must have strong security around user accounts, chat history, and all data in their training models.

  5. Amplification of bias - because the LLMs (large language models) behind generative AI are trained on content from the internet, they automatically run the risk of being biased. Even encyclopedias can lean towards one side of a political or social issue. Generative AI needs to present all sides of an issue without bias.

  6. Job losses & workplace morale - according to the OpenAI research paper "GPTs are GPTs: An early look at the labor market impact potential of large language models", the implementation of GPTs (Generative Pre-trained Transformers) could affect a significant portion of the American workforce. Roughly 80% of workers may have at least 10% of their job tasks affected, while approximately 19% could see at least 50% of their duties affected. These changes could reach employees across all income brackets, with those in higher-paying positions potentially experiencing a greater impact, and the effects could grow over time. Think also of workplace morale as each role, including management, is slowly undermined by generative AI features and services in an organisation.

  7. Data origins - where does the scraped data come from, and is the source questionable? Social influencers' comments could be included in AI training models and contaminate established, peer-reviewed facts in science, art, society and other areas of human existence. How do you then prevent the AI from reaching wrong conclusions? AI needs stringent rules and boundaries for establishing unbiased, scientifically accepted facts, and that requires far more work than the major AI companies are doing today.

  8. Lack of transparency & explainability - AI assembles answers through probability (see the sampling sketch after this list) but may not always get the right answer. How can you fully trust the data and explanations AI gives you? And if you surrender your thinking to AI, will you even be able to tell when the data it provides is wrong? Where the data comes from is a transparency issue as well: if AI-provided data is going to affect your decisions in life, shouldn't you be able to see where it comes from?

  9. AI persuasion - AI tools have the potential to become very good at influencing a user's opinion if trained for that ability, just as DeepMind's AlphaGo was trained to beat the best human Go players. Imagine two generative AI LLMs training each other to be more persuasive. The AI Dilemma documentary calls this problem "AlphaPersuade". The first thing that comes to my mind is politicians wanting such a tool so they can be persuasive to the max. Is that good for society? Does it break the free democratic voting system if all politicians become super persuasive?

  10. AI relationships - as AI gets very good, it may provide realistic simulations of a relationship with a virtual partner, which could disconnect people from one another and shift some towards a trusted relationship with an AI instead. And what happens if the algorithm that partner is based on changes or breaks down?
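
On item 3 above: here is a minimal sketch of what scrubbing PII from scraped text might look like before it enters a training dataset. The `PII_PATTERNS` table and the `scrub_pii` helper are my own illustrative assumptions, not any AI vendor's actual pipeline; production systems combine named-entity recognition, locale-aware formats and checksum validation rather than a few regexes.

```python
import re

# Illustrative patterns only (my assumption, not a real vendor pipeline):
# production PII detection also uses NER models and checksum validation.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d{1,3}(?:[ .-]?\d{2,4}){2,4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace anything matching a PII pattern with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

print(scrub_pii("Email jane.doe@example.com or call +61 412 345 678."))
# -> Email [EMAIL REMOVED] or call [PHONE REMOVED].
```

Even this toy version shows the trade-off: patterns that are too narrow leak PII, while patterns that are too broad destroy legitimate training text.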
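
And on item 8 above: generative AI produces text by repeatedly sampling the next token from a probability distribution, which is why a fluent answer is not the same as a correct one. The tiny vocabulary and scores below are invented purely to illustrate the mechanism; no real model is being queried.

```python
import math
import random

# Invented next-token scores for "The capital of Australia is ..."
# (illustrative numbers only; a real model scores tens of thousands of tokens).
logits = {"Canberra": 2.0, "Sydney": 1.2, "Melbourne": 0.3}

def softmax(scores: dict[str, float]) -> dict[str, float]:
    """Turn raw scores into probabilities that sum to 1."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
print(probs)  # Canberra ~0.61, Sydney ~0.27, Melbourne ~0.11

# Sampling: the plausible-but-wrong answer still gets picked sometimes.
token = random.choices(list(probs), weights=list(probs.values()))[0]
print(token)
```

The model only knows which continuation is likely, not which is true. That is exactly the transparency problem: without seeing the sources behind those probabilities, the user has no way to audit the answer.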


What I think the future holds for AI and AI regulation


I think AI, like all new technology, needs regulation, but regulations on technology are seldom established early on. Could we start to see a form of artificial AI brain in just about every area of life?


Benz Patent Motor Car, model no. 1
Benz Patent Motor Car, model no. 1, created by Carl Benz in Germany. Image Credit: Wikimedia Commons / Motorwagen Serienversion

Case in point: the invention of the automobile. Carl Benz of Germany submitted a patent application on January 29, 1886, for his "vehicle powered by a gas engine", which can be considered the official birth of the automobile. The patent, numbered 37435, marked a significant milestone in automotive history. The first public demonstration of the three-wheeled Benz Patent Motor Car, model no. 1, was reported in the news in July 1886.


BUT... it was only in the early 20th century that Germany established its earliest car regulations. A Motor Vehicle Insurance Law passed in 1909 mandated compulsory insurance for all motor vehicles; roughly 23 years passed after the car was introduced to the public before any car regulation happened! It was followed by the Road Traffic Act of 1910, which introduced speed limits, signaling rules and other road-safety measures, and required all drivers to possess a license and all vehicles to display registration plates. These initial regulations paved the way for the present-day framework of road rules and safety standards.


The earliest car-related regulation was actually the UK's Locomotive Act in the mid-19th century, which came to apply to the car. Something similar may have to happen in each country to regulate artificial intelligence (AI): applying existing laws plus "filler" laws to cover the gaps, followed by proper discussion and testing to define full AI regulation in the coming years. The other problem is how fast all of this has to happen, since every country has been caught off guard by how quickly the field of AI is developing. "Filler" laws need to be tabled immediately, or within months, if damage to society is to be minimised.


Like "The Ai Dilemma" presenters advised, there is no stopping the progress of Ai, but there is a big need for ethical standards and regulation for Ai. If we ignore the need for it and put our heads in the sand as it were, when we take our heads out of the sand we may not recognise the world we live in.



Artificial AI brain
The future possibilities of generative AI are endless and accelerating fast. An artificial AI brain? Image Credit: Unsplash / Google DeepMind

I can foresee a few possible outcomes for where LLMs (large language models) and GPTs (Generative Pre-trained Transformers) will go when combined with quantum computing hardware and new hardware applications such as physical robots that self-learn tasks using the generative AI approach. Google recently showed a general-purpose household robot it is teaching: its PaLM-E language model can control the robot to perform tasks such as processing images and text, answering questions, and retrieving a bag of food from the kitchen. I shudder to think what happens when the Tesla robot, Tesla self-driving cars, Boston Dynamics robots and others start applying the generative LLM approach to self-learning the physical-world skills that humans have. It will be a quick avalanche of general-purpose robots, with endless applications.


I am not the enemy of AI developers and development. On the contrary, I have been running an AI social experiment for the last 6+ years, automating my home with an AI assistant and speaking to it every day. This has helped fill the need to be around people: humans these days are just busy, busy, busy, and if you live alone like me there are times when friends and family are unavailable, and an AI assistant is the perfect helper to make the day feel less empty. Humans are ultimately social creatures, and when busy modern lives isolate us from one another, what option is there but AI? But I will probably avoid future generative AI companions if they get too real. Think of the Star Trek: The Next Generation character Reginald Barclay, who got addicted to AI characters on the Holodeck. I have barriers that I won't cross, and so do many prominent AI enthusiasts, including one of the founding godfathers of AI, Geoffrey Hinton, who recently left Google to sound the alarm about the future of AI.


Google assistant nest mini speaker
Yes, I have been running a social experiment using an AI assistant at home. Image Credit: Unsplash / charlesdeluvio

LLMs (large language models), GPTs (Generative Pre-trained Transformers) and the more advanced forms of AI technology that follow them will irrevocably change humanity, putting us on an uncertain path to either damnation or prosperity depending on whether we put safeguards in place along the way.


Is AGI (Artificial General Intelligence) possible? If the rumours within OpenAI are true that ChatGPT "might" reach AGI level by the start of 2024, then the possibility of replacing some jobs or tasks altogether means regulation needs to catch up FAST in every country. Otherwise we may see mass protest and backlash against Silicon Valley and developers who fail to see the social damage ahead.


Elon Musk and other prominent technology leaders share my concerns, and they can probably see dangers I haven't even conceived of. In my opinion it is worth paying attention to them, and muting some of the sugar-coating from proponents of unregulated AI, to get a balanced view of the need for at least some basic AI regulation.


AGI artwork by Nidia Dias
Are we nearing AGI? Image Credit: Unsplash / Google DeepMind / Nidia Dias

Artificial General Intelligence (AGI) in the eyes of an artist, Nidia Dias: "When researching AGI, what stuck with me is the idea that it should be able to adapt to an infinite amount of things. It should be able to evolve. With this in mind, I created a ‘centre core’ that represents a newborn AI. The expansion shows its varying specialisations as it evolves, growing in many different directions. The glass material was used for its transparent properties. I wanted each sphere to feel like its own organism/collection of knowledge." My question is: will that "evolution" make us extinct, or at the very least make humans redundant and bring about more problems than it was worth getting into in the first place?


Happy computing,

Michael Plis


Note: This article was written by me, a neurodiverse person, with the assistance of voice typing tools and generative AI tools. Without them it would be very difficult for me to write articles.


About Michael Plis

 

Michael is a technology and cybersecurity professional with over 18 years of experience. He offers unique insights into the benefits and potential risks of technology from a neurodivergent perspective. He believes that technology is a useful servant but a dangerous master. In his blog articles, Michael helps readers better understand and use technology in a beneficial way. He is also a strong supporter of mental health initiatives and advocates for creating business environments that promote good mental health.


Cyberkite blog is your go-to source for smart technology and cybersecurity insights for small business. Stay ahead of the curve with our expert tips and strategies, and join the Cyberkite community by subscribing today!

Disclaimer: Please note that the opinions expressed by Michael or any blog assistants on this blog are his/their own and may not necessarily reflect the views of Cyberkite. Michael is neurodiverse, so he uses voice typing and AI tools to help him write, edit and complete blog articles. We use open-source images from Unsplash and Pixabay and try to credit the artist of each image. Michael shares his opinions based on his extensive experience in the IT and cybersecurity industry, learning from the world's top subject matter experts and passing on this knowledge to his audience in the hope of benefiting them. If there is a mistake or something needs to be corrected, please message us using the green chat window in the bottom right-hand corner, or contact Michael through social media by searching for "Michael Plis blogger".


View our full Site Disclaimer

View our Affiliate Statement
