
OpenAI ChatGPT: Navigating AI Ethics for a Responsible Future


(Official banner of the OpenAI ChatGPT "Navigating AI Ethics for a Responsible Future" session. Edited by Rolyn May Galvez and Kevien Anthony Cunanan)


With the introduction of ChatGPT in 2022, AI came to be seen as an innovative tool accessible to the public. The computational power of AI tools and their near-instant responses brought enhanced productivity and versatility. Prior to 2022, some of the noteworthy AI tools we had already been using include virtual personal assistants such as Apple's Siri and Google Assistant, and recommendation engines, the algorithms used by companies like Netflix, Lazada, and Shopee to analyze user data and behavior and make personalized recommendations for movies, TV shows, and products.


As artificial intelligence advances at a rapid pace, it is becoming increasingly important to consider the ethical implications of this technology. With that in mind, on April 18, 2023, the Institute of Corporate Directors held a hybrid seminar on the ethics of using artificial intelligence at Discovery Primea in Makati City and via Zoom. The seminar featured Dr. Rafaelita "Fita" M. Aldaba, Undersecretary of the Department of Trade and Industry's Competitiveness and Innovation Group; Dr. Erika Fille Legara, Aboitiz Chair in Data Science at the Asian Institute of Management; Mr. Henry R. Aguda, FICD, Senior Executive Vice President and Chief Technology and Operations Officer of UnionBank of the Philippines; Dr. Benito "Ben" Teehankee, a professor of business and ethics at De La Salle University; Dr. Peter Sy, an Associate Professor in the Department of Philosophy at the University of the Philippines, Diliman; and Dr. Christopher P. Monterola, professor, Aboitiz Chair in Data Science, and Head of the Aboitiz School of Innovation, Technology, and Entrepreneurship at the Asian Institute of Management. The discussion was moderated by Mr. Ricardo N. Jacinto, FICD, Chairman of SBS Philippine Corporation.


In this article, we will explore the four key aspects introduced in the seminar:


(1) The Ethical Problems of AI. The integration of artificial intelligence (AI) into our daily lives highlights the urgent need to address the ethical challenges that arise with its use. These challenges involve bias, privacy, accountability, and transparency. Ethical considerations are crucial to guarantee that AI is employed in a manner that is equitable, just, and responsible. A striking moment occurred during the seminar when Dr. Teehankee disclosed that nearly all the students in his writing class had submitted AI-generated essays, a dilemma that shakes the foundation of the educational system. Another glaring issue is the recent case of employees of a giant technology company feeding information into AI systems without fully realizing the consequences, resulting in the leakage of sensitive company data.


(2) Responsible Development and Use of AI. Ethical considerations for AI have the potential to impact various sectors, including healthcare, finance, and criminal justice. Ethical guidelines and regulations can ensure that AI is used in a way that promotes equity, safety, and well-being for all individuals.

(3) Limitations of AI. In the seminar, Dr. Legara performed a live demonstration showcasing the limitations and inaccuracies of the information ChatGPT displays in response to prompts. Despite the fluency of its responses, the accuracy of the information provided did not even reach 50%. While ChatGPT may answer user prompts confidently, users should always be mindful of the accuracy of the information provided. Furthermore, reports have surfaced of ChatGPT's tendency to give incorrect responses and even to offer references that are unobtainable or outright fabricated.

(4) Future of AI Ethics. To navigate AI ethics for a responsible future, it is imperative to establish clear ethical guidelines and regulations for the development and deployment of AI systems. This requires collaboration between policymakers, industry leaders, and experts in various fields. Additionally, it is crucial to invest in research and development to ensure that AI is designed with ethical considerations in mind.


To conclude, ChatGPT's capabilities are vast, depending on the quality of the prompts and the breadth of information available to it. Nevertheless, it is crucial to take into account its limitations, particularly the ethical issues associated with its use. ChatGPT has a tendency to produce fabricated information, mainly due to its limited sources, and it lacks a definitive way of providing a validated source. Another issue is its impact on education. However, as AI continues to evolve and gains a way of validating information, it may be applied in the workforce to accomplish tasks that are time-consuming but necessary. With ChatGPT's prompt-response system, such tasks may take less than a minute. Even this article, which usually takes hours of work, ChatGPT could produce in seconds. It begs the question: was this article even made with the use of AI?



The following are the questions raised by participants during the event.

OpenAI ChatGPT: Questions and Answers

Question 1

(Question) David Hardoon, Aboitiz Data Innovation Pte Ltd: What is the potential impact when dealing with very complex generative outputs? An example is trip recommendations (Google Maps): imagine everyone asking for the same location and getting the same information and recommendation. Are we recognizing the potential implication of AI giving us all the same recommendations, the same use, and the same perspective?

(Answer) Dr. Benito "Ben" Teehankee: Well, I think the way it was explained, it is a stochastic thing; it will never give you exactly the same answer, so that's the first point. It really has a random element inside, which is actually what causes the hallucinations (inaccurate or fabricated information). But, be that as it may, I think the average of what it says will tend to converge, and you're right, which is why I tell my students the only way to create value is if you don't sound like a chatbot. If all chatbots say the same thing, there's nothing to differentiate individual talent, right? And it will begin to shepherd us into a standard way of thinking. We in universities will really rebel against that very thought, so I think the key here is to understand the underlying engine and to know that it is really just making a statistical prediction based on averages. I suppose one way of thinking about it is like a blender: if you put all those fruits together and blend them, after a while they begin to taste the same, right? But you know that you can add your own personal flavor, and the key is to hold on to our humanity and just use it as a tool, because that is actually what it is. The problem is, it is really under-regulated now. Remember when electricity was invented and power generation started in the 19th century, regulation came less than 3-5 years after power generation was up. For now, software engineers really do not have a professional ethical code, and this is a real crisis; it is up to the managers of businesses to tell their people when they are setting up AI infrastructure within the company. The government should set up a common regulatory framework so we can really get the benefits of this and not all become "carbon copies" of what AI generates. I call it the McDonald's decision of output.
Remember, Jollibee is the local burger chain that competed well against McDonald's. So we can do it; we can be unique, because we are Filipinos after all.
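Dr. Teehankee's point, that a stochastic model "will never give you exactly the same" answer yet converges on average, can be illustrated with a toy sampler. The vocabulary, probabilities, and function below are purely hypothetical: a minimal sketch of temperature-based random sampling, not a description of how ChatGPT actually works.

```python
import random

# Hypothetical next-word distribution for a trip-recommendation prompt.
next_word_probs = {"beach": 0.5, "museum": 0.3, "park": 0.2}

def sample_recommendation(temperature=1.0, seed=None):
    """Sample one recommendation; higher temperature flattens the distribution."""
    rng = random.Random(seed)
    words = list(next_word_probs)
    # Re-weight probabilities by temperature, as samplers in language models do.
    weights = [p ** (1.0 / temperature) for p in next_word_probs.values()]
    return rng.choices(words, weights=weights, k=1)[0]

# Individual calls differ (the "random element inside"), but the shares
# converge: over many samples, roughly half the answers are "beach".
counts = {w: 0 for w in next_word_probs}
for i in range(10_000):
    counts[sample_recommendation(seed=i)] += 1
```

Each call with a different seed may pick a different word, which is why no two users get byte-identical output; yet the long-run proportions settle on the underlying probabilities, which is the averaging-out and convergence Dr. Teehankee describes.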


Question 2

(Question) Ma. Aurora Geotina-Garcia, Mageo Consulting, Inc.: How do you protect intellectual property? How does AI impact intellectual property? Is there still such a thing as something you own?

(Answer) Dr. Peter Sy: There are a lot of current cases launched against Midjourney, for instance; there is an outright violation of their stock photos. But then, you can look at intellectual property through a protection lens versus a management lens, and many of the companies are on the protection side, which I think is rather a reactive way of dealing with intellectual property. The more proactive way is to really manage your intellectual property: despite the violation, you still create value out of those properties and maximize their use for your intended audience. So, I think that's one of the ways. The way forward is to see how much can be appropriated by the creators themselves. Even artists have been complaining about, you know, human art versus machine art, and which can be called beautiful. But there is no competition when you privilege certain creators and certain ways of doing art so that you get the most out of what they do.

(Question) Ma. Aurora Geotina-Garcia: Brands and such could actually be generated from ChatGPT, so they are not really original. It's really going to be a challenge for regulators to determine who owns what, right?

(Answer) Dr. Benito "Ben" Teehankee: Yes, Doctor, which is why I wanted to emphasize what Dr. Erika said earlier. You have to look at the entire value chain of what AI generates, and it starts with the data. So, for companies using AI: you had better check where the data is coming from, otherwise you expose yourself to eventual regulatory risk, because the laws will catch up and after a while it will be illegal to use, as we saw with data privacy, right? For the longest time direct marketers were calling me, even in the 90s, and I would ask them "where did you get my number?", but there was no law, so I couldn't sue them. Now you can be sued, and we have learned from that, so AI regulation will catch up. So, let's make sure that the value chain is clean, that you have the right to use the data, and so on. But you're right: in the short term, the ones who make the properties will be victimized, because the laws will be late in catching up. That's why we're talking about the conscience of ethics; usually something has to happen for the laws to catch up, otherwise it will be too little, too late.


Question 3

(Question) Patrick Parungao, UST Global Inc.: One of the phrases that caught my attention is "AI being a multiplier," and I think right now, not only ChatGPT but AI in general has really been plugged into platforms like social media and the like. These have actually been used to change certain political views, and in a developing country, specifically the Philippines, I think this has a lot of power to sway societies, and we talk about biases and the like. So, from an ethical perspective, maybe for those working in government agencies: how can we actually start building an ethical framework at the LGU or national government level so we can avoid manipulations like the Cambridge Analytica case and how it influenced certain decisions? Because from that point of view, corporations could really use it as a tipping point to push influence all the way down to the grassroots (basic) level.

(Answer) Mr. Henry R. Aguda: To me, they cannot regulate what they do not know. So, the first thing they should do is try to figure this out. Interact with the private sector; there is a lot of experience in the academe and in organizations like MAP and ICD. My advice to government agencies is to learn it if you want to regulate it effectively, especially if you don't yet understand what it is and how it can be used. Now, just to add: the multiplier effect also allows you to be more productive as a company. In our case, we use ChatGPT, mindful of all the do's and don'ts and pitfalls, as a code generator. We ask it to generate code for a certain banking product, and you can even specify whether it should be Java or Python, and it is so neat. We use the base code, customize and modify it, and then run it through our cybersecurity organization. If we had not done that, we would have needed an army of programmers to write the code. I'm partnered with David, so he knows very well how we maximize every data component we gather in our organization, and with an ethical lens we run it through an AI engine, not ChatGPT but another one. But see, the private sector is years ahead of the government in terms of understanding it. So they just have to interface with the LGUs, the national agencies, and government corporations.


Question 4

(Question) Alexander Corpuz, Xurpas Inc.: I think the reason why we still have jobs is that we still need to validate whatever ChatGPT puts out there. I guess I wanted to understand, on the geeky side, whether there is anything stopping a version of ChatGPT from citing the authorities and sources of its information, like a footnote or a book.

(Answer) Dr. Peter Sy: Yes, it already does when you prompt it. You can be specific, but it is still prone to hallucinations; you know, it cites erroneous information. So, I think it is already in a pretty good ballpark. It can already shepherd tasks like generating outlines. For instance, I made a contract for authors based on Delaware law with ChatGPT, and lawyers can check whether it is perfect. So, in that regard, your job is threatened.

(Answer) Dr. Benito "Ben" Teehankee: I think it is important to distinguish large language models, which is what we're talking about now, from the other systems used to correct their limitations. When we speak of an AI, the LLM component is just a small piece of it. By definition, LLMs will always hallucinate, because remember, they are stochastic, they are autoregressive; they just make a prediction based on what they've seen before. But what they're also doing is what we call reinforcement learning, where humans tell it, don't say that, or see it this way, or, in the case of ChatGPT, it will show you a source, which is extremely important. But, just like Erika said, ChatGPT always gets my background incorrect, and it is 90% false, and I've been waiting for it to learn, but it is not true that it learns; the data in ChatGPT ended in September 2021. It learns within the dialogue, but when you restart, it's like it never learned; it's like a child that keeps having amnesia. So there are some notions about it that we need to correct, which is why I like the talks of our two data scientists here, because they're saying that LLMs really have those inherent limitations and we need to augment them with other things, and the most important augmentation is common sense. Let's not project onto it what it does not have, will never have, and cannot have; there are other systems that we can use to complement it and augment it.


Question 5

(Question) Armand Camacho, Independent Management Consultant: In America, banks have been sued because they are using AI-generated results that are biased against minorities. With self-driving cars, there have been a lot of accidents, because the car can't recognize a barrier or a jaywalker. Now, if we start using this in businesses, investing hundreds of millions of dollars, it is scary, isn't it? It is creating a new profession we call algorithmic auditors. In my conversation with Erika, she said the Philippines may take 5-10 years to produce trained algorithmic auditors. In the meantime, what could businesses and the government do to make private businesses feel comfortable with using AI if we don't have these algorithmic auditors to check whether the AI models are correct or the millions of training data points are correct? If you don't have enough algorithmic auditors, what do we do?

(Answer) Dr. Erika Fille Legara: For the meantime, and not to flatter ICD and MAP, this is a good first step. We do need to start talking about this, and we do need experts; we cannot just take anyone's word for it when they say they are a (data) scientist who knows how these things work, because you can now pick up data-scientist skills online, right? I did an interview recently with another data scientist who said that he had implemented neural networks, but when I asked him what an activation function is, he couldn't answer; that is a serious problem. I think a good first step is to have this kind of forum, exchanging best practices while the government is catching up and while we don't yet have algorithmic auditors. Chris and I are actually in that space, and we're helping some companies already. You may not need ethics officers yet, but at least have people or a committee to start thinking about this; that's why ICD and MAP are here.

(Question) Armand Camacho: I attended a presentation from Stanford researchers and professors, and the result of the model was a 60%-70% accuracy rate. Isn't that a little bit low?


(Answer) Dr. Erika Fille Legara: It would depend on what system you're looking into. Say you have 10 different categories to choose from; random is 1/10, right? There is what we call the proportional chance criterion, and we have to do better than that. So, it is really on a case-to-case basis, and that's why we need experts to be able to say that this is a good metric; even among the different measures, someone has to decide that this is the right measure. For example, recidivism, right? That is a classic example: a model looking at a person to determine the sentence sometimes uses a proxy metric such as the propensity that this person would commit a crime again. Is that a good metric to determine a sentence? So, the metric, sir Armand, is one factor, and it also really depends on your use case, like the number of samples or categories you have, before you can ultimately say that a model is good or bad. But it is a very good question. Chris and I have seen many times that even data scientists don't know this rule of thumb for telling a good model from a not-so-good one.
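The proportional chance criterion Dr. Legara mentions can be sketched in a few lines. It gives the baseline accuracy of a classifier that guesses each class in proportion to its frequency; the 1.25x rule of thumb and the class counts below are illustrative assumptions, not figures from the seminar.

```python
def proportional_chance_criterion(class_counts):
    """Baseline accuracy of guessing each class in proportion to its frequency."""
    total = sum(class_counts)
    return sum((c / total) ** 2 for c in class_counts)

# Hypothetical 3-class problem: 70% / 20% / 10% of the samples.
c_pro = proportional_chance_criterion([700, 200, 100])  # 0.49 + 0.04 + 0.01 = 0.54

# A common rule of thumb: a model should beat about 1.25x the criterion.
threshold = 1.25 * c_pro          # 0.675
model_accuracy = 0.65
is_good = model_accuracy > threshold  # 65% does not clear the bar here
```

This is the point of the answer above: a "60%-70% accuracy rate" is only low or high relative to the number of categories and their proportions, so the same score can be impressive in one use case and inadequate in another.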


Question 6

(Question) Kevin Vosotros, Trinity Insurance and Reinsurance Brokers Inc.: How can MSMEs, who don't have deep pockets, take advantage of ChatGPT and AI?

(Answer) Dr. Christopher P. Monterola: Well, ChatGPT is quite cheap. But in general, how can they take advantage of AI is probably the question, and this is exactly the point of the national AI roadmap that Erika and I crafted for the DTI with other stakeholders. Our idea is to have a national AI center, which will be a one-stop shop for MSMEs, whatever AI application they have. We want that AI center to house the best and brightest data scientists that we have in the country, and we will be able to scale up this particular component; in fact, Henry intends to give us millions of dollars to set up this private center. This is the most important component, and I think David also mentioned that we need to invest in regulation for this component. For example, in David's problem, there is a very strong regulatory committee, and it is possible for us to make sure that AI will direct the traffic in a manner that lets commuters minimize their travel time. This is actually one of the things we did when we were in Singapore: helping the regulators, who were using Waze; the government can partner with Waze so that traffic can be minimized.



