Working towards a future where people matter over profit

A conversation with Jean Linis-Dinco

The Road Less Traveled. Exploring less usual careers in human rights - Episode 17


Jingle Laura María Calderón Cuevas
Interviewer Véronique Lerch
Editing Brua | bruapodcasts.com

Transcript


Jean Linis-Dinco 00:02

Two roads diverged in a wood. And I, I took the one less traveled by, and that has made all the difference.

Véronique Lerch 00:18

Our guest today is a trailblazer in artificial intelligence and data governance. Jean is a human rights activist, academic and data scientist from the Philippines. She is currently doing her PhD in cybersecurity at the University of New South Wales Canberra, focusing on the analysis of government propaganda and disinformation in the context of the Rohingya crisis in Myanmar. Welcome, Jean.

Jean Linis-Dinco 00:47

Hello, thank you very much for having me.

Véronique Lerch 00:49

Thank you for joining us. I saw that you blinked when I said trailblazer. So you're not? Well, I have to say, I didn't read your full bio, and you have already been honored for your work in many different ways. But let me just mention this so that people understand why I say trailblazer: last year, you were honored as one of the top 100 women in artificial intelligence ethics globally for your work in the field of technology and human rights. So yeah, I think we can say trailblazer, but we'll see if we can confirm this by the end of the interview. So, Jean, what is always really interesting for the people listening to us is to understand how the people we're talking to got engaged in human rights, how it all started for you. Because, as everything I've said suggests, you're a scientist, you're a researcher, but you're also an activist. So when did this interest and passion for human rights start? And when did you start identifying it as human rights?

Jean Linis-Dinco 02:02

I would actually describe myself as a human rights activist first and foremost, and the other titles are just add-ons, because I was human first before everything else followed. Now, that is a good question, how it actually originated; I've been thinking about that for a while now. Because back home in the Philippines, we did not have a clear understanding of what human rights were, of what actually classifies as human rights. The education gap on what human rights actually are is very big; if you asked people what they are, they would probably say something along the lines of democracy and voting and all those kinds of things. But I grew up in a small town in the Philippines, Maragondon in Cavite, the province just adjacent to Manila. I've seen how people from poor backgrounds are often treated differently and looked down upon by society. Being from a poor family myself, I saw how easy it is to be invisible in people's eyes, and this invisibility often pushes people further towards the margins of society until they fall off the cliff, where the loop of poverty starts. Now, my grandmother was a strong and influential figure in my life. She instilled in me a sense of justice and fairness from a young age. I know this is part of being from a disadvantaged background: she would always tell me that just because someone treats you doesn't mean that you should order the most expensive thing on the menu. Back then I just brushed it off as something old people say, but through time I realized what it actually meant. It taught me the importance of fairness and equity in a society that often treats people differently based on their economic status. It's easy to forget that everyone deserves the same opportunities.
And human rights, of course, are essential, because they ensure that everyone, regardless of their background or social status, is treated with dignity and respect. So I guess that experience growing up and my grandmother's advice have shaped my perspective on human rights, and it has fueled my passion for advocating for marginalized communities, especially those whose voices are often not heard in the big conversations in society.

Véronique Lerch 05:12

Your grandmother sounds like an interesting and powerful character. Very often, with the people we speak to, families do play a role in shaping the community around you and in shaping your values. And I think it's interesting because it is a lot about the values, and as you said, it's about being human first, and then you bring the human rights into it. So when did you start identifying it as human rights? Because you did do a master's in human rights, so you must have heard about human rights along the way.

Jean Linis-Dinco 05:49

So during my bachelor's degree, which was in the Philippines, I knew of human rights, because we were taught to read the Philippine Constitution, but our focus on human rights back home was very political: can you vote, can you participate in a rally or protest? But then you realize that human rights actually go deeper than that. It's more than just political; it also involves the right to have food, the right to clean water and whatnot. And I think that's when I realized that human rights, from my old little bubble, go beyond that: they encompass the human experience. And I would say that the human rights degree, as you've mentioned, made me realize that in one way or another, especially in how it involves the economy, it moves beyond politics and the social and towards the socio-economic side. And that's when I started working towards, I would say, a more progressive approach to it than what I originally learned back home.

Véronique Lerch 07:20

I was wondering whether your understanding of human rights has changed since you got your degree, especially because you're spending so much time now studying artificial intelligence and data governance. Everything we hear about the negative effects of the application of artificial intelligence on our lives, and the negative effect it has on human rights, is linked to capitalism, or let's say, at least to more neoliberal policies. And I think some people would say that human rights haven't been able to limit the effects of those policies and the use of AI. So, did your understanding of human rights change since you started?

Jean Linis-Dinco 08:14

Yes, absolutely. It changed a lot, because back then my understanding of human rights, as I've said, focused too much on identity politics. Identity politics, while instrumental in highlighting the unique struggles of marginalized communities and promoting social justice, has also been co-opted by the capitalist class to divide the working class further. This divide-and-conquer strategy has sowed discord and mistrust among the working class, making it more challenging to unite against the real issues that affect society. Divide and conquer makes the working class hate each other so that we overlook what's really happening around us: the climate is getting more and more awful, big oil industries are burning fossil fuels everywhere. But what's the media talking about? Transgender rights. They're focusing on making people hate trans people these days. You can simply go on Google and type "trans rights" into the search bar, and you'll see how many media organizations, both left and right, have used this topic to further divide us. And we also see how poverty has risen throughout the pandemic while companies are earning record profits. So I would say the human rights degree that I got was probably a stepping stone into me reading more about social political economy, about how it actually affects people beyond race, color and creed, but more especially class.

Véronique Lerch 10:19

Yeah, that's interesting, because you're using the human rights lens to look at AI. So do you feel that it is a useful tool to regulate AI? Is a rights-based approach the way forward? Or are human rights maybe part of the problem, with what you mentioned, which is a little bit this marketized version of human rights? I think Professor Barry Sanders, whom I met recently, was talking about this very individual approach of human rights set against the collective outcomes of AI. So do you feel that it is the right tool to try to regulate the negative effects of AI?

Jean Linis-Dinco 11:11

Well, it's not the best tool, but it's the only tool we have.

Véronique Lerch 11:19

I've heard that before.

Jean Linis-Dinco 11:24

Yeah, but the rise of the internet and digital technologies in the early 90s initially promised the same thing: that it would democratize access to information and resources, effectively flattening hierarchies and empowering the working class worldwide. But this optimistic vision has not actually been fully realized. We've seen the consolidation of power in the hands of the few, with wealthy individuals and corporations reaping significant profits at the expense of ordinary users. This outcome raises important questions about the governance of AI technologies, prompting a comparison between an ethics-based and a human rights-based approach. I am a human rights activist, so of course my bias would be towards human rights. An ethics-based approach to AI governance relies on the development and implementation of moral principles that guide the behavior of AI developers, AI researchers and AI users, so it's more like self-governance: here are the ethical guidelines that we follow, and usually they feel like that's enough. But a human rights-based approach to AI governance emphasizes the protection and promotion of universally recognized rights, such as freedom of expression, non-discrimination, privacy, and freedom of thought. Freedom of thought is not often talked about in the discourse about AI, but you see how misinformation has ruined the foundations of many democracies worldwide; it has actually violated the freedom of thought of many individuals, and freedom of thought, as we both know, is an absolute human right. So the failure of the internet in its early age to fully democratize access and opportunity feels like a cautionary tale for AI governance. As with the internet, AI has the potential to either empower individuals or further concentrate power in the hands of the few.
And I hope this time we make it better, that we learn from our mistakes and work towards a more equitable and just future where the benefits of artificial intelligence are shared by all.

Véronique Lerch 13:58

I have two questions to follow up on this. One is, you know, I listen to you and I get very excited and optimistic. But then I remember the way those companies talk about AI and the tools that they develop. Because so few people understand the way AI really works, it is sometimes difficult to apply the rights-based approach to AI. Sometimes the way we do it is maybe too superficial, because we don't understand the way it's done. And it's easy for those companies to push back using the argument of our ignorance as human rights activists, saying, "Well, you don't really understand", but actually most of the time the information is not public, and some of the algorithms are not public.

Jean Linis-Dinco 15:06

It's a black box.

Véronique Lerch 15:08

Yeah, exactly. So my first question would be: because you have a lot of knowledge of AI, do you feel like you're taken seriously when you enter into conversation with companies, the private sector and the people with power in terms of AI? And how did you build that knowledge of AI and data governance?

Jean Linis-Dinco 15:35

Sorry, are you asking me if I'm being taken seriously by corporates?

Véronique Lerch 15:39

Oh, yeah, both things: whether you're taken seriously, and how you built up all this knowledge of AI.

Jean Linis-Dinco 15:48

That's a very interesting question, because I usually don't; as much as possible, I try to engage only with nonprofits. I do work, for example, for ILGA Asia, on cybersecurity and digital security, and also as a consultant. So as much as possible, I try to limit my work in that sector to where I feel like I'm needed more. I could have applied for a tech job where it pays triple, quadruple the amount, but does it actually serve me? No, it doesn't. At the end of the day, I wake up and I go to sleep asking myself: did I actually contribute something to the world today? Or has my constant desire to be relevant been fulfilled, or whatnot?

Véronique Lerch 17:06

I was wondering, you must be on panels or in discussions with people from the private sector, and then what is the reaction? Because in those settings, when we're trying to lobby, to advocate for this better future, you have to be in touch with them, because they have the power. We can't just ignore them and only work in the nonprofit sector. No, that's great, and I think this point that you're making about communication is extremely important, because we've heard that from other people as well: we can do human rights, we can communicate about human rights, without having to mention them. You have to see what works with your audience. I just want to come back to something you said. My second question, then: you said your hope would be that AI would work for everybody and that the benefits of AI would be shared fairly. So I am wondering whether you could give an example of the way AI could work to advance human rights and benefit everybody.

Jean Linis-Dinco 17:34

Yeah. So I've been in a lot of conversations, panels and discussions which include a lot of people from the private sector, and usually at least there would be people who would understand what I'm talking about. But the good thing is, when you tell them things that are very apparent, but you don't mention human rights and you don't mention capitalism, they actually listen. So I guess it's just finding the right balance in how you communicate. I think I owe it to my bachelor's degree that I'm actually a good communicator, more than anything else, because I could really find a good balance in how to lobby or advocate for something. If you don't mention the name of a certain person, or the book they hated from the 1800s, they wouldn't mind; they'd all believe in it. They all believe in the idea that everyone is equal, but then you mention a word that's been demonized for centuries, and they're like, oh my God, we don't want that, that's not happening in our backyard. So I guess it's about finding the balance in how you communicate without changing your perspectives and what you're actually fighting for, because the best quality of a great leader is that they're able to compromise. Um, so, I've been working in machine learning, so the term AI is a little bit ...

Véronique Lerch 19:25

I know, I just wanted to keep it simple. But you're right.

Jean Linis-Dinco 19:51

Yeah. Okay. So, in the field of machine learning, I did my main thesis on machine learning and conflict forecasting, which is actually what I feel is my contribution to human rights. That's why I did my PhD in cybersecurity and machine learning: because I felt like we've always been talking about conflicts and conflicts, conflict in Asia, in Africa, now in Europe, and then in Latin America. There's always, as Immanuel Kant said, this constant state of man, to be within conflict, and that's real. So that's when I felt like maybe I should do a PhD on this, on actually trying to predict when a conflict would happen. So I did that as my PhD, and that's probably one of my proudest moments: to be able to contribute, one way or another, to how the Rohingya conflict is being analyzed on a global scale. But there are other areas where AI and human rights can actually marry. As I've said, AI has the power to revolutionize the global workforce, but it holds the potential to further exploit and alienate individuals at the same time. And this is where our job as human rights activists exists, in the hows and the whys: how do we make sure that these tools in front of us are not the same tools that would kill us? By harnessing AI for the advancement of workers' rights, we can ensure that automation and other technological advancements actually serve the interests of the poor and the marginalized, rather than the elite class. We can have AI monitor working conditions, detect wage theft, identify discriminatory practices, and help create a more equitable and just workplace for all. And how do we do this?
I hear you ask. We can democratize ownership and control: to ensure that AI does not become a tool of oppression, we must strive to democratize its ownership. By promoting open-source AI technologies, cooperatives and worker-owned enterprises, we can encourage widespread access to AI resources and prevent monopolistic control by the rich. This collective ownership empowers the working class to participate in AI decision-making and benefit from its advancement. Another example I could give is establishing AI-driven worker advocacy platforms. Creating platforms that use AI to advocate for workers can help level the playing field between the rich and the poor. These platforms can serve as what I would call digital hubs for organizing collective bargaining, for unions sharing information about labor rights, and for facilitating communication among workers, particularly now that we're living in a gig economy, where most people don't actually know what labor they're part of, what work they do for these big companies. Through these kinds of advocacy platforms, we can leverage AI to analyze complex labor regulations and translate them into actionable insights for workers, thus empowering them to assert their rights and challenge unjust practices. And it's probably all the more needed because I've been seeing a lot of union-busting news very recently, in many, many countries around the globe. Hopefully, through, let's say, the creation of AI-driven regulation and compliance tools, we would be able to promote the kind of conversation where people can safely join unions without being demonized or punished.

Véronique Lerch 25:04

I think you're giving us a little bit of your hopeful vision of a world where AI is used better, where the impact of AI would have better consequences, especially for workers. But is there anything else in terms of your vision and your hope for the future? Is there one issue in particular where you really feel like, if we got that right, a lot of other things would follow, and this is the one thing I'm hoping for and working towards with the work I am doing?

Jean Linis-Dinco 25:46

I think at this point, I just want to see a future where people matter over profit, where human lives are given more importance than the stock exchange. But of course, as an activist and as a realist, I cannot stress enough that our focus must remain on the present. There is no future until we transform the now, and that's where I live, where I anchor my actions and aspirations. Inch by inch, day by day, the present is our battleground and the place where we construct the very foundation of the future that we desire.

Véronique Lerch 26:28

Oh, I totally agree. I just feel that sometimes, as human rights activists, to bring people with us, we need to share a little bit of a vision of a world where human rights are respected and have advanced; otherwise, we keep talking in a negative way about everything that's not working, and at one point, people find this really annoying. They will find us annoying anyway, but that's okay. Still, I think having a little bit of a hopeful vision is sometimes useful in terms of communication, and you are a communication expert. Now, there are not a lot of women in your field, I would say, and I'm wondering if you have any advice, especially for women, but maybe for anybody who would like to have a career similar to yours, working at the nexus of artificial intelligence and machine learning, and human rights.

Jean Linis-Dinco 27:33

For someone who is keen to work in the field of machine learning, or data governance, or just programming in general: as a human rights graduate, you actually already have every soft skill that the market needs. That is not something that can be trained after one seminar or two; the fact that you already care is one thing, and the rest will just follow. And I understand that pursuing a career in this field can be very challenging, and at the same time daunting and overwhelming. But understand that we cannot solve all the problems in the world. We can maybe solve one, if circumstances allow. And just be nice to yourself.

Véronique Lerch 28:17

Are you nice to yourself? Because, with what we're talking about, it can be quite heavy work and quite depressing. So do you have anything that helps you to remain grounded? A poem, music, an activity?

Jean Linis-Dinco 28:17

Well, it depends on the specific mood. If I feel like I want to be empowered, I listen to Destiny's Child's "Independent Women" or Shania Twain's "Man! I Feel Like a Woman!", or sometimes I dance to the tune of ABBA's "Dancing Queen". But since this is a human rights podcast, I'll probably answer it differently, then.

Véronique Lerch 29:01

You don't have to, if this is what helps you to be grounded. It doesn't have to be human rights; that would be terrible, if even when we're trying to relax, we feel like this is what we need to say. Oh my God. So that answer is perfect. I have a very specific question, because it relates to human rights education. I've seen in your bio that you worked on a project on the gamification of human rights. Do you think that's one of the solutions to increase the participation of children and young people in human rights education? Do you see it as a solution for better human rights education, to make it more accessible and more fun?

Jean Linis-Dinco 29:59

Well, gamification has recently emerged as a popular strategy to enhance engagement and participation in various fields, including human rights education. It is effective in some ways, because it increases the participation of children and young people, but it is important to recognize that gamification is not the only solution. There is no one-size-fits-all approach to human rights education. As a self-declared technologist, I understand the value of technology, but I also acknowledge that the best tool to use is the one that is working. So it's crucial not to gatekeep and to avoid treating technology as the be-all and end-all solution to every challenge. Now, when the colonizers arrived in the Philippines, they taught us that A is for apple, but we had never actually seen apples; apples were not grown in the Philippines back then. And this is where culturally relevant pedagogy comes in. This is where we develop educational materials and curricula that resonate with the students' cultural backgrounds and experiences, because that can help make human rights more accessible and engaging: you incorporate local stories, traditions and customs into the learning process. And that's what we did with the gamification of human rights. It was in the Philippine context, so everything was stories, everything was in the Philippine context, and everything was written in different Philippine languages and whatnot. So I guess the gamification was just a cherry on top, a little bit of a twist. It could have been done without gamification, by actually telling people stories, the fables that are already there; people just didn't know it was human rights.

Véronique Lerch 32:04

Yeah, that's a great reminder, because we do indeed see a lot of people choosing a method or tool before adapting their teaching to the people they are teaching. So that's a great reminder. And maybe, as we're getting close to the end, I was wondering whether you could give one or two more examples of the work you did on AI and machine learning. For instance, I think you worked on the role of machines in rising global transphobia. You mentioned already the attacks against transgender people at the beginning, so maybe that could be a good example, or any other examples of the kind of work you did.

Jean Linis-Dinco 33:05

So one thing I did was that paper that I wrote on facial recognition technology and how it impacts transgender people, particularly those from the Global South. We've seen research into recognition methods that detect queer people based on their photographs or online behavior, which is pretty much born out of the hyper-capitalist society that seeks to micro-target people so that they will buy its products. Tech companies now have access to consumers' inner desires, political leanings, preferences, emotions, likes and dislikes, and therefore it is easier for these consumers to be targeted with ads that replicate their identity and with stories that validate their political leanings. This is not often talked about, but as I said a while ago, this is a violation of our freedom of thought, which is an absolute right. The thought of wanting to detect someone's sexuality or gender based on collected data is creepy enough, but it has a tremendous impact if it ends up being used in countries where homosexuality is criminalized and even punishable by death, or countries where entities can use these technologies to out LGBT people or identify them in random photos they find online, which can have horrifying impacts on their relationship with society: they can be bullied, they can lose their jobs, their livelihoods, their lives. Trans people are often securitized for presenting themselves in a gender that is incongruent with their documents, and once we automate this process, it just makes things worse. And this is not just me being alarmist, because this is already happening in many countries without the use of machine learning: a newspaper in Malaysia, for instance, recently published a story on how to detect gay people based on their physical features.
And for trans people: sometimes they might get lucky and find an airport officer who knows what's going on, but automating this process rarely gives space to those who do not fit the binary cookie cutter, if you could call it that. And this does not happen by chance, because it is often the most powerful who dictate how technology can be deployed to maintain the status quo. And it is only a matter of when, and not if, this new technology comes into the hands of regimes where being queer is punishable by extensive jail time, corporal punishment and sometimes death.

Véronique Lerch 35:48

Yeah, I think it's really eye-opening to talk to you, because when we talk about AI and machine learning, sometimes we get a bit overwhelmed, and we forget to just look at what existed before, and to see that the repression, the capitalist systems and the neoliberal policies are actually just amplified by those tools. I feel we're getting distracted, in some ways, by the technology as such. What do you think? And how can NGOs, the nonprofits, be more prepared to work on those issues? What do they need?

Jean Linis-Dinco 36:40

At this point, I don't think any nonprofit is really prepared. I don't think we all are. But it didn't come out of nowhere; it came through years and years of research. As much as possible, I tell people, nonprofits included: try to use it, because it will help us one way or another, but at the same time, be very cautious when using it, because the data that you're sending is not as private as you thought. And, as I've mentioned a while ago, there is a gap in the connection between AI and human rights, and that gap can be filled by human rights activists who are working in this field. This is even more important now than ever.

Véronique Lerch 37:45

And is it because of this sense of urgency that you decided to focus on this, on that technology, on that aspect? Is there any other reason why you chose that path?

Jean Linis-Dinco 37:58

No, I think I actually chose the path even before the hype. I was already doing work in machine learning and programming before. I've always liked technology, and probably growing up in a house where I was technologically deprived made me feel like I wanted to learn more as an adult. So I tried to pick up one hobby at a time, and then it turned into: oh, this can actually be used for something good, rather than just making some billionaires become trillionaires. That's why I did my PhD on this specific topic: because I feel like it can be harnessed into something more meaningful than just thickening the linings of some people's pockets.

Véronique Lerch 39:01

Well, I think the human rights community is very lucky to have somebody with your integrity and your knowledge to help us see better what we can do, what we still need to learn, and what we can still do to change the way AI is being used. Any final thoughts or comments, or something you want to share with the people listening?

Jean Linis-Dinco 39:31

I guess it's my message to everyone, and it applies to every one of your careers. As you venture into your careers, you will become a little bit removed from the people that you intended to serve; as you become managers of big organizations, intergovernmental or not, you become a little bit far from the people. And that's not bad, because sometimes real work happens behind the UNGA and behind the Human Rights Council. But I hope that in spite of this, you will not change your perspectives in life. I see this often: when people get into C-level positions, they tend to just forget the thing that they actually fought for. Remember that we are closer to poverty and homelessness than we are to being billionaires, and I want people to remember that. Always remember the poor: whatever policy that is, whatever AI tool that is, think of how it actually affects poor people, because oftentimes they are the ones who don't have a seat at the table. They are the ones who cannot voice their concerns, or are silenced deliberately, because people listen to them when it's election time, but other than that, it's just decoration. So, as much as possible, stand up for the little man.

Véronique Lerch 41:21

Thank you so much, Jean. I think we should definitely always remember that. We might get lost sometimes when we start working, but it's good to have this idea that we need to remember who we're working for. So thank you so much, Jean, for this wonderful conversation, and I hope a lot of people are going to listen and take this path.

Jean Linis-Dinco 41:49

Thank you.
 
