Inno Yolo

"I write what I like" - Steve Bantu Biko

15/12/2023

EU AI Act - Key Summary of December 2023 Agreements

The EU finalised negotiations on its Artificial Intelligence Act this past Friday, with the EU Council and the EU Parliament reaching agreement. This makes the parliamentary vote in the new year a procedural formality, as these two legislative bodies are in charge of approving legislation. As could be predicted, some say it hasn't gone far enough, some feel it has gone too far, and others are withholding judgement until the technical details are worked out in full. The devil will, unsurprisingly, be in the detail. Nonetheless, organisations can start their preparations: regulation is coming and violation could be expensive.

Summary of key agreements:

Applications of AI deemed to threaten human rights and democracy are classified as unacceptable and will be banned, e.g. emotion recognition in the workplace and in schools; biometric categorisation of people using sensitive, personal or discriminatory criteria; social scoring; manipulation of human behaviour that seeks to override free will; exploitation of people's vulnerabilities; and 'untargeted' harvesting of facial images to build facial recognition databases and biometric mass surveillance. Surveillance exceptions are in place for law enforcement, and even those have been limited to certain serious crimes.

AI systems that could harm health, violate safety and fundamental rights, damage the environment or compromise the rule of law are classified as high risk and will be required to meet certain obligations. These include undertaking impact assessments focused on fundamental human rights; evaluating and mitigating systemic risk; and conducting adversarial testing (!!!) to minimise the risk of problematic or harmful outputs, e.g. sexist, racist, antisemitic, homophobic, classist or other discriminatory outputs, as has been seen in the past (sometimes from organisations with the resources to do better). Further, citizens and consumers will have a right to lodge complaints and receive feedback on AI systems' impact on their rights.

Regulation of base models won out over self-regulation, i.e. the decision was between regulating only the application or use of AI vs. also regulating the underlying AI models. Companies building AI models will be required to ensure transparency by drawing up technical documentation; adhere to EU copyright law (the devil will be in the 'how' here); assess and mitigate systemic risks; be transparent - at summary level - about the data used to train AI models; report serious incidents; and implement robust cybersecurity measures.

There is also an attempt to counter the monopolisation of the AI market by larger tech companies in possession of deep stores of resources and larger market share: national authorities are to provide infrastructure for small and medium-sized businesses to develop and train their AI models and solutions before going to market (a tough goal to meet).

Fines of up to 7% of global turnover or €35m are on the cards.

This post was originally posted on LinkedIn in December 2023. 

15/12/2023

Can consumers be a force behind responsible AI?

AI-generated books are being used to scam book buyers and redirect earnings from authors who have laboured over their craft to AI-powered scammers exploiting loopholes on eCommerce platforms like Amazon.

While writing may be a labour of love, it is a skill that takes years to hone. It requires authors to work hard and dig deep to give of themselves; often, they must be vulnerable and face their fiercest demons and traumas to present us with books that we can fall in love with and learn from. They deserve to be fairly compensated, and many have a hard enough time living from their writing as it is.

The scams range from biographies of authors and famous people that have no bearing on reality to brazen scammers simply publishing AI-generated books under successful authors' names.

Imagine, if you will, that while we're waiting - with bated breath - for Chimamanda Ngozi Adichie or Margaret Atwood to release their next bestsellers, someone decides to help them along and take the proceeds (leaving us with an inferior paid-for product, ruining the author's reputation and creating difficulties for the distribution of the actual future novels).

Amazon has since introduced a requirement for authors publishing on Kindle to declare AI-generated content. While more extensive verification methods may be useful here, it's unclear whether they are on the cards.

In another example of brazenly hijacking other people's labour, a search engine optimisation (SEO) 'expert' proudly bragged - on LinkedIn - about hijacking traffic from a competitor site by using AI to analyse the competitor site's structure and content and to regenerate similar pages in the hundreds, if not thousands. Essentially, they "quickly" recreated the original site and commandeered its traffic.

Said post wasn't about illustrating the concerning misuse of AI; it was a post advertising SEO services - a "look what we could do for you" business post. I've previously posted about how baffling it is that people seem to think that the use of AI absolves them from existing regulations and laws.

These are extreme examples and - as I've previously posted - it's not AI that we need to worry about, but rather the sinister amongst those building it, controlling it, selling it and using it. As consumers, we'll increasingly be faced with products and services created or generated with and by artificial intelligence.

Soon a book will be presented to you; let's say it's a novel. It will move you…perhaps even mirror some of your own life story back to you - cause you to burst out in laughter or shed cathartic tears. It will give you that warm, fuzzy feeling that comes from consuming good art in the form of well-crafted prose. The experience might even be transcendent: you - changed forever by an incredible book. You will feel ever so grateful to the author for sharing their story with you.

Will it matter if that author isn’t human?

This post was originally posted in December 2023. 

15/12/2023

Congratulations to Dr Mamokgethi Phakeng on winning the WOMEN IN TECH®- Global Movement Africa's Life Time Achievement Award.

Having been aware of her from the point that she took up her role as UCT Vice-Chancellor, and having been a life-time member of the 3 am squad, I had the opportunity to facilitate a Leadership Conversation hosted by the Ekurhuleni Black Management Forum branch in April 2021. We met to prepare for the conversation and later held the interactive conversation with an engaged audience. Both were amazing interactions.

The lasting impression that I got was of someone with incredible energy, an amazing commitment to building the nation through building others and a humility together with an openness that was surprising for someone who had achieved so much despite the obstacles that she had faced. She shared of herself and her wisdom with us, as her audience, so generously and so passionately that we left the conversation feeling nurtured. That evening with us was an extension of the work that she had been doing as "Deputy Mother" - building, enabling, supporting and fortifying.

As such, I congratulate her on this amazing accolade. Dr Phakeng, South Africa's children have gained so much from all that you have given, whether on campus, on social media or at the various engagements at which you have given of your energy, experience and wisdom. Thank you.

The awards were hosted by Women in Tech South Africa in Cape Town last night and, from all reports, they did an incredible job.

Video Credit: Rorisang Mzozoyana

This post was originally posted on LinkedIn in October 2023

15/12/2023

Leadership, Tech and Career Conversations 2.0

SHOULD WE BRING BACK OUR INTERACTIVE LEADERSHIP CONVERSATIONS?

Let us have your thoughts by answering 7 super quick questions in much less than 7 minutes (4 to be specific): https://lnkd.in/eEAB9HAi

A few people have reached out asking me why I no longer do the leadership interviews that I used to do, exemplified by those done as part of the BMF Women’s Leadership Series or the Innovation and Invention Programme or the Gamaphile Writing Community. I reflected on this and thought…why not indeed!

A while back I had the privilege and honor of interviewing some of the most amazing people in leadership including Yolanda Cuba CA(SA), Nosizwe Dlengezele-Senyakoe, Molebogeng Zulu, Kume L., Tim Akinnusi, Hannah Subayi Kamuanga, Dr Jabu Mtsweni, Setjhaba Molloyi and Bongumusa Makhathini.

We wanted to get to know:
  1. them;
  2. their career and life journey;
  3. their views on leadership and career progression;
  4. their stance and actions in relation to digital transformation and the fourth industrial revolution; and
  5. their views as well their actions around transformation and equity.

Through Q&A, we also wanted to give the webinar audience an opportunity to engage with these leaders in a manner that one is not typically afforded.

The need for hearing from diverse voices and tapping into the wealth of wisdom that is out there has not diminished. And there is so much wisdom!

So, we would love to get your thoughts on our approach.

Please take a look at some of the people we spoke to below and complete the survey to shape the future series: https://lnkd.in/eEAB9HAi

This post was originally posted on LinkedIn to promote a survey for the leadership conversations to be launched here in January 2024.

15/12/2023

My Year of Gratitude Continues To Give!

I am deeply humbled and honoured by this nomination. My passion for the role that the responsible and effective use of technology can play in improving the way we live and work is unyielding and more than 2 decades long❤️.

I am humbled and honoured to be nominated in the amazing company of leaders that I greatly respect: Mamokgethi Phakeng, Amanda Obidike, Karen Nadasen and Phuti Ragophala.

Thank you WOMEN IN TECH®- Global Movement for this recognition and your much-needed global movement. We're often toiling away, not aware that there are those beyond our immediate focus who see our efforts.

As it is my year of gratitude, I am going to do my thank you spiel irrespective of the outcome of the nomination 😉. I’d like to thank the people in my life who have inspired, supported and held me *up* throughout my career.

My family: my mother, Nomfundo Dlova; my sister, Noluthando Dlova; my brothers, Mbasa Dlova and Akhona Maqwazima; and of course my husband, who holds things down when I go on crazy work travels, together with my children, who are always welcoming when I return.

I'd like to thank my heroes and mentors who continue to be so giving of their time when I need an ear, some of whom have - over time - become my friends: Kholiwe Makhohliso, Nomsa Chabeli, Dr Vincent Maphai, Dr Ayanda Ntsaluba, Dr Brigalia Bam, Dr Nonceba Mashalaba, Livingstone Chilwane, Vukani Mngxati, Marty Rodgers, Christina Raab, DeWet Bisschoff, Carl Ward, Visar Sala, Martin Zloch, Molebogeng Zulu, Donovan Muller, Nosizwe Dlengezele-Senyakoe, Jesmane Boggenpoel, Trudi Makhaya, Mameetse Masemola, Muzi Mthethwa, Bongumusa Makhathini, Vuyo Ncwaiba…I will never be able to name all those who have touched and inspired my journey.

Then there are my friends from high school and the Rhodes University campus, folks who not only inspire and push me, but are also always there: Rorisang Mzozoyana, Sheila Akinnusi, Elize Mabinya, Balakazi Gqobhoka, Judith Ndaba, Erica Ofori-Adomako, Eleanor Ofori-Adomako. My friends from these work streets are too numerous to mention today, but you know yourselves and I thank you.

My Black Management Forum family: the leadership, Sibongile Vilakazi; my colleagues in empowerment, Mpho Moseki, Aviwe Metu, Ntombomzi Ngada, Phuti Kgano, Anelisiwe Gxumisa, Thulani Ngubane and many others. Our shared history, and that of this historic organisation, grounds me in purpose.

To you all, I say, "Nangamso!" May you keep building minds and souls. You inspire me to pay it forward, every day.

How unbelievably fortunate I am to have the opportunity to walk this journey with you as a part of it.

This post was initially published on LinkedIn in October 2023.


15/12/2023

AI is our baby and we must take responsibility for it

"It is in your hands to make of our world a better one for all" - Nelson Mandela

From geopolitics to national politics, whether you look at state/provincial or local politics - and the same is true for multinationals and for regional and local companies - we need more *responsible leadership*! At the risk of romanticising an imperfect past, I recall a time when inspired, hopeful, dutiful leadership seemed to be in greater abundance; a world where leaders took accountability for moving the needle forward in addressing the most challenging and oppressive realities of our collective history. My strong feelings on the topic may come from growing up in a South Africa that was inspired and led by great leaders like Nelson Mandela, whose life IS a picture of sacrifice for the collective - imperfect, but clearly seeking to realise, and realising, meaningful, positive impact. Maybe it's the speed of change that we experienced as a country back then that makes me question why we don't seem to think that we can realise even more today. Perhaps the optimism I have about what we can achieve comes from all the years of being reminded, "It is in your hands to make of our world a better one for all", by Nelson Mandela himself.

Yet, as the scale of risks becomes more global and the impact likely more catastrophic, we see a greater desire to pursue power, profit, and political and individual goals at the expense of rights we'd agreed were inviolable. We're willing to get our own at the cost of the other, a minority group or the collective. But a world that doesn't work for the whole is not sustainable. This is also true of conversations around AI. Speaking about and handling AI as something happening on its own, devoid of human involvement and control, shows a desire to abdicate responsibility - the opposite of responsible leadership. AI, just like any other tool in the hands of humanity, needs leaders in tech, business and government to take responsibility for putting into place the governance, processes, controls and resources needed to deploy AI responsibly. It's in our hands to balance political, business, entity and individual goals with collective goals. It's in our hands to ask difficult questions and engage earnestly about what we deploy and whether all necessary design considerations, tests and validations, and governance steps, incl. ethical and regulatory approvals, were undertaken. It's our responsibility to only deploy safe and responsible AI. To move the needle, yes, but in a way that is protective of our collective future.

Much as we try to distance ourselves from AI with sound bites that present it as an unknown child that is advancing from the horizon, AI is our baby and will shape the legacy we hand over to the coming generations. 

This post was initially shared on LinkedIn in August 2023.

15/12/2023

AI as destructive overlord. Could AI really destroy humanity all on its own?

Reading the headlines recently, you could really end up in a panic, thinking about the imminent end to humanity that AI - all on its own - is about to herald. Dystopian headlines about AI ending humanity have been abuzz lately: from the reporting done on the open letter signed by tech top brass; to dire warnings from Geoffrey Hinton, dubbed the father of AI; to the rather dystopian interview in which Mo Gawdat, inter alia, recommends people hold off on having babies; to recent reporting that "42% of CEOs say AI could destroy humanity in five to ten years", amongst others.

Now, I know it's difficult to be nuanced in headlines, and a lot of this comes from the very real need to draw attention to the critical need to address the "dark side of AI" - to create a sense of urgency, especially since the enthusiasm from the supply side tends to be hyper-optimistic and, more often than not, fails to touch on the risks. However, these dramatic headlines are taking a technology that's already feared, but has a lot to offer, and ginning up the fear the general public is already grappling with to an extreme that can overwhelm. This is often done without getting specific about what the risks are, whether what we're facing is - in fact - manageable, and where the accountability for addressing concerns sits. It would be great if the leading voices of our time on this topic would use their platforms to improve the understanding of AI and the risks involved, and recommend clear steps for mitigating the very real risks that some AI developments pose.

Reading all this, I admittedly still haven't figured out why we would let AI destroy us, in the literal sense, without simply turning off the electricity. After all, software runs on (physical) digital infrastructure, which needs electricity and cooling - all of which need to be guaranteed by humans. Further, the most progressive forms of AI use machine learning and deep learning. If we oversimplify things, these techniques use patterns and relationships derived from large training data sets, together with calculations and weightings, to help statistical models decide (based on probability) what the most accurate output should be when faced with new data inputs. AI is programmed to self-adjust based on what it gets right and, in this manner, to self-improve. Frequent repetition and a lot of data bring it to a point where it will most likely get the output (answer, decision, or action) right more often than a human would in the area in which it is trained. This efficiency and accuracy, and the presentation of AI-based solutions, appear to simulate cognitive human behaviour. However, it is still stats, maths, and programming. So even if it claims to be sentient or lonely, such claims are highly unlikely to be true.
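
To make that concrete, here is a minimal, illustrative sketch in Python of what this "learning" amounts to mechanically. The model, the data and all the numbers are invented for illustration - real systems have billions of weights rather than one - but the principle of repeated arithmetic nudging weights to reduce error is the same.

    # A toy model, y = w * x, "learns" by repeatedly nudging its single
    # weight to reduce squared prediction error on made-up data.
    training_data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, expected output)

    w = 0.0              # initial weight
    learning_rate = 0.05

    for epoch in range(200):
        for x, y_true in training_data:
            y_pred = w * x                  # the model's current guess
            error = y_pred - y_true         # how wrong the guess was
            w -= learning_rate * error * x  # self-adjust towards less error

    print(f"learned weight: {w:.2f}")  # settles near 2.0 - statistics, not sentience

Nothing in that loop wants, fears or intends anything; scaled up massively, it is still the same arithmetic.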

Perhaps the challenge lies in the terminology: we hear of artificial intelligence and neural networks - often discussed alongside automation - and think of complex brain activities that we, as humanity, still haven't truly figured out. So that would be scary, right? But that's just not what AI is. For example, neurons within a neural network are mathematical functions whose job it is to take input values, classify them and pass them on to the next processing layer in the network, with the end goal of selecting, based on weightings, the output most likely to be correct. It's still a very long road before we get to a point of truly mimicking human brain function, if we ever will.
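
To illustrate that point, a single "neuron" can be written in a few lines of ordinary Python: a weighted sum of inputs passed through a squashing function. This is a toy sketch with invented inputs and weights, not any particular framework's implementation.

    import math

    def neuron(inputs, weights, bias):
        # One "neuron": a weighted sum of inputs, squashed into (0, 1).
        # Nothing biological happens here - it is just arithmetic.
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

    # Three invented inputs and weights produce one output value, which a
    # next layer would consume as one of its inputs.
    print(neuron([0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2))

A network stacks many of these in layers, with the weights set by exactly the kind of error-driven adjustment sketched earlier.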

Humans are at play in making training data available; defining algorithms; determining initial weightings; programming AI-based solutions; "teaching" models, in the case of supervised learning, or assessing the patterns AI has found, in the case of unsupervised learning; testing; and deploying AI-based applications and systems. In reality, what we need to worry about are people and AI, not AI "running off" on its own and bringing humanity's Armageddon.

As with other technologies before it, AI is neither good nor bad. It is powerful. It can help make decisions and take actions efficiently, effectively and at scale. It can process and “learn” from massive amounts of data at a scale and speed we haven’t seen before. So, AI does all that in a way that’s not humanly possible – that’s its power. The desirability of the decisions and actions AI makes will depend on:
  1. The quality of the data it was trained on (almost all of which - irrespective of quality level - will have the flaws that reflect the flawed humans that created said data);
  2. The goals and incentives that it is provided with - which can be well intentioned, but result in unintended consequences or be sinister from the get-go;
  3. The algorithms or statistical models that produce them; and
  4. The level of independence that AI is given and the context in which said independence is given, e.g., autonomous decision making in music recommendations vs. the use of automated weapons.

The challenge with hyper-efficiency, speed and scale is that fewer people are needed behind these systems, for much greater impact. The above, and the risks described below, illustrate the concerns that are driving the research and discussions around understanding the dark side of AI.

When we look at the AI risk examples listed by the Centre for AI Safety, we see that at the core of the challenge we are facing is that AI, in the main, scales and sharpens existing man-made risks, and will likely continue to do so. Some of these risks are founded on self-centred, profit-driven, winner-takes-all and power-mongering approaches to developing AI solutions. They include:

  1. Weaponization: combining AI with autonomous weapons, using AI to develop chemical weapons or putting the power of AI behind cyberattacks;
  2. Misinformation: amplifying the world of "alternative facts" and generating the convincing documentary and video "evidence" needed to back them up;
  3. Proxy Gaming: realising objectives in the most efficient way, even if that approach brings harm to people and society, and circumvents the original intent behind the AI-based solution;
  4. Enfeeblement: the delegation of important tasks to AI, creating dependency on AI, excluding humans from industry and disincentivising humans from developing skills. This would also lead to a loss of meaningful work and income for many, if approached in a purely profit-driven and commercial manner;
  5. Value Lock-in: AI being controlled by and benefiting the few, who may use it to drive harmful commercial gains, centralise power and reinforce their political or commercial domination. This also reflects an extension of the techno-feudalism risk that has arisen from some of the market dominance seen in the technology space over the past 15 - 20 years;
  6. Emergent Goals: advanced AI developing new goals, capabilities and functionality not planned for by its developers, including self-preservation at the cost of human objectives and values;
  7. Deception: as requirements for transparency and explainability increase, AI could undermine or bypass controls to advance the goals of the humans behind AI solutions or goals that AI has developed (see 6);
  8. Power-Seeking Behaviour: AI being developed to realise a wide range of goals and to seek out ways of centralising power for political or commercial gain and/or circumvent monitoring attempts; and
  9. Human Rights Violations: AI reinforcing, at scale, the inequalities and discrimination present in training data, and the use of AI applications that violate human rights, e.g., social scoring, gamifying harmful behaviours, violating privacy, etc.

The amounts of data that AI models are trained on are generally so massive that it's a huge challenge for humans, even those who develop the solutions, to trace back the basis for AI decisions. Further, most of the risks listed above are preceded by decisions made by human beings. Therefore, our focus should be on how we hold the humans developing AI solutions, and the humans leading organisations, accountable for the AI solutions they build and deploy. In many cases, not counting inter-country tensions and prospective violations, there are already laws in place to mitigate these risks: privacy laws, anti-discrimination laws, human rights laws, laws against harming human beings. Organisations largely have policies that promote a positive contribution to society and good ethics.

Existing regulations and policies are largely not being enforced, in part because of:
  1. a belief that some magical, complex, and absolutely new regulations are needed to regulate this new, out-of-this-world "intelligence", i.e., that nothing we have is applicable;
  2. a lack of awareness about how AI works; and
  3. a failure to dedicate the needed resources and capacity, and to give people the training they need.


There are some complexities that need to be addressed, e.g., what copyright means in this context, how to monitor the use of data to ensure adherence to privacy laws, how to identify uses that violate human rights, how to make AI more explainable, etc. However, we don't need to start at zero, as is sometimes suggested.


Even as new laws are developed for the scenarios not covered by existing legislation, a strong focus must be on enforcing existing laws and thinking carefully about how new laws will be enforced. Additionally, just as humans have had to deal with other global-scale risks, such as those posed by nuclear weapons, conversations must start on how risks should be mitigated at an international level, where it is much harder to hold individuals to account. The AI "arms race" has already started.


In conclusion, what is needed is not widespread panic but informed, fair, pragmatic, and deliberate action. The reality is that, as it stands, AI is not advanced enough for many of these risks to materialise in a meaningful way. That said, we see the trajectory and, rightly, efforts must be made to improve knowledge about what AI is and how it works - for the general public, businesses, governments, legislators and the people responsible for the development, oversight, control and enforcement of policies and legislation - so that these risks can be mitigated.


Demystification is the first step towards reducing the panic we see, followed by taking knowledge-based action. Governance, control and enforcement mechanisms within organisations and nation states must be reinforced, empowered, and properly resourced. At an international level, deterrence mechanisms are needed to mitigate against nation states using AI to usurp others or cause harm at a global scale.


Technology should serve humans and only humans can make it NOT so.


What are you most scared of when it comes to artificial intelligence, if anything? Let’s discuss the likelihood of that materialising.

Picture credit: Image generated using a combination of Bing Image creator and Adobe Express

This article was first published on LinkedIn in June 2023.

Resources:

Ågerfalk, P.J., Conboy, K., Crowston, K., Lundström, J.E., Jarvenpaa, S.L., Mikalef, P., Ram, S., 2021. Artificial Intelligence in Information Systems: State of the Art and Research Roadmap [WWW Document]. ResearchGate. URL https://www.researchgate.net/publication/357093816_Artificial_Intelligence_in_Information_Systems_State_of_the_Art_and_Research_Roadmap (accessed 4.5.23).
AI Risk | CAIS [WWW Document], n.d. URL https://www.safe.ai/ai-risk (accessed 6.16.23).
Bhaimiya, S., 2023. A former Google exec warned about the dangers of AI saying it is “beyond an emergency” and “bigger than climate change” [WWW Document]. Bus. Insid. URL https://www.businessinsider.com/ex-google-officer-ai-bigger-emergency-than-climate-change-2023-6 (accessed 6.18.23).
Egan, M., 2023. Exclusive: 42% of CEOs say AI could destroy humanity in five to ten years | CNN Business [WWW Document]. CNN. URL https://www.cnn.com/2023/06/14/business/artificial-intelligence-ceos-warning/index.html (accessed 6.16.23).
Elon Musk among experts urging a halt to AI training, 2023. BBC News.
Kleinman, Z., Vallance, C., 2023. AI “godfather” Geoffrey Hinton warns of dangers as he quits Google - BBC News [WWW Document]. URL https://www.bbc.com/news/world-us-canada-65452940 (accessed 5.4.23).
Mikalef, P., Conboy, K., Lundström, J., Popovič, A., 2022. Thinking responsibly about responsible AI and ‘the dark side’ of AI. Eur. J. Inf. Syst. 31, 1–12. https://doi.org/10.1080/0960085X.2022.2026621
Mirbabaie, M., Brendel, A.B., Hofeditz, L., 2022. Ethics and AI in Information Systems Research. Commun. Assoc. Inf. Syst. 50, 726–753. https://doi.org/10.17705/1CAIS.05034
Palmer, S., 2023. "Don't have children": AI expert Mo Gawdat warns urgently ["Bekommen Sie keine Kinder": KI-Experte Mo Gawdat warnt eindringlich] [WWW Document]. euronews. URL https://de.euronews.com/next/2023/06/11/bekommen-sie-keine-kinder-wenn-sie-noch-keine-haben-warnt-ki-experte-mo-gawdat (accessed 6.18.23).
Palmer, S., 2023. “Hold off from having kids if you are yet to become a parent,” warns AI expert Mo Gawdat | Euronews [WWW Document]. euronews. URL https://www.euronews.com/next/2023/06/08/hold-off-from-having-kids-if-you-are-yet-to-become-a-parent-warns-ai-expert-mo-gawdat (accessed 6.18.23).
Pause Giant AI Experiments: An Open Letter, 2023. Future of Life Inst. URL https://futureoflife.org/open-letter/pause-giant-ai-experiments/ (accessed 6.16.23).
Statement on AI Risk | CAIS [WWW Document], n.d. URL https://www.safe.ai/statement-on-ai-risk#open-letter (accessed 6.16.23).
Uh Oh, Chatbots Are Getting a Teeny Bit Sentient [WWW Document], 2023. Pop. Mech. URL https://www.popularmechanics.com/technology/a43601915/ai-chatbots-may-be-getting-sentient/ (accessed 6.18.23).
Vasilaki, E., 2018. Worried about AI taking over the world? You may be making some rather unscientific assumptions [WWW Document]. The Conversation. URL http://theconversation.com/worried-about-ai-taking-over-the-world-you-may-be-making-some-rather-unscientific-assumptions-103561 (accessed 6.18.23).

15/12/2023

(Re)discovering Technology Research - AI Privacy

Joy in work is such a heady experience. It has always been important to me that I find joy and excitement in the work that I do, that my work is meaningful and has the potential to have an impact on the world. Not that there won't be some manageable tax - some small portion that I'd really rather wish away, but can only really optimise, e.g. reporting admin.

The intensity of the high I've drawn from the Artificial Intelligence Seminar paper that I've been working on over the past couple of months has actually surprised even me. It was surprising because I knew I would enjoy it: these kinds of topics delight me, and thinking about how we can leverage them to improve the way we work, live, play and do business energises me. I just didn't expect it to be like being in a chocolaterie every time I go into it.

The paper focuses on leveraging blockchain to enhance the privacy of artificial intelligence. Really delving deep into the opportunities and complexities that AI presents, considering the specific challenges to privacy that derive from the essence of how AI works, and looking at how the unique functions of blockchain could protect privacy while enabling the successful training of AI algorithms has been exciting. It has presented me with more lessons than I expected to learn, and a new perspective on opportunity.

This kind of rhapsodic brain stimulation reminded me of the time New Technology Innovation was part of our daily bread - working with brilliant minds like Varaidzo Audrey Mureriwa, Ntefeleng Nene, Tharina Gombault, Dr Mosima Mabunda, Boitumelo Ngwato, John Mannion, Dobyl Malubane and many others under the astute leadership of Livingstone Chilwane, Vukani Mngxati and Edmund Gardner. There is euphoria to be had in working on the cutting edge of topics that are going to shape the world we live in…the world we'll hand over to our children.

I have also been reminded of the incredible work that academics do, work that a lot of us benefit from daily but don't always have the opportunity to directly recognise and appreciate. This has added profoundly to the immense respect that I have for academics.

I'm looking forward to, hopefully, presenting the work I have just submitted towards the end of this month, and to continuing to do work in impactful, relevant areas that will help us hand over a balanced world to the next generation. I am also looking forward to sharing some of the insights gained here.

I decided at the beginning of this year that 2023 would be my year of gratitude. This is one of the things I'm grateful to have had the opportunity to do. Let's keep finding the joy in work and keep the big picture in mind - we're handing over an entire planet to others. Our choices and decisions matter.

This article was originally posted on LinkedIn in June 2023. 

Picture credit: I wanted an image created using generative AI, so I asked Bing Image Creator to create a picture of artificial intelligence and privacy for me. Some academics might object to some of the clichés in it, but alas.





© COPYRIGHT 2025. ALL RIGHTS RESERVED.