Image generated using Bing Copilot Designer

On 10 June 2024, Apple announced that it is bringing Apple Intelligence directly to our Apple devices. Apple Intelligence is tipped to leverage generative models to bring a smart, context-aware personal assistant to every Apple device. Now, you might be asking at this point - isn't that a description of Siri? If you use Siri, however, you'll know that the difference between Siri and Apple Intelligence is probably the difference between a letter of intent in the business world and an actual binding contract (maybe that's an expression of hope on my part regarding the latter). Siri has trailed behind other voice assistants like Alexa and the Google Assistant. The attempt to address the shortcomings of Siri might actually be the exact reason why the announcement of Apple Intelligence has ruffled some feathers.
Apple has in the past stood as a stellar example that you can respect users' privacy and at the same time be one of the most successful companies in the world (assessed through a commercial lens). The pressure on Apple has been intense for some time though, most recently when Apple lost its spot as the world's second most valuable company to Nvidia after Nvidia added US$150 billion to its market cap. Nvidia's gains in the past year have been attributed to the explosion in demand for its artificial intelligence chips resulting from the generative AI boom. Indeed, you'll find that there's hardly anywhere you "go" in the digital universe without being offered some new generative AI feature or being exposed to the products of those tools (case in point: the above image). The market has been eagerly waiting on Apple to respond, perhaps more eagerly than Apple users (at least in certain quarters). Indeed, on 13 June, three days after the Apple Intelligence launch, Apple rose past Nvidia and displaced Microsoft as the most valuable company in the world. The battle of the tech titans continued throughout June with these three companies trading places - most recently, starting with the most valuable, Microsoft, Apple, Nvidia, Google and Amazon occupy the top five spots.

So why has Apple's launch of Apple Intelligence been so controversial, and what do data privacy and AI ethics have to do with it? Most of Apple's products are privacy-respecting because Apple deploys its machine learning and generative artificial intelligence models locally on devices, using a federated learning approach combined with trained-model sharing so that user data stays under the control of users, rather than transferring user data to Apple's servers for on-server processing as is commonplace with other technology providers. Brrrr…say what now? Ok, let's take a step back…

If you're a smartphone user, you probably take photos regularly. Irrespective of which smartphone you use, you regularly get specially curated videos showing you memories and special moments, e.g. from a recent holiday or a family dinner, complete with background music that speaks to that video. These videos are compiled by artificial intelligence models that are trained to:
*assuming you haven't provided this information in some form or another.

With other tech giants, the artificial intelligence model is deployed and managed centrally on servers owned and operated by that company. As a result, for your pictures to be used by the model on those servers, your data must be transferred onto those servers for centralised processing. What's made Apple different so far is that instead of bringing your data to a central point, they've deployed their model onto your device, thus removing the need to transfer your data to centralised servers for processing. This eliminates the data privacy and security risks that arise from moving and storing user data on their servers. So, they've been able to provide you with the same special-moments videos while respecting your privacy and leaving your data less susceptible to breach.

How they do this is as follows (we'll get detailed and then lift it up again later): Apple designs, trains (centrally) and then deploys machine learning and generative models (read: artificial intelligence models) onto your device (locally), where they leverage your data on-device to generate the AI model outputs you see, e.g. the family-moments videos created and presented to you in the gallery app. This happens without Apple moving your data from your device elsewhere for processing, i.e. local or on-device processing. Key here is that your data remains on your device and therefore stays private. Apple doesn't try to take ownership of it or use it to create a profile of you for marketing purposes; they also do not store it for later use. The locally deployed AI models learn and improve from the outputs they generate, producing model improvement values. Only these improvement values are sent back to Apple, where the values emanating from various devices are aggregated and used to improve the applicable AI model centrally. The improved AI model is then redeployed. This represents a continuous deployment, learning and improvement cycle that allows AI models to learn and improve while user data is protected.

Another reason why Apple has been a beacon of hope around respecting user privacy relates to the many different functions and features focused on ensuring that you can choose to enhance your privacy and mitigate against getting tracked by websites throughout the web. These range from the usual cookie-blocking functionality to Apple's Private Relay. Just yesterday, I was informed that an app from a certain company with a firm position among the world's ten most valuable companies was trying to collect data from other apps, and I was asked to decide whether I wanted to permit this, which I didn't and still don't. Apple users have come to trust that, although Apple may be imperfect, the company strives for a world where users are not the product and privacy is a given, not something we as users have to buy back. This is not the case with many technology giants. In fact, had it not been for legislation like the GDPR, the Digital Services Act and now the EU AI Act, we'd probably be ill-informed about the number of vendors our data and behavioural patterns are sent to when we decide to use a website and accept its cookies. The same goes for many apps.

Back to Apple Intelligence.
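But before that - for the technically curious - here is a minimal sketch of the learn-locally, aggregate-centrally cycle described above. To be clear, this is an illustrative toy in the spirit of federated averaging, not Apple's actual implementation; every name, size and number in it is a hypothetical choice.

```python
# Toy federated averaging: each "device" improves a shared model on its
# own private data and sends back only the improvement values (weight
# deltas); the server aggregates the deltas and redeploys the model.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])          # hypothetical "ground truth"

def make_device_data(n=20):
    """Private (X, y) pairs that never leave the device."""
    X = rng.normal(size=(n, 3))
    return X, X @ true_w + 0.1 * rng.normal(size=n)

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """On-device training (linear model, gradient descent); returns
    only the weight deltas, never the raw data."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)    # mean-squared-error gradient
        w -= lr * grad
    return w - global_w

devices = [make_device_data() for _ in range(5)]
global_w = np.zeros(3)
for _ in range(10):                               # continuous improvement cycle
    deltas = [local_update(global_w, X, y) for X, y in devices]
    global_w += np.mean(deltas, axis=0)           # central aggregation
    # the improved model would now be redeployed to every device

print(global_w)   # approaches true_w without any (X, y) leaving a "device"
```

The property the sketch illustrates is the one described above: the central server only ever sees averaged improvement values, while the raw data (here, the `(X, y)` pairs standing in for your photos) stays on the device.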
What’s got people irate is that a company that has built this level of trust and credibility around privacy and ethics in its handling of user data is now joining the AI rush, which involves using people’s work and data without their express permission and without engaging the topic of compensation. To be fair, Apple has taken pains to highlight that they aim to respect data privacy regulations and that they’re still aligning to the principles they’re known for. However, it’s worth highlighting that:
Alas, at the end of the day, Apple Intelligence has been launched and has received a positive market response. The horse has bolted. Apple has also tried to walk the tightrope between commercial and ethical interests. While in some areas there are real concerns, in other areas they have sound technical plans for continuing to protect their users. Time will tell just how effective said plans will be, but they've gone much further than any other technology giant has. That said, these developments are definitely something to watch, to see whether Apple can scale its artificial intelligence offerings without being seduced by the commercial gains enjoyed by the competitors it once stood on the bastion of privacy to compete against. After all, for some of us, the security of our data and the commitment to data privacy have been why we've stayed within the Apple universe despite Siri's not-so-fabulous answers. Organisations need to respond to market forces, but they must take care not to lose track of what the customers who pay for their products and drive actual returns to shareholders expect from them.
19/6/2024

6 Tips for Detecting Deepfakes and Mitigating against AI-Powered Disinformation

In a year where 2 billion people will have voted, it's never been more important to mitigate against being influenced through disinformation campaigns.

Over the past few years we've heard a lot about alternative facts. It's been a while since we lived in a world where we drew our news from the same 5 newspapers and 4 TV channels (2 of which were only available for 12 hours a day and one was a premium subscriber-only channel). We now live in a world where entertainment channels can call themselves news channels and get away with it. News channels are plentiful, run 24/7 and some have an openly declared agenda. Some "news sources" don't consider themselves accountable to either industry bodies or the public. Our world is divided. We can't agree on what counts as a reliable source of information to support our views and standpoints, so we regularly end up in a stand-off, distrusting each other's sources. Ours is a world where we sometimes mock what some would reasonably refer to as credible scientific sources and fact checkers. Academics and scientists are in some quarters considered part of a great number of large conspiracies relating to some of the most important topics of our time, and every erroneous study or influenced scientist is waved around as evidence that the "whole lot can't be trusted". Our current context is one where harmful deepfakes can thrive.

What's a deepfake? When social media launched, we celebrated the democratisation of information and especially that of news media. Anyone could now be a journalist and report news live. A decade to a decade and a half later, we're getting our news from a myriad of sources and channels (including individuals in social media groups), we're being fed news in line with our previous clicks, and we are solidly anchored in our own bubbles. The more we show interest in certain topics or discussions, the deeper we get pulled into the bubble because, "This might interest you". That would probably be bad enough, but we're also being bombarded with disinformation and, dare I say it, the real "fake news". The above information-bubble trend has only been made more challenging by massive improvements in the technologies that allow the creation of deepfake videos, images and voice recordings. We already know that every technology is a double-edged sword - wielded for good or bad depending on whose hands it lands in.

Why are deepfakes so effective? The question may arise: why is anybody working on technology that can be used to manipulate and deceive people? How did we get here, and why don't we just legislate all deepfakes away? Deepfake video generation, or AI-generated content, allows for the generation of media content using AI at a monetary cost that is marginal compared to what it would cost to produce the same media using traditional means. Additionally, AI-generated content can be created at speeds and at a scale that simply cannot be achieved using traditional media creation methods. There are also examples of deepfakes or AI-generated content being used for good. This means that deepfakes are here to stay. So it's imperative that we're equipped with tactics for identifying deepfakes and protecting ourselves against disinformation.
It's important to make clear that there are critical ethical discussions we need to have around the implications of these efficiencies for humanity, and around how commercial interest should be treated in cases where there is plenty of room for abuse, harm or the massive displacement of human labour.

Why deepfakes? What we do know is that bad actors are leveraging this technology to deceive and manipulate, with the most common examples relating to politics, social engineering schemes and financial fraud. These actors do not concern themselves with responsibility and accountability in their use of technology. They also do not prioritise humanity or sustainability. Irrespective of the good work and good intent behind initiatives such as the Content Authenticity Initiative, bad actors don't feel bound by the same rules that you and I might feel bound by, and they don't care to build a healthy and well-functioning society and planet in the way that you and I might. So it is imperative that we arm ourselves against deepfakes and other forms of disinformation, because AI-generated content may still leave room for improvement, but it's effective and here to stay.
Detecting Deepfakes: Fake videos and images have been an issue for some time, with applications like Photoshop making it possible to create images showing people in locations they have never been to, doing things they've never done, with people they've never met. You might recall the scandal earlier this year relating to the member of a famous family who had released a photo that never was to the media. Much more difficult to fake without detection, in the past, have been videos and voice recordings. However, the quality of both deepfake videos and fake voice recordings has improved dramatically.

We've seen the impact of deepfakes in the political arena. Late last year a deepfake video of Olaf Scholz, the current German Chancellor (German head of government), caused waves. While the stance taken by the fake Olaf Scholz - speaking out against the striking rise of right-wing extremism in Germany - may have resonated with many, the video wasn't real and some of the statements made by the fake Olaf Scholz may be constitutionally questionable. Another deepfake caused waves around the United States primary elections last year. In that case, a voice eerily mirroring that of Joe Biden, the current president of the United States, was used to robocall thousands of voters who were more likely to vote for his party and discourage them from doing so. So deepfakes are not only coming at us through our social media channels, where the companies behind the most successful platforms seem to avoid the responsible path; deepfakes will come at us via every single channel we use to consume media and information. So, when politicians promise to provide us with evidence of vote rigging in the form of videos and recordings - as is currently the case following the South African elections - we need to learn to take a beat (pause), take a step back and do what we can to verify the veracity of the materials presented to us. Detecting deepfakes generated with the help of AI is hard. However, the reality of lifelong learning is that it is lifelong, so we need to keep on learning. That said, there are steps you can take to minimise the risk of being taken in:
There’s a lot of technology work focused on detecting deepfakes. While we’re not where we want to be yet, tools such as those described in the Techopedia article referenced below can be very useful. We can also be hopeful when we look at the amount of collaboration seen from many different companies and organisations that have joined initiatives focused on addressing the issues around deep fakes, e.g. the content authenticity initiative as well as the coalition for content provenance and authenticity. However, we need to train ourselves - using the above tips - to more frequently stop and verify the content and media presented to us. It is harder than consuming media passively, but we need to increase our current dosage of healthy skepticism because the sustainability of our world literally depends on it. Every news media house or organisation that deals with information, especially those trusted by the public as a source of credible and verified information, should have a team that combines their expertise on deepfakes and the available technology set a razor-sharp focus on ensuring that they mitigate against the dark side of deep fakes or AI-generated content. Resources: 2024 is a record year for elections. Here’s what you need to know: https://www.weforum.org/agenda/2023/12/2024-elections-around-world/ The Content Authenticity Initiative: https://contentauthenticity.org/ The Coalition for Content Provenance and Authenticity: https://c2pa.org Deepfakes: Ist das echt? https://www.bundesregierung.de/breg-de/schwerpunkte/umgang-mit-desinformation/deep-fakes-1876736 Deepfake-Scholz verkündet AfD-Verbot: https://www.zdf.de/nachrichten/politik/aktion-gefaengnis-afd-verbot-100.html Scholz-Deepfake: Sind KI-Fälschungen verboten? https://www.br.de/nachrichten/netzwelt/scholz-deepfake-sind-ki-faelschungen-verboten,TwzZ6nE Meta Oversight Board Warns of ‘Incoherent’ Rules After Fake Biden Video: https://time.com/6686574/meta-oversight-board-biden-video/ The Biden Deepfake Robocall Is Only the Beginning: www.wired.com/story/biden-robocall-deepfake-danger/ Deepfakes are being used for good – here’s how: https://theconversation.com/deepfakes-are-being-used-for-good-heres-how-193170 'Deepfake is the future of content creation‘: https://www.bbc.com/news/business-56278411 Synthesis AI Video Generator: https://www.synthesia.io 7 Best AI Deepfake Detector Tools For 2024: https://www.techopedia.com/best-ai-deepfake-detectors 30/1/2024 0 Comments 10 Career Strengthening Insights from Yolanda Cuba, Group Regional Vice President at MTNThis Thursday, 01 February 2024, we'll be kicking off the Inno Yolo Leadership, Tech and Career Conversations Series 2.0. It's inspired by a leadership series I helped conceptualise and facilitate while a leader at the Black Management Forum in Gauteng. Credit to Mpho Moseki, then BMF Gauteng Chairperson, for giving that series immediate backing as well as Anelisiwe Gxumisa, Phuti Kgano, Thulani Ngubane and Nhlanhla Simelane for being the force behind its improvement and execution. To honor the previous conversation series, here are some career strengthening insights from a conversation we had in August 2020 with Yolanda Cuba, current Group Regional Vice President at MTN (the largest mobile network operator in Africa, and one of the largest in the world ). At the time, she held the role of Group Chief Digital and Fintech Officer at MTN. They have and will continue to stand the test of time. 
Having experience in both, I can assure you that these recommendations apply to both entrepreneurs and career professionals.

Career Fortifying Insight 1: Your environment matters. Yolanda credits her trailblazing career path - she was made Deputy CEO in her mid-twenties - to more than her competence and abilities. She believes it resulted from being surrounded by people who believed in her, supported her and trusted her enough to take chances on her. So, if you've got big ambitions, then it's worthwhile checking your environment to see if you have the support you need to realise your goals, or at least make an attempt at them. If not, take the necessary steps to see if you can get that support. If there's no appetite to provide you with an enabling environment, you may need to reshape your medium-term plans and rethink the company you keep.

Career Fortifying Insight 2: The importance of understanding your value and who you are when you choose a job is under-rated. Yolanda got a dose of reality at her first job, where she had been told that they'd strategise every day. Reality check: "You won't strategise every day", but you should strategise enough, if you consider yourself a strategist. So make sure you ask this question when engaging about a new job: "What am I going to do every day?" Otherwise, you'll be sold the highlights, but your day-to-day reality could be much (too) different. I couldn't agree with this piece of advice more!

Career Fortifying Insight 3: So you're not "strategising" (replace with whatever you're passionate about) enough? Then take action, starting now. When you do not like the game you are in, change it (but please - no sudden moves). It is incumbent on you to make the changes that will lead you to a sustainable and successful career. Making that change may require sacrifice. Yolanda gave it a year in the new place and then went back to do her Bachelor of Commerce Honours in Accounting. She planned ahead, applying for a place at a university well before the year was over, because she knew that job "wasn't it". She gave up a decent starting salary at her first job, her car and her flat (a flat is an apartment, for those who speak another form of English ;-)). She moved in with her parents back home, giving up the independence she'd enjoyed at her flat. Her recommendation: we need to be ready to walk a few steps back so that we can propel ourselves forward effectively. Maybe your self-empowering move is not as dramatic, but it's worth a think - what could you start doing today to align more strongly with your goals, ambitions and/or passions?

Career Fortifying Insight 4: Your success is a by-product of how much you apply yourself. Too many people want to start with relationships before mastering competence. You won't be noticed if you aren't delivering exceptionally on the job you're given. Few people will have your back if they have to sacrifice their reputation to do so. Competence comes before relationships. Prior to her Mvelaphanda roles following her articles, Yolanda volunteered her financial management services to a number of organisations to build up her competence. Mvelaphanda hired a "rookie" on paper and got someone who could deliver exceptionally well. She was so good at her job, despite her level, that external clients gave feedback to the organisation's leadership.
She had never worked with Exco, since there was another hierarchy level in between, but feedback from external clients - such as the CEO of a large mining company calling Tokyo Sexwale directly to ask if he could employ Yolanda - opened the path to her becoming Deputy CEO in her twenties. What investments do you need to make in your development to delight your clients and customers?

Career Fortifying Insight 5: Mentors and sponsors are central to building a successful career and helping you solidify your leadership skills, no matter your seniority. What's the difference? Mentors are people you seek out and ask for guidance. They are good at something you want to be good at or have achieved something that you want to achieve. They help guide you along your career and give you perspective. Seek out mentors who think differently and who have achieved milestones that speak to your desired career path. Sponsors find you along your journey and are excited to help accelerate you along it. They typically seek you out because they admire something about you, e.g. your tenacity, competence, resilience, discipline, etc., or because you remind them of their younger ambitious selves (you never really know why). They speak highly of you in spaces that you don't have access to and wonder about you years after you've been out of touch with them. Yolanda has a sponsor who contacted her years after they last spoke to ask her where she was with an ambition she had once shared with him.

Career Fortifying Insight 6: Learn repeatable models of success. Anyone can be a shooting star, but you've got to get clear on what it is that's going to make you sustainable. In the early 2000s, a mentor congratulated Yolanda on a milestone she had achieved, then asked her how she was planning to sustain her success. She didn't have an immediate answer, but took the time to reflect. She decided that she wanted to learn repeatable models of success. So she wrote to a senior leader at MTN to let them know she wanted to join them, because she believed she could realise her goal of learning repeatable models of success at the leading African mobile network operator. To solidify these lessons, she later contacted a leader at South African Breweries and asked for an opportunity to do the same thing there - learn repeatable models of success from people and organisations that have walked the path.

Career Fortifying Insight 7: You have the power to actively shape your own career. So exercise your agency. Too many of us build our careers in a reactive way. We wait for a job post to appear so we can apply, or we wait to hear from a head hunter. If you're an entrepreneur, you might wait for an RFP. You have to be deliberate about identifying what you need for sustainable success and actively seek out the development and career opportunities you need. How many of us can say we have proactively contacted people in the organisations that we hope to work in, or with, to express our interest, irrespective of what's listed on the careers page?

Career Fortifying Insight 8: The ability to add value irrespective of where you find yourself requires adaptability. It's imperative that we all take steps to actively increase our adaptability quotient. You've got to learn how to be adaptable - across industries, roles and geographies. Curiosity, the willingness to learn and the willingness to adapt yourself to new environments are central to increasing your adaptability quotient.
These practices will help you add value irrespective of where you find yourself along your career journey.

Career Fortifying Insight 9: Cultivate a sense of curiosity. Curiosity is central to closing knowledge gaps when pivoting and to building adaptability. Yolanda joined the ABSA board at around 27 years of age. Walking into a meeting with colleagues who were 25+ years her senior was a little intimidating. She was clear that she could not out-experience them, nor could she out-credential them with their myriad executive education and academic qualifications. So she had to identify her unique edge. She had already come to learn that people, irrespective of level, gravitate towards competence, and so she invested in being super prepared for board meetings and sharpened the areas of strength she had brought with her, e.g. financial management services. She probably worked through the full board packs more intensively than anyone else. She used experts in specific knowledge areas within ABSA and beyond, and leveraged Google as her best friend, to support her learning in the areas in which she did not possess prior knowledge. Soon her opinion on matters was actively sought. Her advice: build competence and be intellectually curious.

Career Fortifying Insight 10: Give people hope and be solution-centric. Whether you're interacting with the people who work with you at home, members of your team or even leaders in your organisation, seek to be someone who helps people see more in themselves, identify more opportunity in the world and seek out solutions, rather than being the cynic in the room. Yolanda encouraged and supported a domestic employee in getting a Bachelor of Technology degree while working at her home. Can you imagine the world we would live in if each of us helped one other person improve their lot in life?

"Agency is a rather susceptible phenomenon. You've got to believe you have it to exercise it. It's also quite elastic - the more you exercise it, the more it becomes second nature. So, yet again - as is so often the case - exercise is the answer!" - Avela Gronemeyer

To stay updated on our events, join the newsletter.
The EU finalised negotiations, reaching agreement between the EU Council and EU Parliament on its Artificial Intelligence Act this past Friday. This means the parliamentary vote in the new year becomes a process formality, as these two legislative bodies are in charge of approving legislation. As could be predicted, some say it hasn't gone far enough, some feel it has gone too far, whereas others are holding judgement until the technical details are worked out in full. The devil will, unsurprisingly, be in the detail. Nonetheless, organisations can start their preparations. Regulation is coming and violation could be expensive.

Summary of key agreements:

Applications of AI deemed to threaten human rights and democracy are classified as unacceptable and will be banned, e.g. emotion recognition in the workplace and in schools; biometric categorisation of people using sensitive, personal or discriminatory criteria; social scoring; manipulation of human behaviour that seeks to override free will; exploitation of people's vulnerabilities; and 'untargeted' harvesting of facial images to build facial recognition databases and biometric mass surveillance. Surveillance exceptions are in place for law enforcement, and even those have been limited to certain, serious crimes.

AI systems that could harm health, violate safety and fundamental rights, damage the environment or compromise the rule of law are classified as high risk and will be required to meet certain obligations, including undertaking impact assessments focused on fundamental human rights, evaluating and mitigating systemic risk, and conducting adversarial testing (!!!) to minimise the risk of problematic or harmful outputs, e.g. sexist, racist, antisemitic, homophobic, classist or other discriminatory outputs, as has been seen in the past (sometimes from organisations with the resources to do better). Further, citizens and consumers will have a right to lodge complaints and receive feedback on AI systems' impact on their rights.

Regulation of base models won out over self-regulation, i.e. the decision was between regulating only the application or use of AI vs. also regulating the underlying AI models: companies building AI models will be required to ensure transparency by drawing up technical documentation; adhere to EU copyright law (the devil will be in the 'how' here); assess and mitigate systemic risks; ensure transparency - at summary level - about the data used to train AI models; report serious incidents; and implement robust cybersecurity measures.

There is an attempt to work against the monopolisation of the AI market by larger tech companies in possession of deep stores of resources and larger market share, through the provision, by national authorities, of infrastructure for small and medium-sized businesses to develop and train their AI models and solutions before going to market (a tough goal to meet). Fines of up to 7% of global turnover or €35m are on the cards.

This post was originally posted on LinkedIn in December 2023.

AI-generated books are being used to scam book buyers and redirect earnings from the authors who have laboured over their craft to AI-powered scammers exploiting loopholes on eCommerce platforms like Amazon.
While writing may be a labour of love, it is a skill that takes years to hone. It requires authors to work hard, to dig deep and give of themselves and, often, to be vulnerable and face their fiercest demons and traumas to present us with books that we can fall in love with and learn from. They deserve to be fairly compensated, and many have a hard enough time living from their writing as it is. The scams range from biographies about authors and famous people that have no bearing on reality to brazen scammers simply publishing AI-generated books under successful authors' names. Imagine, if you will, that while we're waiting - with bated breath - for Chimamanda Ngozi Adichie or Margaret Atwood to release their next bestsellers, someone decides to help them along and take the proceeds (leaving us with an inferior paid-for product, ruining the author's reputation and creating difficulties for the distribution of the actual future novels). Amazon has since introduced a requirement for authors publishing on Kindle to declare AI-generated content. While more extensive verification methods may be useful here, it's unclear whether they are on the cards.

In another example of brazenly hijacking other people's labour, a search engine optimisation (SEO) 'expert' proudly bragged - here on LinkedIn - about hijacking traffic from a competitor site by using AI to analyse the competitor site's structure and content and to regenerate similar pages in the hundreds, if not thousands. Essentially, they "quickly" recreated the original site and commandeered its traffic. Said post wasn't about illustrating the concerning misuse of AI; it was a post advertising SEO services - a "look what we could do for you" business post. I've previously posted about how baffling it is that people seem to think that the use of AI absolves them from existing regulations and laws. These are extreme examples and - as I've previously posted - it's not AI that we need to worry about, but rather the sinister amongst those building it, controlling it, selling it and using it.

As consumers, we'll increasingly be faced with products and services created or generated with and by artificial intelligence. Soon a book will be presented to you; let's say it's a novel. It will move you…perhaps even mirror some of your own life story back to you - cause you to burst out in laughter or shed cathartic tears. It will give you that warm, fuzzy feeling that comes from consuming good art in the form of well-created prose. The experience might even be transcendent: you - changed forever by an incredible book. You will feel ever-so-grateful to the author for sharing their story with you. Will it matter if that author isn't human?

This post was originally posted in December 2023.

15/12/2023

Congratulations to Dr Mamokgethi Phakeng on winning the WOMEN IN TECH® Global Movement Africa's Lifetime Achievement Award.
Having been aware of her from the point that she took up her role as UCT Vice-Chancellor, and having been a lifetime member of the 3 am squad, I had the opportunity to facilitate a Leadership Conversation hosted by the Ekurhuleni Black Management Forum branch in April 2021. We had a meeting to prepare for the conversation and later had the interactive conversation with an engaged audience. Both were amazing interactions. The lasting impression that I got was of someone with incredible energy, an amazing commitment to building the nation through building others, and a humility and openness that were surprising for someone who had achieved so much despite the obstacles she had faced. She shared of herself and her wisdom with us, as her audience, so generously and so passionately that we left the conversation feeling nurtured. That evening with us was an extension of the work that she had been doing as "Deputy Mother" - building, enabling, supporting and fortifying. As such, I congratulate her on this amazing accolade. Dr Phakeng, South Africa's children have gained so much from all that you have given, whether on campus, on social media or at the various engagements at which you have given of your energy, experience and wisdom. Thank you. The awards were hosted in Cape Town last night by Women in Tech South Africa, who from all reports did an incredible job. Video credit: Rorisang Mzozoyana

This post was originally posted on LinkedIn in October 2023.

SHOULD WE BRING BACK OUR INTERACTIVE LEADERSHIP CONVERSATIONS? Let us have your thoughts by answering 7 super quick questions in much less than 7 minutes (4 to be specific): https://lnkd.in/eEAB9HAi

A few people have reached out asking me why I no longer do the leadership interviews that I used to do, exemplified by those done as part of the BMF Women's Leadership Series, the Innovation and Invention Programme or the Gamaphile Writing Community. I reflected on this and thought…why not indeed! A while back I had the privilege and honour of interviewing some of the most amazing people in leadership, including Yolanda Cuba CA(SA), Nosizwe Dlengezele-Senyakoe, Molebogeng Zulu, Kume L., Tim Akinnusi, Hannah Subayi Kamuanga, Dr Jabu Mtsweni, Setjhaba Molloyi and Bongumusa Makhathini. We wanted to get to know:
Through Q&A, we also wanted to give the webinar audience an opportunity to engage with these leaders in a manner that one is not typically afforded. The need to hear from diverse voices and tap into the wealth of wisdom that is out there has not diminished. And there is so much wisdom! So, we would love to get your thoughts on our approach. Please take a look at some of the people we spoke to below and complete the survey to shape the future series: https://lnkd.in/eEAB9HAi

This post was originally posted on LinkedIn to promote a survey for the leadership conversations to be launched here in January 2024.

I am deeply humbled and honoured by this nomination. My passion for the role that the responsible and effective use of technology can play in improving the way we live and work is unyielding and more than 2 decades long ❤️. I am humbled and honoured to be nominated in the amazing company of leaders that I greatly respect: Mamokgethi Phakeng, Amanda Obidike, Karen Nadasen and Phuti Ragophala. Thank you WOMEN IN TECH® Global Movement for this recognition and your much-needed global movement. We're often toiling away, not aware that there are those beyond our immediate focus who see our efforts. As it is my year of gratitude, I am going to do my thank-you spiel irrespective of the outcome of the nomination 😉.

I'd like to thank the people in my life who have inspired, supported and held me *up* throughout my career. My family: my mother, Nomfundo Dlova; my sisters, Noluthando Dlova; my brothers, Mbasa Dlova and Akhona Maqwazima; and of course my husband, who holds things down when I go on crazy work travels, together with my children, who are always welcoming when I return. I'd like to thank my heroes and mentors who continue to be so giving of their time when I need an ear, some of whom have - over time - become my friends: Kholiwe Makhohliso, Nomsa Chabeli, Dr Vincent Maphai, Dr Ayanda Ntsaluba, Dr Brigalia Bam, Dr Nonceba Mashalaba, Livingstone Chilwane, Vukani Mngxati, Marty Rodgers, Christina Raab, DeWet Bisschoff, Carl Ward, Visar Sala, Martin Zloch, Molebogeng Zulu, Donovan Muller, Nosizwe Dlengezele-Senyakoe, Jesmane Boggenpoel, Trudi Makhaya, Mameetse Masemola, Muzi Mthethwa, Bongumusa Makhathini, Vuyo Ncwaiba…I will never be able to name all those who have touched and inspired my journey. Then there are my friends from high school and the Rhodes University campus, folks who not only inspire and push me, but are also always there: Rorisang Mzozoyana, Sheila Akinnusi, Elize Mabinya, Balakazi Gqobhoka, Judith Ndaba, Erica Ofori-Adomako, Eleanor Ofori-Adomako. My friends from these work streets are too numerous to mention today, but you know yourselves and I thank you. My Black Management Forum family: the leadership, Sibongile Vilakazi; my colleagues in empowerment, Mpho Moseki, Aviwe Metu, Ntombomzi Ngada, Phuti Kgano, Anelisiwe Gxumisa, Thulani Ngubane and many others. Our shared history and that of this historic organisation grounds me in purpose. To you all, I say, "Nangamso!" May you keep building minds and souls. You inspire me to pay it forward, every day. How unbelievably fortunate I am to have the opportunity to walk this journey with you being a part of it.

This post was initially published on LinkedIn in October 2023.

"It is in your hands to make of our world a better one for all" - Nelson Mandela
From geopolitics to national politics, whether you look at state/provincial or local politics - and the same is true for multinationals and for regional and local companies - we need more *responsible leadership*! At the risk of romanticising an imperfect past, I recall a time when inspired, hopeful, dutiful leadership seemed to be in greater abundance. A world where leaders took accountability for moving the needle forward in addressing the most challenging and oppressive realities of our collective history. My strong feelings on the topic may come from growing up in a South Africa that was inspired and led by great leaders like Nelson Mandela, whose life IS a picture of sacrifice for the collective - imperfect, but clearly seeking to realise, and realising, meaningful, positive impact. Maybe it's the speed of change that we experienced as a country back then that makes me question why we don't seem to think that we can realise even more today. Perhaps the optimism I have about what we can achieve comes from all the years of being reminded, "It is in your hands to make of our world a better one for all", by Nelson Mandela himself. Yet, as the scale of risks becomes more global and the impact likely more catastrophic, we see a greater desire to pursue power, profit, and political and individual goals at the expense of rights we'd agreed were inviolable. We're willing to get our own at the cost of the other, a minority group or the collective. But a world that doesn't work for the whole is not sustainable.

This is also true of conversations around AI. Speaking about and handling AI as something happening on its own, devoid of human involvement and control, shows a desire to abdicate responsibility. The opposite of responsible leadership. AI, just like any other tool in the hands of humanity, needs leaders in tech, business and government to take responsibility for putting into place the governance, processes, controls and resources needed to deploy AI responsibly. It's in our hands to balance political, business, entity and individual goals with collective goals. It's in our hands to ask difficult questions and engage earnestly about what we deploy and whether all necessary design considerations, tests and validations, and governance steps, incl. ethical and regulatory approvals, were undertaken. It's our responsibility to only deploy safe and responsible AI. To move the needle, yes, but in a way that is protective of our collective future. Much as we try to distance ourselves from AI with sound bites that present it as an unknown child advancing from the horizon, AI is our baby and will shape the legacy we hand over to the coming generations.

This post was initially shared on LinkedIn in August 2023.

Reading the headlines recently, you could really end up in a panic thinking about the imminent end to humanity that AI - all on its own - is about to herald. The dystopian headlines about AI ending humanity have been abuzz lately. From the reporting done on the open letter signed by tech top brass; to dire warnings from Geoffrey Hinton - dubbed a "godfather" of AI; to the rather dystopian interview in which Mo Gawdat, inter alia, recommends people hold off on having babies; to recent reporting that "42% of CEOs say AI could destroy humanity in five to ten years", amongst others. Now, I know it's difficult to be nuanced in headlines, and a lot of this comes from the very real need to draw attention to the "dark side of AI"
and to create a sense of urgency, especially since the enthusiasm from the supply side tends to be hyper-optimistic and, more often than not, fails to touch on the risks. However, these dramatic headlines take a technology that's already feared, but has a lot to offer, and gin up the fear the general public is already grappling with to an extreme that can overwhelm. This is often done without getting specific about what the risks are, whether what we're facing is - in fact - manageable, and where the accountability for addressing concerns sits. It would be great if the leading voices of our time on this topic would use their platforms to improve the understanding of AI and of the risks involved, and recommend clear steps for mitigating the very real risks that some AI developments pose.

Reading all this, I admittedly still haven't figured out why we would let AI destroy us - in the literal sense - without simply turning off the electricity. After all, software runs on (physical) digital infrastructure, which needs electricity and cooling - all of which need to be guaranteed by humans. Further, the most progressive forms of AI use machine learning and deep learning. If we oversimplify things, these techniques use patterns and relationships derived from large training data sets, together with calculations and weightings, to help statistical models decide (based on probability) what the most accurate output should be when faced with new data inputs. AI is programmed to self-adjust based on what it gets right and, in this manner, self-improve. Frequent repetition and a lot of data bring it to a point where it will most likely get the output (answer, decision or action) right more often than a human would in the area in which it is trained. This efficiency and accuracy in the presentation of AI-based solutions appears to simulate cognitive human behaviour. However, it is still stats, maths and programming. So even if an AI claims to be sentient or lonely, such claims are highly unlikely to reflect reality.

Perhaps the challenge lies in the terminology: we hear of artificial intelligence and neural networks - often discussed alongside automation - and think of complex brain activities that we, as humanity, still haven't truly figured out. So that would be scary, right? But that's just not what AI is. For example, neurons within a neural network are mathematical functions whose job it is to take input values, transform them and pass them on to the next processing layer in the network, with the end goal of selecting, based on weightings, the output most likely to be correct. It's still a very long road before we get to a point of truly mimicking human brain function, if we ever do. Humans are at play in making training data available; defining algorithms; determining initial weightings; programming AI-based solutions; "teaching" models (in the case of supervised learning) or assessing the patterns AI has found (in the case of unsupervised learning); testing; and deploying AI-based applications and systems. In reality, what we need to worry about are people and AI, not AI "running off" on its own and bringing about humanity's Armageddon. As with other technologies before it, AI is neither good nor bad. It is powerful. It can help make decisions and take actions efficiently, effectively and at scale. It can process and "learn" from massive amounts of data at a scale and speed we haven't seen before. So, AI does all that in a way that's not humanly possible - that's its power.
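To make the "neurons are just mathematical functions" point concrete, here is a minimal sketch of a forward pass through a tiny network. It is purely illustrative: the layer sizes and the (untrained) random weights are arbitrary, hypothetical choices.

```python
# A "neuron" is a weighted sum passed through a simple function --
# maths and programming, not cognition. Minimal illustrative sketch.
import numpy as np

def layer(inputs, weights, biases):
    """One processing layer: weighted sums, then a non-linearity (ReLU)."""
    return np.maximum(0, weights @ inputs + biases)

rng = np.random.default_rng(1)
x = rng.normal(size=4)                  # input values (e.g. image features)

# Two layers with arbitrary, untrained weights, producing output scores.
h = layer(x, rng.normal(size=(8, 4)), np.zeros(8))
scores = rng.normal(size=(3, 8)) @ h    # 3 possible outputs

# "Decide based on probability": softmax turns scores into probabilities,
# and the network's "answer" is simply the highest-weighted option.
probs = np.exp(scores) / np.exp(scores).sum()
print(probs, "->", int(np.argmax(probs)))
```

Training would adjust those weights to reduce errors on example data; the "decision" at the end is never more mysterious than picking the largest number in a list of probabilities.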
The desirability of the decisions and actions AI makes will depend on:
The challenge with hyper-efficiency, speed and scale is that fewer people are needed behind these systems, for much greater impact. The above, and the risks described below, illustrate the concerns that are driving the research and discussions around understanding the dark side of AI. When we look at the AI risk examples listed by the Centre for AI Safety, we see that at the core of the challenge we are facing is the fact that AI, in the main, does and likely will continue to scale and sharpen existing man-made risks. Some of these risks are founded on self-centred, profit-driven, winner-takes-all and power-mongering approaches to developing AI solutions. These include:
The amounts of data that AI models are trained on are generally so massive that it's a huge challenge for humans, even those who develop the solutions, to trace back the basis for AI decisions. Further, most of the risks listed above are preceded by decisions made by human beings. Therefore, our focus should be on how we hold the humans developing AI solutions, and the humans leading organisations, accountable for the AI solutions they build and deploy. In many cases, not counting inter-country tensions and prospective violations, there are already laws in place to mitigate these risks: privacy laws, anti-discrimination laws, human rights laws, laws against harming human beings. Organisations largely have policies that promote a positive contribution to society and good ethics. Existing regulations and policies are largely not being enforced, in part because:
There are some complexities that need to be addressed, e.g. what copyright means in this context, how to monitor the use of data to ensure adherence to privacy laws, how to identify uses that violate human rights, how to make AI more explainable, etc. However, we don't need to start at zero, as is sometimes suggested. Even as new laws are developed for the scenarios not covered by existing legislation, a strong focus must be on enforcing existing laws and thinking carefully about how new laws will be enforced. Additionally, just as humans have had to deal with other global-scale risks such as those posed by nuclear weapons, conversations must start on how risks should be mitigated at an international level, where it is much harder to hold individuals to account. The AI "arms race" has already started.

In conclusion, what is needed is not widespread panic but informed, fair, pragmatic and deliberate action. The reality is that, as it stands, AI is not advanced enough for many of these risks to materialise in a meaningful way. That said, we see the trajectory and, rightly, efforts must be made to improve knowledge about what AI is and how it works - for the general public, businesses, governments, legislators and the people responsible for the development, oversight, control and enforcement of policies and legislation - so that these risks can be mitigated. Demystification is the first step towards reducing the panic we see, followed by taking knowledge-based action. Governance, control and enforcement mechanisms within organisations and nation states must be reinforced, empowered and properly resourced. At an international level, deterrence mechanisms are needed to mitigate against nation states using AI to usurp others or cause harm at a global scale. Technology should serve humans and only humans can make it NOT so. What are you most scared of when it comes to artificial intelligence, if anything? Let's discuss the likelihood of that materialising.

Picture credit: Image generated using a combination of Bing Image Creator and Adobe Express

This article was first published on LinkedIn in June 2023.

Resources:
Ågerfalk, P.J., Conboy, K., Crowston, K., Lundström, J.E., Jarvenpaa, S.L., Mikalef, P., Ram, S., 2021. Artificial Intelligence in Information Systems: State of the Art and Research Roadmap. ResearchGate. https://www.researchgate.net/publication/357093816_Artificial_Intelligence_in_Information_Systems_State_of_the_Art_and_Research_Roadmap (accessed 4.5.23).
AI Risk | CAIS, n.d. https://www.safe.ai/ai-risk (accessed 6.16.23).
Bhaimiya, S., 2023. A former Google exec warned about the dangers of AI, saying it is "beyond an emergency" and "bigger than climate change". Business Insider. https://www.businessinsider.com/ex-google-officer-ai-bigger-emergency-than-climate-change-2023-6 (accessed 6.18.23).
Egan, M., 2023. Exclusive: 42% of CEOs say AI could destroy humanity in five to ten years. CNN Business. https://www.cnn.com/2023/06/14/business/artificial-intelligence-ceos-warning/index.html (accessed 6.16.23).
Elon Musk among experts urging a halt to AI training, 2023. BBC News.
Kleinman, Z., Vallance, C., 2023. AI "godfather" Geoffrey Hinton warns of dangers as he quits Google. BBC News. https://www.bbc.com/news/world-us-canada-65452940 (accessed 5.4.23).
Mikalef, P., Conboy, K., Lundström, J., Popovič, A., 2022. Thinking responsibly about responsible AI and 'the dark side' of AI. Eur. J. Inf. Syst. 31, 1–12. https://doi.org/10.1080/0960085X.2022.2026621
Mirbabaie, M., Brendel, A.B., Hofeditz, L., 2022. Ethics and AI in Information Systems Research. Commun. Assoc. Inf. Syst. 50, 726–753. https://doi.org/10.17705/1CAIS.05034
Palmer, S., 2023. "Bekommen Sie keine Kinder": KI-Experte Mo Gawdat warnt eindringlich ("Don't have children": AI expert Mo Gawdat issues a stark warning). Euronews. https://de.euronews.com/next/2023/06/11/bekommen-sie-keine-kinder-wenn-sie-noch-keine-haben-warnt-ki-experte-mo-gawdat (accessed 6.18.23).
Palmer, S., 2023. "Hold off from having kids if you are yet to become a parent," warns AI expert Mo Gawdat. Euronews. https://www.euronews.com/next/2023/06/08/hold-off-from-having-kids-if-you-are-yet-to-become-a-parent-warns-ai-expert-mo-gawdat (accessed 6.18.23).
Pause Giant AI Experiments: An Open Letter, 2023. Future of Life Institute. https://futureoflife.org/open-letter/pause-giant-ai-experiments/ (accessed 6.16.23).
Statement on AI Risk | CAIS, n.d. https://www.safe.ai/statement-on-ai-risk#open-letter (accessed 6.16.23).
Uh Oh, Chatbots Are Getting a Teeny Bit Sentient, 2023. Popular Mechanics. https://www.popularmechanics.com/technology/a43601915/ai-chatbots-may-be-getting-sentient/ (accessed 6.18.23).
Vasilaki, E., 2018. Worried about AI taking over the world? You may be making some rather unscientific assumptions. The Conversation. http://theconversation.com/worried-about-ai-taking-over-the-world-you-may-be-making-some-rather-unscientific-assumptions-103561 (accessed 6.18.23).