By Emmanuel Adeleke
The Executive Vice Chairman (EVC) of the Nigerian Communications Commission (NCC), Dr Aminu Maida, has said organisations must handle citizens’ personal data responsibly when using Artificial Intelligence (AI).
Maida stated this in Abuja on Friday during an event to commemorate the 2024 World Consumer Rights Day. The theme of this year’s celebration was “Fair and responsible AI for consumers.”
The NCC boss, who was represented by the Executive Commissioner, Technical Services Designate, Abraham Oshadami, said AI has already made significant strides, from voice assistants to recommendation algorithms that suggest what we should watch, read or buy.
The EVC said AI is also driving innovations in healthcare, finance, transportation, and countless other fields, stressing that despite these advances, using AI responsibly is crucial to earning consumers’ trust and averting potential harms.
“As we celebrate the advancements in AI, we must also grapple with ethical questions.
“How do we ensure that AI systems are fair and unbiased? How do we protect privacy in an age of data-driven AI? These are complex issues that require careful consideration.
“Responsible AI means using it in an ethical way throughout its development, deployment, and usage.
“This includes considering issues like bias, privacy, transparency, and accountability.
“According to reports, responsible AI aims to empower consumers, build trust, and minimise negative effects.
“To this effect, AI developers need to be transparent about the data, algorithms, and models used in AI systems.
“This ensures that decisions made by AI can be explained and mistakes can be fixed to ensure everyone is treated fairly, regardless of their background.
“This helps prevent biased decisions or discrimination, thereby promoting inclusivity and equality.
“Protecting citizens’ privacy is extremely important when using AI. Organisations should handle personal data responsibly, following strict privacy regulations. Respecting privacy builds trust in AI systems,” he said.
Maida added that responsible AI requires mechanisms for holding systems accountable and explaining their decisions.
He said developing regulations and policies to govern AI deployment can be complex.
“Although most legislative and governing bodies are looking to regulate this technology, there has been a continuous struggle to strike the right balance between mitigating risk and stifling innovation, while promoting innovation and ensuring security and trust.
“In this era that has seen the rise of AI and IoT cybersecurity, it is important to break silos and foster collaboration within the quadruple helix innovation model, comprising academia, industry, government and society, to share ideas. AI developers and regulators have to ensure AI system algorithms consider ethics and inclusivity,” he said.