Access Bank Plc is a full-service commercial bank operating through a network of about 366 branches and service outlets in major centres across Nigeria, Sub-Saharan Africa and the United Kingdom. Listed on the Nigerian Stock Exchange in 1998, the Bank serves its markets through four business segments: Personal, Business, Commercial, and Corporate & Investment Banking. The Bank has over 830,000 shareholders, including several Nigerian and international institutional investors, and has enjoyed what is arguably Africa's most successful banking growth trajectory of the last ten years, ranking among Africa's top 20 banks by total assets and capital in 2011. As part of its continued growth strategy, Access Bank is focused on mainstreaming sustainable business practices into its operations, and strives to deliver sustainable economic growth that is profitable, environmentally responsible and socially relevant.
Description
- Secure machine learning and AI systems from design and development through deployment and operation.
- This role sits within cybersecurity and works with data science, engineering, and IT governance teams to ensure AI systems are resilient, trustworthy, and compliant with internal standards and external regulations. It translates emerging AI risks into practical, scalable security controls.
Key Responsibilities
- Identify, assess, and mitigate security risks across AI/ML pipelines, including data ingestion, model training, deployment, and inference.
- Implement controls to protect against AI-specific threats such as data poisoning, model theft, prompt injection, adversarial inputs, and model inversion.
- Support secure deployment of AI models in cloud, containerized, and API-based environments.
- Assist in securing third-party and open-source models, frameworks, and datasets.
- Contribute to AI security risk assessments, threat models, and control mappings.
- Support the development and enforcement of AI security standards, guardrails, and secure-by-design patterns.
- Align AI security practices with broader enterprise risk, privacy, and compliance requirements (e.g., ISO 27001, NIST, GDPR, emerging AI regulations).
- Participate in AI governance forums and provide security input to model approval and review processes.
- Help define logging, monitoring, and alerting requirements for AI systems.
- Support investigation and response to AI-related security incidents or misuse.
- Track vulnerabilities and emerging threats related to AI platforms and tooling.
- Work with data scientists and engineers to embed security into AI development workflows and CI/CD pipelines.
- Provide guidance on secure data handling, model access controls, and secrets management.
- Contribute to internal training, documentation, and awareness around AI security risks and best practices.
- Stay current with evolving AI threats, attack techniques, and defensive controls.
- Evaluate AI security tools and capabilities, making recommendations for improvement.
- Contribute to the organization’s longer-term AI security roadmap.
- Perform other responsibilities as assigned by the Head, Security Technology & Engineering.
Qualification & Experience
Mandatory:
- Bachelor’s degree in Computer Science, Information Security, or a related field.
- Scripting or automation skills (e.g., Python).
- Certifications such as CISSP, CCSP, Security+, or emerging AI/security credentials.
Desirable:
- Cloud security certifications (e.g., AWS Security Specialty, Azure Security Engineer).
Required Knowledge, Skills and Abilities:
- Experience in cybersecurity, cloud security, application security, or data security roles.
- Working knowledge of machine learning concepts and AI system architectures.
- Understanding of common AI/ML security risks and threat models.
- Hands-on experience with ML frameworks or platforms (e.g., PyTorch, TensorFlow, SageMaker, Azure ML).
- Experience securing APIs, cloud services, containers, or MLOps platforms.
- Ability to communicate security risks clearly to technical and non-technical stakeholders.
- Familiarity with data governance, privacy engineering, or responsible AI principles.
- Experience with threat modeling techniques (e.g., STRIDE) applied to AI systems.
- Strong troubleshooting, documentation, and communication skills.