AI Resource Hub

Discover a wealth of resources to enhance your understanding of AI and generative AI.


Video: Great instructor (Ronnie Sheer), targeting developers.

Video: Prompting, programming, and education.

Website: Excellent basics on prompting, active Discord channel


Video: Highly recommended; highlights the benefits and challenges of the future AI-human relationship.

Excellent series of lectures that explain the fundamentals.


Discusses securely developing and maintaining AI/ML.

Two Australian brothers experimenting, having fun, but also asking great questions about GenAI and how to use it.

A unique blend of hands-on advice and expert perspectives.


Definitely excited about the AGI future. Provides great how-to videos and resources.

AI, future tech, and digital marketing. Stays ahead of AI/GenAI news and resources; great to follow to keep up with what is happening. Runs the Future Tools website.

Leading researcher on bypassing security controls and indirect prompt injection. Recommended: his presentation at DeepMind on AI/model security.

Leading researcher on bypassing security controls.

Threats, Risks, & Vulnerabilities

Great resource to understand threat profiles & validate existing security controls.

Understanding the cost of the different models will help you avoid sticker shock.
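A quick back-of-the-envelope calculation shows why cost awareness matters: per-token prices look tiny, but they compound fast at scale. A minimal sketch, using made-up illustrative prices and model names (not any provider's real rates; always check the current pricing page):

```python
# Illustrative per-1K-token prices in USD (input, output).
# These numbers and model names are placeholders, NOT real published rates.
PRICE_PER_1K = {
    "small-model": (0.0005, 0.0015),
    "large-model": (0.03, 0.06),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request for the given model."""
    in_price, out_price = PRICE_PER_1K[model]
    return (input_tokens / 1000) * in_price + (output_tokens / 1000) * out_price

# A modest request (~500 input / 250 output tokens) on the pricier model:
per_request = estimate_cost("large-model", 500, 250)
print(f"per request: ${per_request:.4f}")
print(f"per million requests: ${per_request * 1_000_000:,.0f}")
```

With these assumed prices, each request costs $0.03, which is $30,000 across a million requests; the same workload on the cheaper model would be roughly two orders of magnitude less.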

A repository of articles documenting incidents where AI has failed in real-world applications. Maintained by a college research group and crowdsourced.

Three of the leading companies

Tracks AI model vulnerabilities.

Bug bounty platform for AI/ML.

Database of model vulnerabilities.

Database of model vulnerabilities.

Standards: Governance, Cybersecurity, & Privacy Resources

This standard is under development, but it is one to watch because it will be the bar many compare against.

Many great AI Resources

Adversarial Threat Landscape for Artificial-Intelligence Systems is a knowledge base of adversary tactics, techniques, and case studies for machine learning (ML) systems. Significant work and collaboration has gone into synchronizing its taxonomies, vulnerability definitions, and scoring with established methods and standards.

List of the top 10 risks for LLM applications.

A living document for collaborating on AI security and privacy. It addresses security and privacy at a broader level: developing, maintaining, and managing AI and models at an organizational level.

A threat modeling library to help you build responsible AI.

Amazing map of resources on AI safety (who is doing what, papers, companies, etc.).

Recommended Research Papers

arXiv is a free open-access archive for scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.

Explains how easily poisoning attacks can be accomplished.

Researchers show that, unlike traditional jailbreaks, they were able to build an entirely automated method of attacking models, allowing them to create virtually unlimited attacks.

Lie Detection in Black-Box LLMs by Asking Unrelated Questions. By asking unrelated follow-up questions, researchers were able to catch "lies" or inaccurate responses.

Data leakage has long been recognized as a leading cause of errors in ML applications; this paper provides evidence of the challenge of reproducing results.

While existing safety alignment infrastructures can restrict harmful behaviors of LLMs at inference time, they do not cover safety risks when fine-tuning privileges are extended to end-users. Our red teaming studies find that the safety alignment of LLMs can be compromised by fine-tuning with only a few adversarially designed training examples.


Sandy Dunn is a regular speaker on AI Security, Cyber Risk Quantification, and Cybersecurity. She is an OWASP Top 10 for LLM core team member, CISO advisor to numerous startups, and an Adjunct Professor for BSU's Cybersecurity Program.