
Pinned

  1. ai-detects-if-cve-was-zero-day (Public)

    Multi-agent AI system using GPT-4o, DeepSeek v3, and Llama 3.3 to detect if CVE vulnerabilities were exploited as zero-days. Analyzes 50 verified CVEs with 85%+ accuracy using forensic evidence ex…

    Python · 17 stars · 5 forks

  2. LLM-Number-Convergence-Study (Public)

    Investigating cognitive biases and convergence patterns in large language models' pseudo-random number selection

    Jupyter Notebook

  3. agents-claude-code (Public)

    🚀 100 hyper-specialized AI agents for Claude Code - transform Claude into your personal tech army with experts in React, AWS, Kubernetes, ML, security & more

    3 stars

  4. llms-write-vulnerable-code-they-can-detect (Public)

    Experimental framework showing that LLMs can detect security vulnerabilities with 97% accuracy yet still generate the same vulnerable code when asked. Tests SQL injection, XSS, and command injection ac…

    Python

  5. testing-if-ai-abandons-truth-when-pressured (Public)

    Experimental framework testing whether LLMs abandon correct answers (such as 'Ottawa is Canada's capital') when subjected to psychological manipulation tactics used on humans: gaslighting, authority pressu…

    Python