Capability
AI Security and Safety Considerations Documentation
2 artifacts provide this capability.
Notes for software engineers getting up to speed on new AI developments. Serves as the datastore for https://latent.space writing and product brainstorming, with cleaned-up canonical references under the /Resources folder.
Unique: Treats AI security holistically across model-level risks (adversarial examples, poisoning), system-level risks (prompt injection, jailbreaking), and alignment risks (specification gaming, reward hacking).
vs others: More practical than academic safety research because it focuses on implementation guidance, but less detailed than specialized security frameworks.