Overview

AI Security Researcher Jobs in Pittsburgh, PA at Carnegie Mellon University

Are you a cybersecurity and/or AI researcher who enjoys a challenge? Are you excited about pioneering new research areas that will impact academia, industry, and national security? If so, we want you for our team, where you’ll collaborate to deliver high-quality results in the emerging area of AI security.

The CERT Division of the Software Engineering Institute (SEI) is seeking applicants for the AI Security Researcher role. Originally created in 1988 in response to one of the first computer viruses, the Morris worm, CERT has remained a leader in cybersecurity research, in improving the robustness of software systems, and in responding to sophisticated cybersecurity threats. Ensuring the robustness and security of AI systems is the next big challenge on the horizon, and we are seeking lifelong learners in the fields of cybersecurity, AI/ML, or related areas who are willing to cross-train to address AI security.

The Threat Analysis Directorate is a group of security experts focused on advancing the state of the art in AI security at a national and global scale. Our tasks include vulnerability discovery and assessment, evaluation of the effectiveness and robustness of AI systems, exploit discovery and reverse engineering, and identifying new areas where security research is needed. We participate in communities of network defenders, software developers and vendors, security researchers, AI practitioners, and policymakers.

You’ll get a chance to work with elite AI and cybersecurity professionals, university faculty, and government representatives to build new methodologies and technologies that will influence national AI security strategy for decades to come. You will co-author research proposals, execute studies, and present findings and recommendations to our DoD sponsors, decision makers within government and industry, and at academic conferences. The SEI is a non-profit, federally funded research and development center (FFRDC) at Carnegie Mellon University.

What you’ll do:

Develop state-of-the-art approaches for analyzing the robustness of AI systems.

Apply these approaches to understanding vulnerabilities in AI systems and how attackers adapt their tradecraft to exploit those vulnerabilities.

Reverse engineer malicious code in support of high-impact customers, design and develop new analysis methods and tools, work to identify and address emerging and complex threats to AI systems, and effectively participate in the broader security community.

Study and influence the AI security and vulnerability disclosure ecosystems.

Evaluate the effectiveness of tools, techniques, and processes developed by industry and the AI security research community.

Uncover and shape some of the fundamental assumptions underlying current best practice in AI security.

Develop models, tools, and data sets that can be used to characterize the threats to, and vulnerabilities in, AI systems, and publish those results. You will also use these results to aid in the testing, evaluation, and transition of technologies developed by government-funded research programs.

Identify opportunities to apply AI to improve existing cybersecurity research.

Who you are:

You have a deep interest in AI/ML and cybersecurity with a penchant for intellectual curiosity and a desire to make an impact beyond your organization.

You have practical experience with applying cybersecurity knowledge toward vulnerability research, analysis, …

Title: AI Security Researcher

Company: Carnegie Mellon University

Location: Pittsburgh, PA
