Amazon Device Security and Privacy call for proposals - fall 2021
Enabling a trustworthy compute environment from edge to cloud
About this CFP
Amazon Devices & Services is committed to offering the highest levels of security and privacy to customers from edge to cloud. Our security and privacy services use state-of-the-art machine learning and automated reasoning techniques to improve the trustworthiness of our service offerings for both edge devices and the cloud. Our north star is to provide higher assurance of security and privacy from edge devices to cloud services. As such, we are looking to fund machine learning and automated reasoning research that targets the security and privacy of edge devices and the cloud. We welcome research proposals on the following topics:
- Automated security attack vector analysis for i) Embedded systems software, e.g., Trusted Execution Environment (TEE), Small OS Core, and ii) Device and Cloud Application software.
- Methods for generating labeled data so that ML-based approaches can be applied to attack surface analysis.
- Modeling the relationship between attack vectors and the security complexity of a deployed software environment, potentially using the complexity metric to predict the likelihood that the environment will experience a security incident.
- Development of automated reasoning tools or methodologies for formally proving security properties of embedded systems software (TEE, Small OS Core) and security protocols.
- Analysis of TEEs available in the market for their amenability to formal verification, accompanied by formal verification results.
- Bridging the gap between the formal specification and the implementation of a TEE or small OS core (e.g., automatically synthesizing code monitors from a formal specification).
- Development of automated reasoning tools for security protocols, TEEs, or small OS cores that can meet Common Criteria certification requirements.
- Offensive and defensive privacy for machine learning.
- Novel privacy attacks, either opaque-box (outputs only) or transparent-box (full access to model internals).
- Comprehensive surveys of privacy attacks and/or novel work examining how various parameters influence model vulnerability, including:
- Model size.
- Model family (e.g. CNN, RNN, random forests).
- Model task (e.g. classification, regression, clustering, feature learning).
- Access to model internals (transparent-box versus opaque-box attacks).
- Ability to inject training data (e.g. privacy attacks through poisoning).
- Research focusing on privacy attacks and defenses for commonly used commercial models other than text and computer vision models. Examples include collaborative filtering/recommender systems, ranking models, audio models, autoencoders, and more.
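To make the opaque-box threat model above concrete, the following minimal Python sketch simulates a membership inference attack that relies only on a model's output confidences, never its internals. The "model" here is a hypothetical stand-in that is deliberately overconfident on its training points (a common symptom of overfitting); the names, thresholds, and setup are illustrative assumptions, not a proposed method or a real model.

```python
import random

# Hypothetical toy setup: all names and values are for illustration only.
random.seed(0)

train_set = set(range(0, 50))    # "members" the model was trained on
test_set = set(range(50, 100))   # "non-members" it never saw

def toy_model_confidence(x, members):
    """Simulated overfit model: returns only a confidence score (opaque box),
    and is noticeably more confident on points it was trained on."""
    base = 0.6 + 0.1 * random.random()          # confidence in [0.6, 0.7]
    return min(base + (0.3 if x in members else 0.0), 1.0)

def membership_attack(x, threshold=0.85):
    """Guess 'member' when the model is unusually confident on x.
    The attacker only queries the model's outputs."""
    return toy_model_confidence(x, train_set) > threshold

# Evaluate attack accuracy over known members and non-members.
correct = sum(membership_attack(x) for x in train_set)
correct += sum(not membership_attack(x) for x in test_set)
accuracy = correct / (len(train_set) + len(test_set))
print(f"attack accuracy: {accuracy:.2f}")
```

In this contrived setting the confidence gap cleanly separates members from non-members; real attacks face noisier gaps, and a transparent-box attacker could additionally exploit gradients or internal activations.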
Submission period: August 16 - October 8, 2021
Decision letters will be sent out in February 2022.
Selected Principal Investigators (PIs) may receive the following:
- Unrestricted funds, no more than $80,000 USD on average
- AWS Promotional Credits, no more than $20,000 USD on average
Awards are structured as one-year unrestricted gifts. The budget should include a list of expected costs specified in USD, and should not include administrative overhead costs. The final award amount will be determined by the awards panel.
Please refer to the ARA Program rules on the FAQ page.
Proposals should be prepared according to the proposal template. In addition, to submit a proposal for this CFP, please also include the following information:
- Please list the open-source tools you plan to contribute to.
- Please list the AWS ML tools you plan to use and data you plan to obtain.
ARA will make funding decisions based on each proposal's potential impact on the research community and the quality of its scientific content.
Expectations from recipients
To the extent deemed reasonable, Award recipients should acknowledge the support from ARA. Award recipients will inform ARA of publications, presentations, code and data releases, blogs/social media posts, and other speaking engagements referencing the results of the supported research or the Award. Award recipients are expected to provide updates and feedback to ARA via surveys or reports on the status of their research. Award recipients will have an opportunity to work with ARA on an informational statement about the awarded project that may be used to generate visibility for their institutions and ARA.