Customer-obsessed science


Research areas
- June 12, 2025: Novel architecture that fuses learnable queries and conditional queries improves a segmentation model's ability to transfer across tasks (a toy sketch of the query-fusion idea follows below).
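The fusion idea can be pictured in a few lines. Below is a minimal, hypothetical sketch (not the paper's architecture): a fixed bank of learnable queries is combined with queries projected from a task or condition embedding, in the style of a DETR-like segmentation decoder. All names and shapes are illustrative.

```python
import numpy as np

# Illustrative sketch (not the paper's code): fuse a fixed set of learnable
# queries with queries derived from a task/condition embedding, as consumed
# by a DETR-style transformer decoder. All parameters here are random.
rng = np.random.default_rng(0)
num_queries, d_model = 8, 16

learnable_queries = rng.normal(size=(num_queries, d_model))  # trained parameters
W_cond = rng.normal(size=(d_model, d_model)) * 0.1           # condition projection
task_embedding = rng.normal(size=(d_model,))                 # per-task condition

# One simple fusion rule: add the projected condition to every learnable query.
fused_queries = learnable_queries + task_embedding @ W_cond  # broadcasts to (8, 16)
print(fused_queries.shape)  # fed to the decoder as object queries
```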
Featured news
- Quantum Science and Technology, 2025: Running quantum algorithms protected by quantum error correction requires a real-time, classical decoder. To prevent the accumulation of a backlog, this decoder must process syndromes from the quantum device at a faster rate than they are generated. Most prior work on real-time decoding has focused on an isolated logical qubit encoded in the surface code. However, for the surface code, quantum programs of utility… (the backlog condition is illustrated in a toy sketch after this list).
- AAAI 2025 Workshop on AI for Time Series Analysis: Time series forecasting has long been a focus of research across diverse fields, including economics, energy, healthcare, and traffic management. Recent works have introduced innovative architectures for time series models, such as the Time-Series Mixer (TSMixer), which leverages multilayer perceptrons (MLPs) to enhance prediction accuracy by effectively capturing both spatial and temporal dependencies… (a minimal mixing block is sketched after this list).
- IEEE Symposium on Security and Privacy 2025: We propose plausible post-quantum (PQ) oblivious pseudorandom functions (OPRFs) based on the Power-Residue PRF (Damgård, CRYPTO '88), a generalization of the Legendre PRF. For security parameter λ, we consider the PRF Gold_k(x) that maps an integer x modulo a public prime p = 2^λ·g + 1 to the element (k + x)^g mod p, where g is public and log g ≈ 2λ. At the core of our constructions are efficient novel methods… (the PRF itself is sketched at toy scale after this list).
- CVPR 2025 Workshop on Efficient Large Vision Models: Diffusion models enable high-quality virtual try-on (VTO) with their established image-synthesis abilities. Despite the extensive end-to-end training of large pre-trained models in current VTO methods, real-world applications often prioritize limited training and inference/serving/deployment budgets for VTO. To address this obstacle, we apply Doob's h-transform efficient fine-tuning (DEFT) for… (the frozen-base-plus-small-correction idea is sketched after this list).
- EMNLP 2024 Workshop on Customizable NLP; Transactions on Machine Learning Research, 2025: Precise estimation of downstream performance in large language models (LLMs) prior to training is essential for guiding their development process. Scaling-law analysis uses the statistics of a series of significantly smaller sampling language models (LMs) to predict the performance of the target LLM. For downstream performance prediction, the critical challenge lies in the emergent abilities of LLMs… (a toy scaling-law fit appears after this list).
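A toy illustration of the backlog condition from the first featured abstract: if a syndrome round is produced every t_gen microseconds but each round takes t_dec > t_gen to decode, the queue of undecoded rounds grows without bound. The numbers below are hypothetical.

```python
# Toy illustration of the decoding-backlog condition: the decoder must consume
# syndrome rounds at least as fast as the device produces them.
def backlog_after(rounds, t_gen_us, t_dec_us):
    """Undecoded rounds queued after `rounds` syndrome-generation cycles."""
    produced = rounds
    consumed = min(rounds, int(rounds * t_gen_us / t_dec_us))
    return produced - consumed

for t_dec in (0.8, 1.0, 1.2):  # hypothetical decode times; t_gen = 1.0 us
    print(t_dec, backlog_after(1_000_000, 1.0, t_dec))
# The backlog stays 0 only when t_dec <= t_gen: the decoder must keep pace.
```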
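For the TSMixer entry, here is a minimal NumPy sketch of one mixing block under stated assumptions (random weights, no training loop): one MLP mixes along the time axis and another along the feature axis, each with a residual connection, which is the mechanism the abstract credits for capturing temporal and spatial dependencies.

```python
import numpy as np

# Minimal TSMixer-style block (illustrative only): a time-mixing MLP followed
# by a feature-mixing MLP, both with residual connections.
rng = np.random.default_rng(0)
T, C = 32, 4                      # sequence length, number of series/channels
x = rng.normal(size=(T, C))

def mlp(h, w1, w2):
    return np.maximum(h @ w1, 0.0) @ w2   # Linear -> ReLU -> Linear

# Time mixing: transpose so the MLP acts along the T axis of each channel.
w1_t, w2_t = rng.normal(size=(T, T)) * 0.05, rng.normal(size=(T, T)) * 0.05
x = x + mlp(x.T, w1_t, w2_t).T            # residual connection

# Feature mixing: the MLP acts along the C axis at every time step.
w1_c, w2_c = rng.normal(size=(C, C)) * 0.5, rng.normal(size=(C, C)) * 0.5
x = x + mlp(x, w1_c, w2_c)                # residual connection
print(x.shape)                            # (32, 4), ready for the next block
```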
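The power-residue PRF in the S&P entry is concrete enough to run at toy scale. The sketch below uses deliberately tiny, illustrative parameters; a real instance needs a cryptographically sized prime and log g ≈ 2λ.

```python
# Toy-scale sketch of the power-residue PRF from the abstract: for a prime
# p = 2^lam * g + 1, Gold_k(x) = (k + x)^g mod p, and the output lies in the
# multiplicative subgroup of order 2^lam. Parameters are illustrative only.
lam, g = 5, 11
p = 2**lam * g + 1             # 353, which happens to be prime

def gold(k: int, x: int) -> int:
    return pow((k + x) % p, g, p)

k = 7                          # secret key
print([gold(k, x) for x in range(5)])
assert pow(gold(k, 0), 2**lam, p) == 1   # output sits in the order-2^lam subgroup
```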
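The VTO entry is truncated before its method details, so the following is only a hedged sketch of the general Doob's h-transform fine-tuning idea as described in the DEFT literature, not this paper's implementation: the large pretrained denoiser stays frozen and only a small correction term is trained, which is what keeps training and serving budgets low. All names are hypothetical.

```python
import numpy as np

# Hedged sketch of the h-transform fine-tuning idea: freeze the pretrained
# denoiser and train only a small additive correction network.
rng = np.random.default_rng(0)
d = 8

def pretrained_denoiser(x_t, t):
    """Stand-in for a frozen, large diffusion denoiser (never updated)."""
    return 0.9 * x_t

W_h = rng.normal(size=(d, d)) * 0.01      # the only trainable parameters

def deft_denoiser(x_t, t):
    return pretrained_denoiser(x_t, t) + x_t @ W_h   # frozen base + learned h-term

x_t = rng.normal(size=(d,))
print(deft_denoiser(x_t, t=0.5))
# Fine-tuning touches W_h only, so training and serving stay cheap.
```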
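For the scaling-laws entry, a toy power-law fit shows the basic workflow the abstract describes: fit loss(N) = a * N^(-b) + c on small sampling LMs, then extrapolate to the target size. The data below are synthetic and c is assumed known for simplicity; emergent downstream abilities are precisely where such smooth extrapolation becomes unreliable, which is the challenge the abstract names.

```python
import numpy as np

# Illustrative scaling-law extrapolation on synthetic data (not the paper's
# method): fit loss(N) = a * N^(-b) + c from small "sampling" LMs.
small_sizes = np.array([1e7, 3e7, 1e8, 3e8])          # sampling-LM parameter counts
losses = 2.0 * small_sizes ** -0.1 + 1.5              # synthetic observations

# Linearize: log(loss - c) = log(a) - b * log(N), with c assumed known here.
c = 1.5
slope, log_a = np.polyfit(np.log(small_sizes), np.log(losses - c), 1)
a, b = np.exp(log_a), -slope

target_n = 1e11                                       # hypothetical target LLM size
print(a * target_n ** -b + c)                         # predicted pretraining loss
```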
Academia
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.