Customer-obsessed science


Research areas
- July 29, 2025: New cost-to-serve-software metric that accounts for the full software development lifecycle helps determine which software development innovations provide quantifiable value.
Featured news
- QIP 2025: A central challenge in quantum simulation is to prepare low-energy states of strongly interacting many-body systems. In this work, we study the problem of preparing a quantum state that optimizes a random all-to-all, sparse or dense, spin or fermionic k-local Hamiltonian. We prove that a simplified quantum Gibbs sampling algorithm achieves an Ω(1/k)-fraction approximation of the optimum, giving an exponential… (a notation sketch of this guarantee follows this list).
- 2025: Given a small number of images of a subject, personalized image generation techniques can fine-tune large pre-trained text-to-image diffusion models to generate images of the subject in novel contexts, conditioned on text prompts. In doing so, a trade-off is made between prompt fidelity, subject fidelity, and diversity. As the pre-trained model is fine-tuned, earlier checkpoints synthesize images with low…
- 2025: Large-scale vision-language pre-trained (VLP) models (e.g., CLIP [46]) are renowned for their versatility, as they can be applied to diverse applications in a zero-shot setup. However, when these models are used in specific domains, their performance often falls short due to domain gaps or the under-representation of these domains in the training data. While fine-tuning VLP models on custom datasets with…
- ICSPCN 2025: This paper introduces a new trilateration method that uses a unique cost-function formulation to significantly enhance the performance of positioning systems. Fixed-position devices called locators or anchors, with predetermined coordinates, are used to determine and track the unknown location of a moving electronic tag. The optimization algorithm improves accuracy by assigning greater importance to…
- 2025: Text-to-image diffusion models have shown remarkable capabilities in generating high-quality images. However, current models often struggle to adhere to the complete set of conditions specified in the input text and return unfaithful generations. Existing works address this problem by either fine-tuning the base model or modifying the latent representations during the inference stage with gradient-based…
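For the QIP 2025 item above, here is a minimal notation sketch of the stated guarantee, assuming only the standard definition of a k-local Hamiltonian (a sum of terms each acting on at most k qubits or fermionic modes); the symbols H, h_a, ρ, and σ are illustrative and not taken from the paper:

\[
H = \sum_{a} h_a, \qquad |\operatorname{supp}(h_a)| \le k,
\]
\[
\operatorname{Tr}(H\rho) \;\ge\; \Omega\!\left(\tfrac{1}{k}\right)\,\max_{\sigma}\operatorname{Tr}(H\sigma),
\]

where ρ is the state prepared by the simplified quantum Gibbs sampling algorithm and the maximum ranges over all quantum states σ.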
Academia
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.
View all