This paper describes the architecture, methodology, and results of the Genuine2 chatbot for the Alexa Socialbot Grand Challenge 4. In contrast to previous years, our bot relies heavily on different types of generative models coordinated through a dialogue management policy that targets dialogue coherence and topic continuity. Several dialogue generators were incorporated to add variability to the conversations, including the dynamic incorporation of persona profiles. Given the characteristics and differences of the response generators, we developed mechanisms to control the quality of the responses (e.g., detecting toxicity and emotions, avoiding repetitions, increasing engagement, and avoiding misleading or erroneous responses). In addition, our system extends the capabilities of the Cobot architecture by incorporating modules for handling toxic users, question detection, up to six different types of emotions, classification of new topics using zero-shot learning approaches, extended knowledge-grounded information, several strategies for guided dialogues (predefined prompts), and emotional voices. The paper concludes with an analysis of our results (including ratings, and performance per topic and per generator), as well as the results of a reference-free metric that could complement the ranker's ability to select better answers from the generators.