Conventional open-domain dialogue modeling combines hand-designed dialogue flows with machine learning models. The typical architecture relies on predefined conversational flows and responder selection strategies to guide the chat. Such designs steer dialogues with hard-coded rules that fail on rare, novel scenarios or out-of-domain conversations. For example, a topic classification model used to select an appropriate responder may be inaccurate, and unseen words may cause the wrong responder to be chosen. This error propagation through the hybrid system severely limits a chatbot's generalization ability and degrades conversation quality. In this work, we introduce a purely neural, responder-aggregated architecture equipped with a knowledge base. Our architecture contains no rules or predefined heuristics, only neural responders and rankers. We show that this simple and elegant architecture can outperform traditional logic-flow and selection-strategy-driven dialogue management for open-domain dialogue modeling. It is easy to implement, simple to optimize, and robust when generating responses in new domains.
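The responder-aggregation idea can be illustrated with a minimal sketch: several responders each propose a candidate reply, and a single ranker scores the candidates and returns the best one. All names here are hypothetical, and the plain functions below stand in for neural models; the overlap-based scorer is only a placeholder for a learned ranker.

```python
from typing import Callable, List

Responder = Callable[[str], str]

def retrieval_responder(context: str) -> str:
    # Stand-in for a retrieval responder backed by a knowledge base.
    return "Here is a fact related to: " + context

def generative_responder(context: str) -> str:
    # Stand-in for a generative (e.g., seq2seq) responder.
    return "I'm not sure, could you say more?"

def rank(context: str, candidate: str) -> float:
    # Placeholder ranker: fraction of candidate words shared with the
    # context. A real system would use a trained neural ranker here.
    ctx = set(context.lower().split())
    cand = set(candidate.lower().split())
    return len(ctx & cand) / max(len(cand), 1)

def respond(context: str, responders: List[Responder]) -> str:
    # Aggregate: collect one candidate per responder, keep the top-ranked.
    candidates = [r(context) for r in responders]
    return max(candidates, key=lambda c: rank(context, c))

reply = respond("the weather in Paris",
                [retrieval_responder, generative_responder])
```

Because selection is delegated entirely to the ranker, adding or swapping a responder requires no changes to routing rules, which is the property the architecture relies on.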