Neuro-symbolic AI is a research area that aims to combine neural networks with symbolic reasoning. The idea is that neural networks and symbolic methods have complementary strengths and weaknesses, so that combining the two could alleviate their respective weaknesses. Considerable effort has gone into this research area over the past decade, with neuro-symbolic models clearly outperforming purely neural or symbolic methods. However, to date, neuro-symbolic AI has not been widely adopted by machine learning practitioners. One reason is the additional overhead of the symbolic component of neuro-symbolic frameworks compared to purely neural models, which limits scalability. Another limitation is that users of such methods generally need a background in both neural models and symbolic reasoning, two very different fields. Finally, most proposed frameworks have been applied only to a specific dataset and are not general enough to carry over to different problem settings.

In this thesis, we start by providing a new map of the neuro-symbolic AI landscape, identifying direct links between architectural choices and the strengths of neuro-symbolic frameworks. This map allows us to identify weaknesses and promising avenues for future research, including the work presented in this thesis. We then present four frameworks, all aimed at reducing the need for a background in symbolic AI while focusing on scalability and generality. First, we present Concordia, a neuro-symbolic framework that builds on efficient lifted logical reasoning. Concordia significantly reduces the amount of data required compared to purely neural models. In addition, it is agnostic to both the neural and the logical model, enabling the use of very different neural models and reducing the need for a background in symbolic reasoning or neural networks, as both components can be treated as black boxes. Owing to its efficient reasoning and model-agnosticism, Concordia is the first neuro-symbolic framework to be applied to a wide range of applications. Second, we present PRISM, an algorithm that finds templates in structured data, which are then used to learn logical formulae. Compared to prior art, we improve scalability by pre-processing the data via hierarchical clustering and an $O(n \ln n)$ algorithm for finding structural motifs in hypergraphs (compared to $O(n^3)$). Third, we present SPECTRUM, a linear-time framework for learning rules from relational data. It achieves this by mining patterns in linear time and efficiently pre-ranking rules with a new utility measure that estimates the quality of a logical theory. Both PRISM and SPECTRUM are model-agnostic and can be used with different probabilistic logical models. Finally, we present WFOMI, a lifted reasoning algorithm for efficient inference in hybrid domains, which can be used on top of neural models for high-level reasoning. WFOMI reduces the run-time complexity of inference in some settings from #P-complete to polynomial time.