The rapid scaling of artificial neural networks (ANNs) has highlighted the dual need for energy-efficient architectures and a deeper theoretical understanding of how models learn. This talk addresses both challenges, moving from the specific functional properties of Spiking Neural Networks (SNNs) to broader questions about the geometry of neural network loss landscapes. In the first part, I characterize the complexity and robustness of discrete-time Leaky Integrate-and-Fire (LIF) SNNs. I demonstrate that these networks realize piecewise constant functions on polyhedral regions and quantify how latency, a temporal dimension unique to SNNs, drives their expressive power. Using Boolean function analysis, I further show that wide LIF-SNNs exhibit an inherent simplicity bias: their Fourier spectra concentrate on low-frequency components, which ensures average-case stability under input perturbations. In the second part, the talk turns to a broader investigation of neural network optimization. I will present work in progress on the intrinsic dimension of loss landscapes in general neural networks: by analyzing the minimum subspace dimension required to reach high-quality solutions, we can better understand the structural redundancy of overparameterized models. I will conclude by discussing the implications of these geometric properties for model compression and for the efficiency of optimization across diverse architectures.
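
For concreteness, the model class in the first part is usually written as follows; this is a sketch of one standard discrete-time LIF formulation, and the symbols ($u_t$ for the membrane potential, $\beta$ for the leak factor, $\vartheta$ for the firing threshold) are notational assumptions rather than the talk's own:

\[
u_{t+1} = \beta\, u_t + W x_t - \vartheta\, s_t, \qquad s_t = H\!\left(u_t - \vartheta\right) \in \{0,1\},
\]

where $W$ are the input weights and $H$ is the Heaviside step function. Since the only nonlinearity is the threshold $H$, the input-to-spike-train map over a fixed number of time steps (the latency) takes finitely many values, consistent with the piecewise constant, polyhedral picture described above.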
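
The simplicity-bias statement is phrased in the standard language of Boolean function analysis; as a reminder (standard definitions, with notation assumed here rather than taken from the talk), a function $f:\{-1,1\}^n \to \mathbb{R}$ decomposes as

\[
f(x) = \sum_{S \subseteq [n]} \hat f(S)\, \chi_S(x), \qquad \chi_S(x) = \prod_{i \in S} x_i,
\]

and its spectrum is concentrated on low frequencies up to degree $k$ if $\sum_{|S| \le k} \hat f(S)^2$ captures most of the total Fourier weight. Low-degree concentration is what yields average-case robustness: low-degree functions are noise-stable, so flipping a small random fraction of input bits rarely changes the output.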
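
For the second part, the subspace analysis can be illustrated with the familiar random-subspace construction (a sketch of the usual setup; the notation $\theta_0$, $P$, $d$, $D$ is assumed here and may differ from the talk's): training is restricted to a $d$-dimensional affine subspace of the full $D$-dimensional parameter space,

\[
\theta = \theta_0 + P z, \qquad P \in \mathbb{R}^{D \times d},\quad z \in \mathbb{R}^{d},\quad d \ll D,
\]

with $\theta_0$ a fixed initialization, $P$ a fixed random projection, and only $z$ optimized. The intrinsic dimension is then the smallest $d$ at which this restricted optimization reaches a prescribed fraction of full-training performance, which is one way to quantify the structural redundancy of overparameterized models mentioned above.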