This paper addresses the problem of determining the geographic location of a ground-level image using geo-referenced overhead imagery. The query image is assumed to carry no metadata, and its content is matched against reference representations constructed a priori. The semantic breakdown of the query image content is provided through manual labeling; however, all processing of the reference imagery and all matching is fully automated. In this paper, a volumetric representation is proposed that fuses different modalities of overhead imagery into a 3D reference world. Attributes of this reference world, such as the orientation of world surfaces, the type of land cover, and the depth order of fronto-parallel surfaces, are indexed and matched to the attributes of surfaces manually marked on the query image. An exhaustive but highly parallelizable matching scheme is proposed, and its performance is evaluated on a set of query images from a coastal region of the eastern United States. The performance is compared to a baseline region-reduction algorithm and to a landmark-existence matcher that uses a 2D representation of the reference world. The proposed 3D geo-localization framework outperforms the 2D approach on 75% of the query images.