# ABSTRACTS

Parallel Session 1.1 (Room 205A)
Spatially Enabling Government I

The Geospatial Platform Initiative (261)
Ivan Deloatch, Douglas Nebert

The U.S. Government recently deployed the first version of a Geospatial Platform environment to coordinate and provide access to national geospatial assets. The Platform environment is linked to a catalog of nearly a million geospatial resources, including data sets and Web services. It provides discovery, evaluation, and visualization of maps in an integrated viewer. Select data assets representing official national data themes and data sets will be tracked using portfolio management techniques. Future capabilities will include the ability to discover and access raw geospatial data via Web Services, support publication of common services and applications, and enable Cloud-based analytical geoprocessing of data served through Web services.

The long and winding road that leads to SDI in the Americas (191)
Santiago Borrero, PAIGH, Nancy Aguirre

This paper, using a narrative perspective, adds to current knowledge on SDI background, context and evolution in the Americas over the last 15 years. It seeks to highlight particular milestones, situated where possible in the context of both the Americas and global realities. Since its inception back in the 1990s, the SDI concept and pertinent initiatives have navigated significant technological, socio-political and philosophical changes; the authors' intention is therefore to focus on specific footprints characterizing identifiable development phases. Efforts to assess SDI evolution in the Americas have followed diverse perspectives, many of them using quantitative, academic approaches based on periodic systematic surveys to deduce relevant SDI changes. Such findings have certainly increased awareness of the state of the art of local to national SDI initiatives, allowing regional baselines and representations. Conversely, the key happenings informing this paper are drawn from the authors' historical participatory observation and engagement, hence adding pragmatic perspectives to available SDI assessments. Results show that although mainstream SDI concepts and developments have fostered a mirrored evolution in the Americas, responses to 'regional difference' in SDI motives, working language and capacity building, as well as an increased technological and knowledge 'openness wave', among many other issues, have paved the way towards an unceasing advancement of SDI in the Americas. Likewise, a number of capacity-building and funding opportunities, particularly promoted by both sub-regional and global organizations, have led to more inclusive knowledge and information sharing within the region. SDI evolution has certainly been progressively nurtured by the launching and development of sub-regional geospatial initiatives.

Determination of Core Geospatial Datasets and their Related Data Custodians for South Africa.
Sives Govender, EIS-Africa

South Africa is one of the few countries in the world that has a government-funded Directorate (the National Spatial Information Framework) dedicated to coordinating and implementing a Spatial Data Infrastructure. The NSIF was responsible for drafting the SDI Act of 2003, which is also one of the few forms of legislation of its type in the world. However, despite dedicated human resources and legislative backing, South Africa's SDI efforts for almost a decade have not gained momentum, especially with regard to the production of the fundamental/core datasets needed to support spatial planning. Regular disputes over custodial issues have undermined the growth of the geospatial industry and left the GI community fragmented. In light of this, the Development Bank of Southern Africa (DBSA), in collaboration with the Committee for Spatial Information's Data Sub-Committee, commissioned a study to determine the criteria for and to define core geospatial datasets, as well as to determine their related custodians, so as to accelerate the implementation of South Africa's Spatial Data Infrastructure Act of 2003 (No. 54 of 2003). This presentation will outline the rationale for the study concluded in April 2012 (conducted by EIS-Africa and AfricaScope) as well as describe the outcomes and future strategies.

Beyond INSPIRE: Is a New SDI Paradigm Beginning to Emerge? (210)
Bruce McCormack

INSPIRE will be fully completed in 2019, but various European bodies (the Joint Research Centre, EUROGI and others) are already beginning to think that there is an increasingly pressing need to move beyond INSPIRE and, in so doing, leverage the INSPIRE infrastructure within the European Interoperability Framework, the European Digital Agenda, eGovernment and other broad relevant European initiatives. In probing the shape of a post-INSPIRE arrangement it would be necessary to take account of developments which are not covered by INSPIRE. Some of the issues which are not dealt with, or at least not adequately dealt with, in INSPIRE include, for example, 3D, real time, the Internet of Things, volunteered GI, social networking, open data, indoor location, and geotagging the hundreds of thousands of text, spreadsheet, audio, video and other materials with a location aspect that are produced each day throughout the European public sector. The current set of SDIs focuses on organising historical or current data within a 2D and static context. If and when the abovementioned new issues are integrated within an SDI framework, it may be that an 'SDI 2.0' begins to emerge from what is currently the focus of SDI development, namely 'SDI 1.0'. The presentation will focus on the activities in Europe around shaping an outline of an 'SDI 2.0', with particular emphasis on the role of EUROGI, the European Umbrella Organisation for Geographic Information, in this process.

Parallel Session 1.2 (Room 205B)
Experiences & Case Studies I

Arctic Spatial Data Infrastructure (SDI): Pan-Arctic Cooperation among Ten Mapping Agencies (84)
Martin Skedsmo, O Palmer, M Gumundsson, Fraser Taylor

The Arctic SDI is a pan-Arctic cooperative initiative among ten National Mapping Agencies from Canada, Denmark, the Faroe Islands, Finland, Greenland, Iceland, Norway, the Russian Federation, Sweden and the United States. With the current interest in climate change, increased navigation, as well as natural resource extraction and management, the Arctic has been subjected to intense scrutiny in recent years. A wide array of spatial data has been generated, but these data have been largely managed nationally or dedicated to specific issues. As a result, existing datasets are distributed throughout multiple organisations; they are often not integrated or coordinated and it is difficult to find an environment in which these diverse datasets can be combined and analysed together. The aim of this project is to jointly develop an Arctic spatial data infrastructure (SDI) that reinforces pan-Arctic science and decision making. It seeks to establish technical collaboration among the national mapping agencies surrounding the Arctic in order to provide national geographic reference data as a basis for analysis and monitoring of environmental and climate change. The information will be accessed and distributed through an SDI consisting of national servers that provide geographic datasets. The Arctic SDI will include:

- reference data as Web Map Services that establish a common image and vector base for the Arctic at a nominal scale from 1:250,000 to 1:1,000,000;
- a searchable catalogue of mappable data, including base maps and other georeferenced thematic data and services; and
- a web-based primary user interface for searching the catalogue and providing visual analysis capabilities of multiple base maps, thematic maps, and geographic data.
The project is expected to result in the following:

- Users, such as the Arctic Council, its Working Groups and the Arctic research community, will have easy access to relevant and updated geographic and thematic information covering the entire circumpolar region - data that can be used for many purposes and many different tasks.
- A distributed regional/Arctic SDI consisting of interlinked servers with high quality national geographic data will be located in each of the eight Arctic countries.
- Possibilities will be created for users to connect to Web Map Services and simultaneously access, view, and explore several types of geographic and thematic information concerning the Arctic Region.

Expected benefits of the Arctic SDI include:

- regular use of the Arctic SDI's Web Map Services and other services by national authorities;
- regular use of the project's Web Map Services in schools and universities in the Arctic and elsewhere;
- possibilities for media to receive relevant and updated information; and
- possibilities to foster cooperation with industry on Arctic issues.

There is an obvious need for a dedicated Arctic SDI that provides for the development of the necessary standards and framework to encourage more effective integration of, and access to, these datasets. It will allow for more robust management and manipulation of data for both research and management purposes.

Some Operational Challenges in Data Sharing at the Global Level: the Global Map and GEOSS Experiences (8)
Fraser Taylor

Data sharing is central to the effective use of location-based information and to a spatially enabled society. There are numerous articles and books about the issues surrounding data sharing, many of which concentrate on the challenges of interoperability, especially those of a technical nature. There are relatively few descriptions of the operational challenges involved, especially at the global scale, largely because few operational global spatial data infrastructures exist. The author is actively involved in two initiatives at the global level: Global Map and GEOSS. This paper will discuss some of the operational challenges involved in data sharing at the global scale. The GEOSS Data Sharing Task Force has been in operation for three years, and its most recent report, which includes issues such as legal interoperability, will be presented to the GEO-VIII Plenary in Turkey in November 2011. Global Map was created as a response to Agenda 21 of the Rio Summit almost 20 years ago. Both initiatives have faced technical challenges in data sharing, but those are relatively easy to deal with. The major challenges in both cases have been political and administrative, and these are rarely discussed in the literature, especially using concrete examples rather than theoretical ones. The author will use his ongoing personal experience as Chair of the International Steering Committee for Global Mapping and a member of the GEOSS Data Sharing Task Force to describe some of the major challenges and how these were addressed.

Building the Canadian Spatial Data Foundry: An online portal for large scale spatial analysis (222)
Pierre Racine, Steve Cumming

Researchers in ecology and related disciplines face increasing challenges in the use of spatial data, as exemplified by wildlife habitat or movement modeling. Typically, presence-absence or count data are collected in the field with GPS. They are then overlaid with spatial ecological covariates such as land and forest cover, road proximity, elevation, hydrology and many others. The resulting tables are finally submitted to statistical analysis to determine which covariates and model forms best explain the observations. Ecological analyses are increasingly necessary over near-continental extents, such as all of Canada or North America. To do these, one must first acquire gigabytes of geospatial data, generally divided into thousands of mapsheets, and integrate them into a GIS in a consistent format. Executing the spatial analyses remains technically difficult and time consuming, and often exceeds the capacity of GIS packages. Most researchers do not have the time, the patience or the skills to complete these tasks successfully and efficiently. Yet every year, thousands of graduate students and senior researchers spend months of their time trying to execute conceptually simple tasks on large datasets. We must say that the geomatics community has failed to provide adequate solutions to these users. The Canadian Spatial Data Foundry is an effort to resolve this problem. The design goals are simple: a central online repository of preprocessed, extensive geospatial coverages holds the spatial covariates; users upload their observational data and construct queries, generally through buffer overlay operations; the system executes the queries, summarizes the outputs and distributes the results through the web, email or FTP. We describe an international open source project that is developing the server tools and underlying GIS technology needed to implement the Foundry.
The first step was to provide support for transparent raster/vector spatial operations within the PostgreSQL/PostGIS spatial database. This system can store terabytes of data and can now perform vector/raster spatial analysis in respectable time over very large extents. The second component is an online catalog allowing system administrators and users to precisely document covariate layers, and users to upload and document their observational data. The third component is a sophisticated form interface allowing users to build and execute complex spatial queries over the covariates. Our talk will outline the major design features of each of the three components and illustrate their current implementation status, taking examples from national ecological analyses now underway.
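The buffer-overlay summarization described in this abstract is performed inside PostgreSQL/PostGIS in the actual system; the following is only a minimal, self-contained sketch of the idea in Python, with a made-up elevation grid and hypothetical observation points, showing what "summarize a covariate within a buffer around each observation" means computationally:

```python
import math

def mean_in_buffer(raster, cell_size, point, radius):
    """Mean of raster cells whose centres fall within `radius` of `point`
    (x, y) -- a toy stand-in for a PostGIS buffer-overlay query."""
    px, py = point
    values = []
    for row_i, row in enumerate(raster):
        for col_i, value in enumerate(row):
            # Cell centre in map coordinates, origin at (0, 0).
            cx = (col_i + 0.5) * cell_size
            cy = (row_i + 0.5) * cell_size
            if math.hypot(cx - px, cy - py) <= radius:
                values.append(value)
    return sum(values) / len(values) if values else None

# Hypothetical covariate grid (e.g. elevation, one value per 1x1 cell)
# and GPS observation points to be summarized against it.
elevation = [
    [100, 110, 120],
    [105, 115, 125],
    [110, 120, 130],
]
observations = [(1.5, 1.5), (2.5, 0.5)]
summary = {obs: mean_in_buffer(elevation, 1.0, obs, 1.0) for obs in observations}
```

The resulting table of per-observation covariate summaries is what would then be handed to the statistical modeling step; the real system replaces this inner loop with server-side raster/vector operations over terabyte-scale coverages.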

“Last chance to see?” – what is the role of SDIs in the race to halt biodiversity loss? (77)
Stephen Peedell, Andrew Cottam, Gregoire Dubois

Attempts to stem the rate of biodiversity loss worldwide have so far failed to produce the desired outcomes. A new impetus to address this problem was given at the culmination of the International Year of Biodiversity in 2010 with the Convention on Biological Diversity (CBD) summit in Nagoya. Outcomes of the summit include a Strategic Plan for Biodiversity 2011-2020 and the Aichi Biodiversity Targets. Global initiatives such as these generate many challenges for data gathering, sharing, analysis and presentation, and highlight the need for concerted action, including in the domain of spatial information. Are existing SDIs providing the necessary data and technological platforms for the biodiversity community? The BIOPAMA (Biodiversity and Protected Areas Management) project, jointly run by the European Commission's Joint Research Centre and the International Union for Conservation of Nature (IUCN), is addressing this question as it seeks to establish regional observatories for biodiversity information in the Africa, Caribbean, Pacific (ACP) region. BIOPAMA will be a pioneering opportunity to implement tools such as the Digital Observatory of Protected Areas (DOPA), which have been the outcome of recent JRC research projects. Much like the subject matter they deal with, the IT environments of initiatives such as BIOPAMA and DOPA are extremely diverse “ecosystems”, with highly interdependent components. Whilst the classic SDI paradigm does much to facilitate information exchange, and has many operational examples, there is an ongoing need to rapidly develop high-performance, sophisticated architectures for distributed modeling and geo-processing, which is pushing the boundaries of SDI and biodiversity informatics research. We will illustrate this through examples of our work on biodiversity monitoring across the globe.

Parallel Session 1.3 (Room 2103)
Technical Challenges I

SDIs and the Cloud, a solution to the problem of missing infrastructure? (276)
Ed Parsons

It may be argued that the development of Spatial Data Infrastructures has historically focused on the many problems around the creation, management and policies of access to geospatial information. But if an SDI is to have a real socio-economic impact, it must be accessible to as many stakeholders as possible, from government institutions to data-driven startups to the individual interested citizen. This requires a powerful, flexible and scalable technical infrastructure, something which has historically been costly and difficult to create. Modern cloud-based computing technology offers the potential to solve this problem and put the "I" back into SDI!

Spatial Metadata Automation: A Key to Spatially Enabling Platform (100)
Hamed Olfat, Mohsen Kalantari, Abbas Rajabifard, Hervé Senot, Ian Williamson

Semantic based extension of search capability of SDI (147)
Adam Iwaniak, Tomasz Kubik, Witold Paluszynski, Mateusz Tykierko, Iwona Kaczmarek

Technical Challenges in Development of Catalogue Maintenance Tools: a Vendor Perspective (223)
Bruce Westcott

Modeling Spatial Data Infrastructures based on Cloud Computing (37)

Typical Spatial Data Infrastructures are non-scalable, oriented to a top-down deployment and, generally, tied to the official providers of geographic information. This paper describes a model of Spatial Data Infrastructure based on Cloud Computing and Web 2.0. It aims to deliver significant improvements in performance, efficiency, and accessibility for users and providers. The model is based on the Reference Model of Open Distributed Processing (RM-ODP), which considers five viewpoints to describe a phenomenon: business, information, computation, engineering and technology. Finally, the implementation of the three layers of the model (SaaS, PaaS, IaaS) and an experimental validation are provided through a fleet management application based on SDI as a case study.

Parallel Session 1.4 (Room 2104A)
Assessment and Management

A Quantitative Framework for Measuring the Impact of Geo-standards on the SDI performance within work processes (252)
Danny Vandenbroucke, Joep Crompvoets, Jos Van Orshoven

Canadian Geospatial Data Infrastructure Performance Project (42)
Paula McLeod, Rhian Evans, Simon Riopel

The GeoConnections program, in its third mandate (2010-2015), will complete Canada's national spatial data infrastructure, the Canadian Geospatial Data Infrastructure (CGDI), by ensuring that it is comprehensive, usable, performant, relevant and poised for future growth and development. To support the achievement of a complete and functional CGDI, and to position it for continued relevance, the definition of the CGDI and its components, along with the vision and way forward, need to reflect changes in requirements and trends. In order to identify priorities for geospatial information access, sharing and use of the infrastructure, a multi-faceted assessment is being performed which will determine areas where work is still required to complete the CGDI. A multi-phased project, executed over a five-year time span (2010-2015), has been developed in order to assess the progress and performance of the CGDI according to the modernized definition and the updated vision and way forward. Measuring progress and performance requires a clear understanding of what the CGDI is today, and a clear vision of what the CGDI must become in the future. This presentation will outline the methodologies and present results of the first four phases of the project: the CGDI modernized definition; the updated CGDI vision, mission and roadmap; CGDI Assessment Framework development; and the 2012 CGDI Assessment findings.

Monitoring the Performance and Reliability of Geospatial Web Services (158)
Michelle Anthony, Douglas Nebert

A Spatial Data Infrastructure (SDI) provides the basis for spatial data discovery, evaluation, and application by the geospatial community. The required technical framework of an SDI relies on standards-based Web services to provide the interoperability required for access and exchange of geospatial information. As the adoption and popularity of geospatial Web services grows, it is critical that they are optimally functional and reliable. The FGDC Service Status Checker is a system for the monitoring of geospatial web services. It provides real-time data on how geospatial Web services perform according to a battery of tests and how reliable they are over a specified time period. This monitoring system allows Geoportals and other applications to utilize the Service Status Checker Web service to monitor service performance and provide a mechanism for organizations to resolve service issues in a timely manner. Several Geoportals, including Geo.Data.Gov and GeoSUR, are integrating performance information from this real-time web service into their SDI systems.
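The abstract does not describe the Service Status Checker's internal scoring, but the general pattern of summarizing a service's probe history into uptime and latency figures can be sketched as follows; the thresholds, grades and field names here are purely illustrative assumptions, not the FGDC system's actual metrics:

```python
from dataclasses import dataclass

@dataclass
class Probe:
    ok: bool           # did the periodic test (e.g. a capabilities request) pass?
    latency_s: float   # observed response time in seconds

def summarise(probes, slow_threshold_s=5.0):
    """Toy scorecard for a monitored web service: uptime percentage,
    mean latency of successful probes, and a coarse grade."""
    if not probes:
        return None
    successes = [p for p in probes if p.ok]
    uptime = 100.0 * len(successes) / len(probes)
    mean_latency = (sum(p.latency_s for p in successes) / len(successes)
                    if successes else float("inf"))
    if uptime >= 99.0 and mean_latency < slow_threshold_s:
        grade = "good"
    elif uptime >= 90.0:
        grade = "degraded"
    else:
        grade = "failing"
    return {"uptime_pct": uptime, "mean_latency_s": mean_latency, "grade": grade}

# A hypothetical probe history with one failed check.
history = [Probe(True, 0.4), Probe(True, 0.6), Probe(False, 30.0), Probe(True, 0.5)]
report = summarise(history)
```

A geoportal consuming such a score can, as the abstract notes, surface it next to each registered service so that users see at a glance which endpoints are currently dependable.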

An Investigation of SDI Assessment: Approaches (83)
Ali Javidaneh, Mahmood Reza Delavar

Knowledge is the basis of informed decision making for sustainable development. It has also been shown that 80% of decisions made are spatially referenced; spatial data therefore have a key role in sustainable development. The main objective of a spatial data infrastructure (SDI) is the effective provision of spatial data to users. Among the most important users of spatial data are the decision makers who facilitate and coordinate sustainable development. Hence it is very important to provide them with qualified spatially referenced data and services within the structured framework of an SDI. One important issue is how to assess the effectiveness and appropriateness of the SDI in meeting the final aims of sustainable development in knowledge-based societies. Although a number of approaches have been implemented to assess some aspects of SDI, little research has been reported on quantitative and qualitative measures, parameters and methodologies to assess SDI. This paper provides a critical review of the literature on SDI assessment approaches in order to propose some meta-uncertainty measures for evaluating the SDI.

Parallel Session 1.5 (Room 2104B)
Basic and Applied Research I

SPATIALIST: Spatial Data Infrastructures and Public Sector Innovation in Flanders (Belgium), The Final Results (80)
Joep Crompvoets, Glenn Vancauwenberghe, Danny Vandenbroucke, Katleen Janssen, Ezra Dessers, Lieselot Vanhaverbeke

This paper presents the final results of the interdisciplinary project SPATIALIST: Spatial Data Infrastructures and public sector innovation in Flanders (Belgium). This 4.5-year research project, funded by the Agency for the Promotion of Innovation by Science and Technology in Flanders, started in September 2007 and ended in February 2012. The key objective of the project is to determine the requirements for developing a successful Spatial Data Infrastructure in Flanders. What makes this research project unique is that it aims to identify and analyse the key elements affecting the spatial data infrastructure in Flanders from multi-disciplinary perspectives: organisational, public administration, legal, economic and technological elements are all taken into account. This multi-disciplinary approach provides insights into the relative weight of the elements that contribute to the development of spatially enabling governments and societies. The starting point of the research project was the consideration of the spatial data infrastructure concept as a network, emphasizing the dynamic and heterogeneous interactions among numerous stakeholders. In this way, spatial data infrastructures are operationalised in terms of organisations (the network nodes) that produce and use spatial data in a shared environment, and flows of spatial data between these organisations (the network links). This research project is strongly based on empirical studies, with data mainly collected by surveys and case studies. Regarding the surveys, two online questionnaires were sent to hundreds of public authorities at different administrative levels in Flanders (2008 and 2011), making it possible to identify and characterise the key spatial data flows among public authorities in Flanders. Regarding the case studies, four cases were selected: (1) spatial zoning planning, (2) flood risk mapping, (3) address management, and (4) traffic accident registration.
In the context of SPATIALIST, a case is defined as a business process between and within government organisations in Flanders, in which spatial data is accessed, used and exchanged. About six organisations were selected as embedded cases within each case. The selection of the (embedded) cases was based on expected variations regarding the disciplinary key elements. The case study data were mainly collected by multiple in-depth interviews in each embedded case. In addition, a scenario study was executed to find the best implementation strategy for the further development of the SDI in Flanders. The Multi-Actor Multi-Criteria Analysis (MAMCA) was used as a decision-making tool for evaluating the different implementation strategies. In order to increase the valorisation of the research results, SPATIALIST also organises high-level political summits, entitled 'Vlaanderen Geoland', focusing on geographic information policy in Flanders. The target groups of these annual events are decision-makers and practitioners who manage spatial data. During these events, the key research results are presented alongside practices that substantiate them. In this way, SPATIALIST aims to contribute to the implementation of a successful spatial data infrastructure in Flanders. In addition, this paper presents in more detail the main research methods applied, the coherence between the different analyses, and the final results relevant for the development of spatially enabling governments and societies.

Improving geographic information retrieval using fuzzy logic: the case of the Canadian GeoConnections Discovery Portal (49)
Rodolphe Devillers, Garnett Wilson, Orland Hoeber

Spatial data clearinghouses are a key component of Spatial Data Infrastructures (SDI), providing Web-based search engines that allow users to look for the geospatial datasets they need by browsing through large metadata repositories. Despite their relatively high number and diversity, these systems typically share a number of common parameters that support users in finding datasets that match their information needs. These include the possibility to filter the search results based on spatial and temporal extents, data types, and specific keywords. Although rarely tested rigorously, the effectiveness of these systems for finding the best possible datasets can be disappointing, often providing users with long lists of datasets that are only weakly related to their search goals. While a number of approaches have been proposed to improve the selection and ranking of the search results (e.g. using ontologies), most of these methods do not work with existing SDI catalogue structures and can therefore be harder to implement in a real context. This paper presents a geographic retrieval method that uses fuzzy logic to improve the effectiveness of search engines. The system developed provides a user interface that connects directly to an existing clearinghouse Web service. Using the Canadian GeoConnections Discovery Portal as an example, we compared dataset rankings from multiple searches using four different methods. The first method, currently used by the Canadian Discovery Portal, does not rank the results. The second, term frequency/inverse document frequency (TF-IDF), is a typical information retrieval method based on the analysis of standardized text occurrence in the metadata. The third method was a fuzzy system that allows users to assign weights to the search parameters based on their importance. The fourth is a hybrid TF-IDF/fuzzy system developed for this project that attempts to combine the benefits of TF-IDF and the fuzzy system.
The results obtained by the four methods were assessed by a geospatial expert for a number of different searches. Results indicate that the pure Fuzzy system provided the most satisfactory results, although the improvement level is dependent on the specific aspects of each search task. This study suggests that making very simple changes to existing search mechanisms could result in significant improvements in search effectiveness and ultimately in user satisfaction. This improvement of the ranking of results may not only allow increased access to the data, but it may also reduce the risk of having users misuse datasets that are not appropriate for the intended use.
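As a point of reference for the TF-IDF baseline mentioned above, the standard weighting can be sketched in a few lines; the metadata snippets and query below are invented for illustration, and the paper's actual system of course works over real catalogue metadata rather than toy token lists:

```python
import math
from collections import Counter

def tfidf_scores(query_terms, documents):
    """Score each document (a list of metadata tokens) against the query
    using raw term frequency weighted by inverse document frequency."""
    n_docs = len(documents)
    doc_counts = [Counter(doc) for doc in documents]
    scores = []
    for counts in doc_counts:
        score = 0.0
        for term in query_terms:
            tf = counts[term]                               # term frequency
            df = sum(1 for c in doc_counts if term in c)    # document frequency
            if tf and df:
                score += tf * math.log(n_docs / df)         # tf * idf
        scores.append(score)
    return scores

# Hypothetical tokenized metadata records and a two-term query.
metadata = [
    "hydrology dataset canada rivers lakes".split(),
    "land cover dataset canada forest".split(),
    "rivers discharge stations quebec".split(),
]
ranking = tfidf_scores(["rivers", "canada"], metadata)
```

The fuzzy and hybrid methods compared in the paper then go beyond this by letting users weight search parameters (spatial extent, temporal extent, keywords) by importance, rather than relying on term statistics alone.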

Model for assessing GIS maturity of an organization (123)
Jaana Mäkelä

A GIS maturity model can be used as a tool to evaluate how mature an organization is in utilizing spatial data in its businesses. A new GIS maturity model was developed in cooperation with the SDI utilization working group of the Finnish National Inspire Network. The model takes comprehensive account of the internal SDI of the organization, processes and services in which spatial data could be used, and capabilities such as leadership, the communication of spatial data, and both the internal and external cooperation. Three cities, a state institute, and a private company assessed their GIS maturities with the new model and gave feedback about the usability of the model. The results show that the new GIS maturity model reliably measures the present maturity level and is applicable to diverse organizations.

Desirable User-Identified Characteristics of Online Data Repositories for Spatially-Referenced Data (204)
James Campbell

A significant body of spatially-referenced, locally produced data exists on the hard drives and back-up systems of individual researchers, schools, non-profit groups, private associations, small companies, and other non-governmental organizations across the United States that is currently unavailable to professional scientists and to the general public. If there were an online environment, a "Commons of Geographic Data," where that data could be deposited, what infrastructure characteristics might potential users find desirable in order for them to be willing and interested in finding, consulting, and using such data? This study posited three such characteristics as desirable: make conditions of use of data files clear to potential users; provide a variety of ways to search for data; and enable users to access comments and feedback from prior users and add comments of their own. Using a combination of interviews and an online questionnaire, the desirability of these infrastructure capabilities was examined. The results could be useful to those who may design and/or operate online repositories for spatially-referenced data.

Parallel Session 1.6 (Room 2101)
Industry Showcase I

Tecterra
Gouvernement du Québec
Boreal Informations Strategies
Intergraph|ERDAS
Hydro-Québec
North West Geomatics Ltd.
Effigis Geo Solutions
OpenGeo
Solutions Consortech Inc
K2 Geospatial
Bentley
Safe Software Inc.
Compusult

Parallel Session 1.7 (Room 2105)
Quebec City Showcase
Moderator: Steeve Guillemette

In a constant effort to serve citizens better, more and more of the City's departments rely on geospatial technology to locate or analyse the various issues facing a municipal organization of Québec City's scale. With this in mind, the presenters will demonstrate the added value of geomatics in the City's day-to-day operations through concrete, current projects. Moreover, most of these projects are supported by a wide range of leading external expertise from the region. The session is divided into four parts. The first part will present the foundation, or core, of our applications: the data, together with the interactive map that is the main tool for disseminating geospatial data. The second part will address the geomatics applications and processes related to municipal infrastructure. Next, an application for remotely managing traffic signals for snow removal will be presented. Finally, the Ville de Québec's land-use planning department (Service de l'aménagement du territoire) will demonstrate how geomatics has supported efficient decision-making in its line of business.

Interactive map: the core of applications and services
Mathieu Avery

Applications for managing municipal infrastructure

Signal management for snow removal
Philippe Charland

Geomatics in the service of land-use planning
Denis Jean

Parallel Session 2.1 (Room 205A)
Spatially Enabling Government II

GeoSUR, Building the SDI Foundations in Latin America and the Caribbean (12)
Eric van Praag, Santiago Borrero, Carolina Morera, Matthew Cushing

GeoSUR is a regional program whose aim is to implement an effective inter-institutional mechanism for generating, disseminating, and exploiting geospatial data useful for decision-making in Latin America and the Caribbean (LAC). The Program has developed five main components: i) a Regional Geoportal, ii) a decentralized network of map services, iii) a LAC regional Map Service, iv) a Topographic Processing Service, and v) focused regional geoprocessing tools for energy assessment and disaster early warning. Funding and oversight for this initiative are provided by CAF, the Latin American Development Bank, while counterpart support and coordination are provided by the Pan-American Institute of Geography and History (PAIGH), and technical assistance is provided by the U.S. Geological Survey Center for Earth Resources Observation and Science (USGS/EROS) and the National Geographic Institute of Colombia. Participating agencies include, but are not limited to, the national geographic institutes and national environmental agencies of the region. In total, more than 55 national agencies have agreed to participate in the GeoSUR Program thus far, and more are expected to join in the short term. GeoSUR has adopted a decentralized system architecture, one that keeps data close to its producers. Participating agencies implement their own internet map services, spatial data catalogs, and other geoservices, and receive training and on-line technical support as they operate and maintain these on-line tools. Today more than 120 WMS services, 3,000 digital maps, 18 metadata catalogs and more than 150,000 metadata records can be found through the Program's regional portal (www.geosur.info). The Program has recently expanded its reach to incorporate several geoprocessing initiatives at a regional scale.
These include detailed hydropower assessments that allow governments to identify the best locations for hydropower plants, and early warning systems for disaster prevention and mitigation in South America. These new endeavors are possible due to the increasing volume of spatial datasets being added daily to the GeoSUR regional network by participating agencies and international organizations. The GeoSUR Program has been instrumental in making available on the web a vast array of geospatial and environmental datasets that, until now, have scarcely been distributed and have had insufficient impact on decision making. GeoSUR sets an example for the development of regional SDI programs in other regions of the developing world.
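The decentralized architecture described above rests on standard OGC Web Map Service (WMS) endpoints published by each participating agency. As an illustrative sketch of how such a service advertises its layers (the sample capabilities document below is invented for illustration, not taken from an actual GeoSUR service), the layer names in a WMS 1.3.0 GetCapabilities response can be extracted as follows:

```python
import xml.etree.ElementTree as ET

# WMS 1.3.0 capabilities documents use this XML namespace.
WMS_NS = "{http://www.opengis.net/wms}"

def layer_names(capabilities_xml):
    """Extract the <Name> entries of all layers advertised in a
    WMS 1.3.0 GetCapabilities response."""
    root = ET.fromstring(capabilities_xml)
    return [el.text for el in root.iter(WMS_NS + "Name") if el.text]

# Invented sample response, for illustration only.
sample = """<WMS_Capabilities xmlns="http://www.opengis.net/wms" version="1.3.0">
  <Capability>
    <Layer>
      <Layer><Name>hydrography</Name><Title>Rivers</Title></Layer>
      <Layer><Name>relief</Name><Title>Elevation</Title></Layer>
    </Layer>
  </Capability>
</WMS_Capabilities>"""

print(layer_names(sample))  # → ['hydrography', 'relief']
```

In practice the capabilities document is fetched from each agency's endpoint with an HTTP request of the standard form `?service=WMS&version=1.3.0&request=GetCapabilities`; a portal such as GeoSUR's harvests these responses to build its regional catalog.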

GeoNode and the World Bank: Bringing SDI Solutions to the Developing World to Enable Disaster Risk Reduction (145)
Eddie Pickle, Ariel Nuñez, Galen Evans

At GSDI 12 in Singapore, OpenGeo introduced the GeoNode open source project, designed to encourage the development and adoption of spatial data infrastructures (see http://www.gsdi.org/gsdiconf/gsdi12/slides/lightning/benthall.pdf). Before GeoNode, the lack of a readily available, affordable, and flexible SDI solution limited the ability of the World Bank and other international and civil society organizations to make decisions regarding economic, social, and environmental planning and development, especially with respect to disaster risk reduction in developing nations. Additionally, the poor availability of spatial data in post-disaster situations like the Haiti earthquake of 2010 or the ongoing famine in East Africa has limited the ability of the international community to respond to those disasters. The Global Facility for Disaster Reduction and Recovery (GFDRR) is a partnership of 38 countries and 7 international organizations (including the World Bank) committed to helping developing countries reduce their vulnerability to natural hazards and adapt to climate change. The partnership’s mission is to mainstream disaster risk reduction and climate change adaptation, in part by deploying GeoNode spatial data infrastructure solutions in participating countries. GFDRR has partnered, and will continue to partner, with agencies around the world to deploy local GeoNode instances and leverage existing technological and institutional environments to make spatial data more broadly available for disaster planning and response. GFDRR and OpenGeo have been leaders in the GeoNode community since the project’s founding and are proud to provide an accessible, easy-to-use, low-cost spatial data infrastructure solution. This presentation will describe how the GeoNode project has progressed since its launch at GSDI 12 and how it will continue to benefit SDI building and data sharing across international organizations and national, regional, and local governmental authorities around the world.
Specifically, we will discuss how GFDRR uses GeoNode in over two dozen countries to encourage easy online distribution of spatial resources and to enable organizations that produce spatial information to make it publicly accessible without complex proprietary processes or costly infrastructure. We will also discuss future goals for the software in the context of GFDRR’s global efforts. Future GeoNode work aims to supplement the current capabilities to provide better sharing and discovery, and to improve the overall usability of the software. Achieving this outcome will require improvement in the following key areas:
• Review and improvement of the existing user interface in response to user feedback and real-world use cases
• Improved tools for printing and sharing data and maps
• Enhanced collaborative features such as groups and portal pages

Organised Spatial Data Interest Community and the Development and Use of Enabling Digital Earth Technologies in Hungary (94)
Gábor Remetey-Fülöpp

Due to the recession and financial crisis in Europe, the important role of research and development, and the need to identify innovative solutions, are widely acknowledged as means of ensuring improved competitiveness, employment and economic growth. It is increasingly being realised that proper organisation and management of geographical information (GI) can play a significant role in addressing these issues. Exploiting the GI-related opportunities fully requires input from the major sectors of society: government, the private sector and the community. At the European level, the European Union has put in place the INSPIRE Directive, which aims to establish a European Spatial Data Infrastructure (SDI) based on SDIs established at the national level. Consideration is currently being given to initiating a process of establishing a European Union Location Framework, which would move the agenda beyond INSPIRE in the latter part of the current decade. Sub-national SDIs are increasingly playing a role across Europe. The European Umbrella Organisation for Geographic Information (EUROGI) is supporting a database of such sub-national SDIs and developing the related network. Digital Earth technologies are playing an increasing role in developing these infrastructures. Given its flexibility and capacity for innovation, the private sector, often in conjunction with academic bodies, also has an important role to play. A European Union initiative to promote the re-use of public sector information represents an important initiative in this regard; it is currently under review with a view to strengthening its operation. Digital Earth technologies and data outputs can provide a strong basis for private sector contributions. Community input through crowdsourcing is growing and will make an increasingly important contribution in the future. This area of innovation and growth will complement the activities of NGOs operating in the GI field.
The role of the Hungarian national GI association, HUNAGI, will be discussed, with emphasis on its role in providing a forum for interested parties (government, academia, industry) to exchange ideas, and in raising awareness by disseminating information and knowledge on geospatial data related services and enabling Digital Earth technologies. The paper briefly introduces the domestic and international legislative frameworks which are positively influencing the emerging use of GI in priority societal benefit areas. After identifying the key drivers and members of the Spatial Data Interest Community (SDIC), major emphasis will be given to illustrating the development and application of Digital Earth technologies in Hungary in the categories of spatially enabling government, spatially enabling industry and spatially enabling citizens. Some achievements in innovation, research and development will be introduced and the important role of scientific institutions emphasised. Recent initiatives related to legal challenges (i.e. the accessibility and usability of GI), as well as education and capacity building, are highlighted. Finally, a retrospective and a short outlook are given from the HUNAGI perspective on how the SDI and Digital Earth communities are, as anticipated, mutually interested in working together.

Multi-view SDI assessment of Kosovo (2007-2010) - Developing a solid base to support SDI strategy development (46)
Bujar Nushi, Bastiaan van Loenen, Joep Crompvoets, Jaap Besemer [paper: refereed book chapter]

This paper presents the multi-view assessment of SDI status in the Republic of Kosovo performed in 2007 and in 2010. The main objective of this research was to assess the SDI of Kosovo and to define the driving forces needed to support SDI strategy development. The research assesses the status of SDI implementation in Kosovo using the SDI Readiness Index (Delgado et al., 2005), the INSPIRE State of Play (Vandenbroucke et al., 2008) and the Maturity Matrix (Kok and Van Loenen, 2005) as assessment approaches. Each approach treats the assessment of SDIs from a different view and context, and thus with a different purpose in mind. A questionnaire-based SDI readiness survey was conducted among the SDI stakeholders in Kosovo in 2007 and 2010. The INSPIRE State of Play was assessed for five countries (Estonia, Lithuania, Latvia, Slovenia and Luxembourg), and an attempt to define the State of Play for the SDI of Kosovo was also part of the assessment. The final assessment defined the Maturity Matrix for the SDIs of Slovenia and Kosovo. This research led to the selection of six driving forces to support the development strategy of the SDI at the national level in Kosovo.

Spatially enabling government by semantic, scalable and smart Spatial Data Infrastructures for e-Governance (179)
Tatiana Delgado, Rafael Cruz, José Luis Capote, Silvio Hernández

Electronic governance (e-governance) comprises a more fundamental sharing and reorganising of power across all sectors, whereas spatially enabling government can be interpreted as the provision of more intuitive, scalable and intelligent geospatial services in order to support e-governance and facilitate the common relationships of electronic government (G2G, G2B and G2C). Within such a context, Spatial Data Infrastructure emerges as an important utility of e-governance in public administration.
Public administration is currently considered the heaviest service industry, with service production distributed across hundreds (even thousands) of partially independent agencies (Wang et al., 2007). This means that Service Oriented Architecture (SOA) paradigms that put the “service” notion at the core of development (e.g. Semantic Web Services and Cloud Computing) are particularly suitable for the public administration domain. E-governance is a process of reform in the way governments work, share information, engage citizens and deliver services to external and internal clients, for the benefit of both government and the clients they serve (IIIT Hyderabad, 2010). Software and techniques such as Decision Support Systems (DSS) and Recommender Systems (RS) could be relevant in assisting e-governance and public administration.

Parallel Session 2.2 (Room 204AB)
GEOIDE contributions to SDI research

Sharing data is good, but are we concerned enough about public protection and ethical data dissemination? (99)
Rodolphe Devillers, Marc Gervais, Yvan Bédard, Jennifer Chandler, David Coleman, Elizabeth Judge, Jacynthe Pouliot, Teresa Scassa

Volunteered geographic information as a thematic data resource in data-poor settings: A public health surveillance pilot study (93)

Legal liability concerns surrounding Volunteered Geographic Information applicable to Canada (256)
Andriy Rak, David Coleman, Sue Nichols

Authoritative geographic datasets are a source of accurate and reliable data. The process of acquiring, updating and maintaining such datasets using traditional approaches requires both time and costly resources. An alternative, and possibly more economical, approach to reliably creating and updating authoritative datasets involves the integration of Volunteered Geographic Information (VGI). Such integration of VGI with authoritative datasets raises important legal considerations. Liability is a primary issue that can deter organizations from incorporating VGI into their datasets. Due to the lack of research on this topic, organizations consider it better practice to exclude VGI as a viable option. In view of the benefits that VGI can bring, it is important to continue and deepen research on the liability concerns surrounding VGI, so that organizations need not fear the legal liability risks that can arise and will be equipped with appropriate techniques to manage such risks. This paper investigates the liability effects of using VGI under Canadian law. The questions of who is liable, and when, for VGI provided to authoritative public and private geographic datasets are among the most important questions affecting VGI, and are the ones this paper aims to address. The liability issues of using VGI are studied by examining liability in contract as well as in tort. Minimizing and/or eliminating liability in most cases requires organizations to develop a risk management plan (Martinez, 2003). The paper concludes with liability risk management techniques which, if incorporated properly, provide opportunities to minimize or eliminate liability. Issues of legal liability arising from the creation, distribution and integration of VGI with authoritative datasets have received very limited attention from scholars and researchers in their work; further research is required to overcome the shortcomings of existing studies of the legal liability arising from the use of VGI.

Elaborating a Cognitively Enriched Semantic Conceptual Model for Spatial Data Infrastructures to help Blind Pedestrians Navigate in Urban Areas (150)
Reda Yaagoubi, Geoffrey Edwards, Mir-Abolfazl Mostafavi [paper: refereed proceedings article]

Semantic information about the surrounding space plays a fundamental role in various tasks of navigation and wayfinding, especially for visually impaired pedestrians. Therefore, a suitable Spatial Data Infrastructure (SDI) for supporting the navigation of pedestrians who are blind should provide useful and relevant spatial data semantics to help these individuals better configure their mental representation of urban areas. The aim of this paper is to propose a design methodology for a spatial semantic database that is cognitively enriched to help visually impaired pedestrians in their daily navigation activities. The elaborated semantic conceptual model supports the definition of an SDI dedicated to improving the situation awareness of blind pedestrians in urban areas. This semantic model has a hierarchical structure, hence providing information about the environment at different levels of detail. In addition, it can be integrated with the ISO 19133:2005 standard for location based services to extend the capabilities of that standard in ways that are more supportive of the needs of the blind.

Exploring LiDAR data as basis for validating Volunteered Geographic Information (203)
Patrick Adda , David Coleman, Krista Amolins

Spatial data acquisition processes can ensure data quality in terms of spatial accuracy, thematic accuracy and logical consistency, respecting the required data-structuring rules. Two other important elements of data quality, namely completeness (including suitability for intended use) and lineage (data history, updates, and current version properties), have been shown to be lacking at later dates, when the data are to be used for important decisions. These five elements can generally be used to measure acceptable spatial data quality in geographic information. Previous research has shown that Volunteered Geographic Information (VGI) can be used to improve data quality by providing information to complete and update spatial data in real time. The validation of information from VGI sources has been an important area of concern when adopting datasets from human sensors with little or no geomatics engineering expertise. Although quantitative methods have been developed in recent years to validate contributions, the reference datasets (base maps/data) required to confirm volunteered information have not received a matching amount of research attention. In this study, the potential and limitations of using LiDAR data as a base map to validate VGI are discussed using a process called VIBE (Volunteered Information Budgeted Errors). Layers of geographic objects (points, lines, polygons) from volunteered and open GIS sources, including OpenStreetMap, Google Earth/Maps, Wikimapia, Waze, Ovi Maps, Bing Maps, MapQuest and Yahoo Maps, are mined and intersected with LiDAR point clouds to ascertain their spatial accuracy. Spatial and attribute information of 3D and 2D products derived from the point cloud (buildings, parking lots, walkways, flood simulations, vegetation and atmospheric phenomena) is compared with datasets and attributes from VGI for the same area. Real-time, simulated or pre-recorded events are compared against the spatial extents of the LiDAR data.
The results obtained from validating these measurements against the five major elements of data quality mentioned above provide a means to quantify the spatial data quality of VGI. To simplify the process of validating data accuracy from these comparisons, VIBE groups data quality standards into three categories: Volunteered Data Quality (VDQ), Professional Data Quality (PDQ) and Combined Data Quality (CDQ). PDQ sources its data from providers with geomatics expertise and is considered in this research to be the most heavily weighted in terms of accuracy. CDQ is obtained by fusing the VDQ and PDQ assessments. Early results from CDQs show that it is possible to quantitatively validate spatial data accuracy using LiDAR as a base map. This provides numerical evidence for assessing the limits to which VGI can be used either independently or in tandem with more authoritative datasets such as LiDAR.

Parallel Session 2.3 (Room 2103)
Technical Challenges II

Advances in the project for the elaboration of Chilean norms for geospatial information (122)

SNIT is the National Spatial Data Infrastructure (NSDI) of Chile and, as such, promotes the use of geographic information standards to achieve interoperability among the data and geoservices existing in the country's public institutions. To advance this objective, a national project is currently running, aimed at producing a set of national norms for geographic information derived from the translation of the ISO TC 211 standards. The idea is to obtain norms identical to the international standards, but in our own language, to serve as technical support for the development of the SDI in Chile. The term “norm” is used as an equivalent of “standard” because the national normalization body in Chile designates its final product as a Chilean norm. This is an associative project, coordinated by the Ministry of Property through the SNIT Executive Secretariat and conducted by the National Normalization Institute (INN). A number of institutions also participate as associates, each having committed a contribution in working hours to the project's development. The nineteen ISO standards covered by this project encompass general standards (19101, 19103, 19104, 19105, 19106), data models (19109), geographic information management (19110, 19111, 19113, 19114, 19115, 19131), services (19119, 19128, 19142), encoding (19136 and 19139) and specific thematic areas (19101-2 and 19115-2). The process of elaborating a Chilean norm follows the procedures established by the INN. Within these, a crucial step is the review of the draft norm through a public consultation, whose results are discussed by a technical committee specially formed for this purpose, including representatives of the public, private and academic sectors. The objective is to generate consensus documents which are finally approved as Chilean norms.
Another product of the project is a manual for the practical application of the Chilean norms, along with audio-visual resources to guide users, so that the ISO standards become more familiar to them and they are able to apply the contents of these standards in production processes and information management. To date, of the 19 Chilean norms considered in the project, a total of 7 have been approved by the INN (19101, 19101-2, 19103, 19104, 19105, 19106, 19109); that is to say, they have the status of Chilean norms. Among these, four have been made official by a specific ministry. For 2012 it is planned to carry out substantial work on content dissemination and capacity building in national public agencies, with a view to leveraging the Chilean norms generated by the project as fully as possible.

Airborne infrared hyperspectral mapping for detection of gaseous and solid targets (22)
Jean-Philippe Gagnon, Eldon Puckrin, Caroline Turcotte, Vincent Farley, John Bastedo, Martin Chamberland

Airborne hyperspectral ground mapping is being used to an ever-increasing extent for numerous applications in the military, geology and environmental fields. The different regions of the electromagnetic spectrum yield information of differing nature. Visible, near-infrared and short-wave infrared radiation (400 nm to 2.5 µm) has mostly been used to analyze reflected solar light, while the mid-wave (3 to 5 µm) and long-wave (8 to 12 µm, or thermal) infrared senses the self-emission of molecules directly, enabling the acquisition of data at night. High-resolution imagery in the visible and infrared bands provides valuable detection capabilities based on target shapes and temperatures. The spectral resolution provided by a hyperspectral imager, however, adds a spectral dimension to the measurements, providing additional tools for the detection and identification of targets based on their spectral signatures. The Telops Hyper-Cam sensor is an imaging spectrometer that enables the spatial and spectral analysis of targets using a single sensor. It is based on Fourier-transform technology, yielding high spectral resolution and enabling highly accurate radiometric calibration. It provides datacubes of up to 320x256 pixels at spectral resolutions as fine as 0.25 cm-1. The LWIR version covers the 8 to 11.8 µm spectral range. The Hyper-Cam has recently been used for the first time in two compact airborne platforms: a gyrostabilized gimbal and a belly-mounted gyrostabilized mount. Both platforms are described in this paper, and successful results of high-altitude detection and identification of targets, including industrial chemicals and ammonium sulphate, are presented.

Design and Implementation of Iran’s National Spatial Data Clearinghouse Network (134)
Peyman Baktash, Ali Javidaneh, Hadi Vaezi, Ali Mansourian, Homa Darzi, Nazila Mohammadi

The Iranian National Cartographic Center (NCC) is in charge of developing the SDI in Iran, in accordance with the country's 5th national development plan. In this respect, NCC has started the design and development of a national spatial data clearinghouse network. The architecture of this network is based on the second generation of clearinghouse networks (Mansourian et al., 2010). The development of a National Geoportal, as well as motivating individual ministries and national organizations to develop their own catalogue services and metadata repositories as part of this network, are among the activities relevant to developing the national clearinghouse network. The network is being built on national standards so as to facilitate its connection with the networks of other countries at the regional and global levels. This paper describes the architecture of the clearinghouse network and the activities undertaken for its development.

Quality Control on the Radiometric Calibration of the WorldView-2 Data (189)
Ahmed Elsharkawy, Mohamed Elhabiby, Naser El-Sheimy

Parallel Session 2.4 (Room 205B)
Legal, Economic and Institutional Challenges I

Quest for a global standard for geo-data licences (45)
Bastiaan van Loenen, Katleen Janssen, Frederika Welle Donker [paper: refereed book chapter]

During its meeting at the GSDI 12 conference in Singapore in October 2010, the Legal and Socio-Economic Working Group of the Global Spatial Data Infrastructure Association felt that the possibilities for a global licensing model for geographic data needed to be examined. The Group believes that the differences between national traditions and practices with regard to licensing may actually be smaller than generally assumed, meaning that efforts to harmonise these traditions and practices could be worthwhile. The task of seeking harmonisation of licence models was taken up by a number of academics and practitioners, who aim to take the first steps towards a global approach to licensing.

Digital Rights Management and Licensing of Geospatial Data and Services (226)
Roger Longhorn, Rüdiger Gartmann

Establishing licensing frameworks for geospatial data and services, and identifying associated ‘best practice’ or recommended ‘good practice’, have been studied in several EU-funded projects in the past few years as part of the European Union’s initiative to create a pan-European Spatial Data Infrastructure (SDI) via the INSPIRE Directive. The European SDI Best Practice Network (ESDIN) project, OneGeology-Europe and GS-SOIL all have components examining licensing and business models for national government bodies, i.e. the national mapping and cadastral agencies (NMCAs) in ESDIN, national Geological Surveys in OneGeology-Europe, and national soil bureaux in GS-SOIL. Since all geospatial data created, owned or managed by public bodies is also public sector information (PSI), the position paper on the licensing of PSI produced by the Legal Aspects of Public Sector Information (LAPSI) project is also highly relevant, since the European Commission is considering future revisions to the existing Directive on re-use of Public Sector Information. In order to better understand the positions taken by different government agencies, under widely differing data access, use, re-use and cost recovery regimes across Europe, the presentation focuses on a comparison of the findings of these four projects. It looks at the main principles identified in these projects that govern the adoption and implementation of different licensing regimes for different types of government agencies, including legal ramifications resulting from existing national and regional (European) legislation on public sector information. The implementation of a specific licensing regime depends not only on policy and the existing regulatory system of the jurisdiction in which the agency operates, but also upon suitable tools being available for license management, typically linked to access control or other forms of security management. 
These elements comprise the main components of a digital rights management (DRM) system for protecting access to, and monitoring use of, a data owner’s key asset – their data. To investigate one practical example of a DRM approach, the presentation includes an overview of the licenseManager and securityManager tools available within con terra’s sdi.suite of software, which has been implemented by several regional and national government agencies across Europe. The focus is on identifying how and where such software tools can help implement a specific licensing regime under an acceptable security/access control regime, for licensing frameworks that range from the highly complex to the most simple. Valuable lessons and good practice can also be learned from the practical experience gained in implementing these tools in real world situations with government clients, identifying the challenges encountered in different settings and the solutions employed in achieving practical DRM for an agency.

Spatially enabling Government, Industry and Citizens through People (58)
Steven Ramage, David Coleman

Today, in 2012, there are hundreds of Spatial Data Infrastructures (SDIs) around the world. Many lessons have been learned regarding the technical challenges, and there are SDIs whose senior executives are now able to explain the return on investment in this area. However, there are many other SDIs which are struggling to get off the ground, and others where sustainability is an issue. The purpose of this paper is to delve into the human aspects of SDI and set the scene for some much-needed research in this area. Information systems, and IT infrastructures in general, are meaningless without considering the human aspects, since technology is only a method for facilitating communication. The Master of Arts (MA) specialization “Human Aspects of Information Technology” at Tilburg University in the Netherlands addresses important topics in this regard. This MA falls within the Communication and Information Sciences Master's programme, where the initial questions relate to how people communicate naturally:
• How do people ask questions, and for what purpose?
• What is the meaning of the words that are used, and what kind of answers do people expect?
• When does miscommunication occur, and how can it be resolved in a natural way?
Some additional questions that could be considered for SDI stakeholders are:
• Why should people share data?
• Where are the incentives or motivation to do so?
• What are the disincentives or barriers to sharing data?
Worldwide, we know that these questions reflect a general trend and common issues. For example, data quality arises as an area where people are unwilling to release or share data because those data may be perceived as being of poor quality. There is a growing body of literature on data quality issues, but human motivational factors remain the underlying reason for not sharing, irrespective of high or low quality levels.
Public-sector accounting practices also need to be revisited to properly account for the benefits accrued across government as a result of formalized data sharing practices. A significant investment of time and resources can be involved in developing and maintaining ongoing data sharing relationships with other organizations. The return on this investment may be unclear or intangible, especially when viewed only from an internal accounting perspective. It can be frustrating for a manager to see jurisdiction-wide cost savings and other organizations reaping the benefits of shared data while his or her own organization bears the associated costs. This is why senior executive-level support is critical to the success of data sharing initiatives in a spatial data infrastructure. Especially in today’s environment, where government budgets and services are being reduced around the world, a senior champion is necessary to keep everyone’s eyes focused on the larger prize. It is through this awareness and understanding that we will be able to spatially enable Government, Industry and Citizens.

Spatial Data moving from Media to the “Real World” (88)
Stephen Little

The media are becoming obsessed with maps. Most news articles contain some sort of map:
• Electric car charge points
• Countries' debt
• Crime mapping
• The Olympic torch route
Characters within comedy or drama shows are also becoming more aware of the importance of maps; one example is television detectives being given ubiquitous access to spatial data on every computer or mobile device. Recipients of news feeds and people watching entertainment expect the information they are shown on a map to be "correct"; they are also unaware of the effort required to make these data available and accessible. The reality is that, due to network connectivity and the transition from paper to digital data, the number of spatial data silos is rapidly increasing. Often these silos store the same data, albeit characterised differently:
• Different definitions for the same real-world object
• The same feature attributed differently across different silos
• Different exchange mechanisms, formats and media to disseminate the data
Users are making decisions about which data to use based on ease of use or familiarity, as opposed to selecting the data best suited to the intended use. The delivery of coherent and consistent information across these silos is needed to support decision making across multiple domains, nationally and internationally; however, achieving this coherency and consistency has long been an issue. Within the UK, the Ministry of Defence (MoD) has for a number of years been investigating how to make coherent and consistent information available and accessible to its users. This has included the development of a reference architecture for geospatial data, to support the delivery of harmonised data to legacy applications over legacy infrastructures.
Additionally, understanding that the challenges of delivering coherent and consistent geospatial data go wider than just deploying the relevant technology, the UK MoD has a vision for a Spatial Data Infrastructure (SDI) which addresses a broader set of issues, including:
• Governance of the data
• Policies which affect use of the data
• Standards
Once achieved, this vision for an SDI will allow the spatial data so prevalent in the mass media to be realised in real-world situations. This paper will describe some of the constraints on the delivery of spatial data and the work undertaken to date by the UK MoD which has led to the vision for a Spatial Data Infrastructure, and will describe how this vision will be delivered.

Parallel Session 2.5 (Room 2104B)
Basic and Applied Research II

Towards the Assessment of Human Lyme Disease Risk in Peri-Urban Forests by Analysis of Visitors’ Activity Patterns (206)
Hedi Haddad, Franck Manirakiza, Bernard Moulin, Vincent Godard, Christelle Méha, Samuel Mermet [paper: refereed proceedings article]

Lyme disease is transmitted to humans through the bite of ticks, which are often found in forests. The disease is an increasing threat to public health, especially in (peri-)urban areas where forest spaces offer suitable environments for the establishment of tick colonies. In this work we are interested in assessing the risk for visitors of being infected by ticks in the Forêt de Sénart, a forest on the periphery of Paris, France, that is heavily used for leisure activities. One objective of this work is to collect data about visitors and their behaviors (and habits) when spending time in the forest. More precisely, we aim at identifying visitors' typical activities (activity patterns) and the places they visit, as well as the trajectories they follow in the forest. The ultimate goal is to understand which visitor behaviour patterns are at risk with respect to areas where infected ticks present a threat. This paper presents an outline of our ongoing work on data collection and analysis towards the extraction of such activity patterns.

Spatial Analysis: The Effectiveness of Seaweed as a Catalyst for Improving Ecological and Economic Qualities in Takalar Water Area, South Celebes (214)
Tri Widowati Gatot, Haryo Pramono, Adi Rusmanto, Sri Lestari Munajati

The development of the Takalar water area as a center of seaweed production has a dual role: as an economic source and as a carbon sink. Cultivation of seaweed with a continuously growing long-line system means that the role of seaweed in carbon binding, especially in water areas, cannot be ignored. Population, infrastructure, and the local economy have co-developed along with the spread of seaweed cultivation in Takalar. Kappaphycus sp. is the most widely cultivated seaweed in Takalar's territorial waters. The purposes of this research are (a) to analyze and calculate the extent of potential areas for seaweed cultivation in the Takalar water area, (b) to measure the amount of carbon absorbed when the potential areas for seaweed cultivation are utilized optimally, and (c) to estimate the likely increase in the local economy if the potential areas for seaweed cultivation are utilized optimally. Delineation of potential areas for seaweed cultivation is obtained through spatial analysis and primary data collection in the field with a systematic sampling method. Economic improvement is estimated from statistical data and the possible seaweed production if the potential areas for seaweed cultivation are exploited optimally. The minimum carbon content of Eucheuma cottonii, commonly known as Kappaphycus alvarezii, in the Takalar water area is 20.73 ± 1.73%. The results of primary data collection and image processing show that the extent of potential areas for seaweed cultivation in Takalar is 597.31 km². Based on the total potential area for seaweed cultivation and the measured carbon content, the amount of carbon that can be absorbed is 71,531,381.82 to 120,578,542.70 t C per plant cycle. The increase in GDP from the agricultural sector if seaweed cultivation in Takalar is optimized is 28%.

Adding Semantics To Spatial Content: A Land Cover Scenario (218)
Mariana Belgiu, Josef Strobl, Manfred Mittlboeck

For successful implementation of geospatial information sharing platforms, we need solutions for overcoming the syntactic and semantic 'noise' that may occur while sharing spatial data across communities, enterprises and application domains. For example, querying existing spatial data repositories is commonly based on thematic, spatial and temporal criteria. Semantic heterogeneity problems caused by the ambiguity of thematic keywords pose challenges for discovering appropriate spatial information. In recent years, ontologies have been identified as solutions to add semantics to (spatial) content, and an increasing number of ontology-based specifications of domain knowledge (domain ontologies) are available. Unfortunately, most existing domain knowledge bases operate as standalone solutions for overcoming semantic heterogeneity among disparate spatial data sources. This paper suggests a modularized ontology-based framework flexible enough to integrate existing knowledge bases. The framework utilizes lightweight ontologies for explicit specification of domain conceptualization and maps domain concepts against related concepts defined in other formalized concept schemas. SKOS (Simple Knowledge Organization System) vocabulary elements are used to specify both hierarchical (general/specific or broader/narrower) and associative relations between concepts defined in different Knowledge Organization Systems (consistent schemata). The developed prototype framework has been applied to land cover datasets, which are important spatial assets for environmental, societal and economic analyses. The proposed approach is considered an important building block in improving land cover dataset integration and sharing. Challenges associated with domain ontology development are also discussed.
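As a rough illustration of the SKOS relations described above, the sketch below (with invented concept names standing in for real land cover vocabularies; it is not the authors' actual framework) models broader/narrower hierarchies and a cross-scheme closeMatch link as plain Python dictionaries rather than a triple store:

```python
# Hypothetical concept mappings between two land cover vocabularies.
# skos:broader is hierarchical; skos:closeMatch is an associative link
# into a concept defined in a different Knowledge Organization System.

MAPPINGS = {
    ("clc:BroadleavedForest", "skos:broader"): "clc:Forest",
    ("clc:ConiferousForest", "skos:broader"): "clc:Forest",
    ("clc:Forest", "skos:closeMatch"): "lccs:TreeCoveredArea",
}

def broader(concept, mappings):
    """Return the broader concept for `concept`, or None if it is a top concept."""
    return mappings.get((concept, "skos:broader"))

def cross_scheme_match(concept, mappings):
    """Walk up the broader hierarchy until a closeMatch into another
    scheme is found; return that matched concept, or None."""
    current = concept
    while current is not None:
        match = mappings.get((current, "skos:closeMatch"))
        if match is not None:
            return match
        current = broader(current, mappings)
    return None
```

For example, a query for `clc:BroadleavedForest` can be resolved against the second vocabulary by climbing to `clc:Forest` and following its closeMatch link, which is the kind of traversal that makes heterogeneous repositories jointly searchable.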

Zoonosis-MAGS: A generic multi-level geosimulation tool for zoonosis propagation (208)
Mondher Bouden, Bernard Moulin

Several approaches have been proposed to model and simulate the spread of infectious diseases such as West Nile virus (WNV) and Lyme disease. However, these approaches, such as mathematical models, cellular automata and traditional multi-agent systems, have some weaknesses when trying to model and simulate the influence of geographic and climatic features on disease spread. In this context, we propose a multi-level geosimulation approach to remedy some shortcomings of current methods. Using this approach we developed the architecture of a new generic tool called Zoonosis-MAGS. This tool will allow public health decision makers to understand and estimate the magnitude of the evolution of an infectious disease in a large territory. It can also help them make informed decisions by assessing the consequences of different intervention scenarios. In addition, we developed a new theoretical model called MASTIM which is used to specify the spatio-temporal interactions of the various kinds of actors (e.g. mosquitoes, ticks, birds, mammals) involved in zoonosis propagation.

Parallel Session 2.6 (Room 208AB)
TECTERRA Workshop: Supporting Canadian Geomatics Technology Development and Commercialization
See the workshop schedule

Parallel Session 2.7 (Room 2104A)
Mapping Information Branch NRCan

The Vision of MIB in putting the Canadian Geomatics Action Plan in place / La vision de la DIC dans le contexte de la mise en œuvre du plan pancanadien d’actions en géomatique
Éric Loubier

The Mapping Information Branch (Direction de l'information cartographique, DIC) strives to meet the needs of the Canadian geomatics community in the context of a knowledge-based economy, which depends on fast, easy access to digital information. Our main objective is to provide integration of and access to geospatial data that meet the requirements of government, universities, professional organizations, and individual users. In collaboration with the Canadian geomatics community, MIB has established a list of priorities and actions called the Pan-Canadian Geomatics Action Plan. This plan has shaped the priority projects of the Canadian geomatics community and of MIB. The action plan identifies two major challenges: demonstrating the value of geolocated knowledge as an essential tool in policy and decision making, and working hand in hand with a variety of stakeholders to make geospatial data accessible in a common, shared infrastructure.

Flexible treatment of vector data: The RefVec Project / Gestion flexible des données vectorielles: le projet RefVec
Jean-Marc Prévost

The RefVec project contributes to MIB's objective of geospatial data integration and access. Its strategy rests on three pillars: a feature catalogue, metadata, and a flexible database. The feature catalogue, compliant with international standards, normalizes the data and thus facilitates sharing MIB data with partners, clients, etc. A metadata model linked to features facilitates the management, collection, and distribution of information. The flexible database model makes it possible to integrate, within a single repository, data from diverse sources with very different characteristics (heterogeneous data models, multi-scale, etc.). In addition, the database will accept seamless geospatial data, i.e. without a predefined tiling scheme. An internal mechanism for fragmenting large geometries will reduce the size of these features to ease the management (e.g. updating) of this information. The fragmentation algorithm allows fragmented geometries to be reconstructed identically to their original forms. The fragmentation process is transparent to database users while allowing geospatial information to be managed regardless of its size.
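The abstract does not specify RefVec's fragmentation algorithm; as a purely illustrative sketch of the general idea, one way to split a large polyline into bounded fragments that can be reassembled into an identical copy of the original (consecutive fragments share a boundary vertex) is:

```python
def fragment(coords, max_len):
    """Split a polyline (list of (x, y) vertices) into fragments of at
    most `max_len` vertices. Consecutive fragments share one boundary
    vertex so the original geometry can be rebuilt exactly."""
    assert max_len >= 2, "a fragment needs at least two vertices"
    fragments = []
    i = 0
    while i < len(coords) - 1:
        fragments.append(coords[i:i + max_len])
        i += max_len - 1  # step back by one: the shared boundary vertex
    return fragments

def reassemble(fragments):
    """Rebuild the original vertex list, dropping the duplicated
    shared vertex at each fragment boundary."""
    coords = list(fragments[0])
    for frag in fragments[1:]:
        coords.extend(frag[1:])
    return coords
```

Because reassembly is lossless, a database layer doing this internally can stay transparent to its users, exactly the property the abstract describes.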

Modernization of the Management Model for Elevation Data / Modernisation du modèle de gestion des données d’élévation
Nouri Sabo

In Canada, national elevation data are managed and served to the public free of charge as flat files via the GeoBase portal. Unfortunately, managing and distributing elevation data in this form is very limiting (e.g. fixed projection and format) and poses serious problems (e.g. continuity across tiles). To manage and distribute these data efficiently, Natural Resources Canada has initiated a project called the Geospatial Grid. Within this project, a new data structure called GeohashTree, capable of managing different types of elevation data at different resolutions, has been developed. Besides facilitating the management of elevation data, this structure considerably reduces data storage space while allowing spatial interaction with other data.
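The GeohashTree structure itself is not detailed in the abstract; as a hypothetical sketch of the general idea behind geohash/quadtree-style multi-resolution indexing, the function below computes a hierarchical cell key for a point. Each extra digit halves the cell in both dimensions, and a coarse cell's key is a prefix of all of its children's keys, which is what makes storing and querying data at mixed resolutions cheap:

```python
def quadkey(lon, lat, depth, bounds=(-180.0, -90.0, 180.0, 90.0)):
    """Compute a quadtree cell key for a point. At each level the current
    cell is split into four quadrants (0=SW, 1=SE, 2=NW, 3=NE) and the
    digit of the quadrant containing the point is appended to the key."""
    west, south, east, north = bounds
    key = []
    for _ in range(depth):
        mid_lon = (west + east) / 2.0
        mid_lat = (south + north) / 2.0
        quadrant = 0
        if lon >= mid_lon:          # eastern half
            quadrant += 1
            west = mid_lon
        else:
            east = mid_lon
        if lat >= mid_lat:          # northern half
            quadrant += 2
            south = mid_lat
        else:
            north = mid_lat
        key.append(str(quadrant))
    return "".join(key)
```

A store keyed this way can hold a coarse national DEM under short keys and finer local patches under longer keys sharing the same prefixes, one plausible reading of how a single structure can manage "different types of elevation data at different resolutions".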

Access and distribution: A single (common) geospatial portal / Accès et distribution: le portail géospatial unique
Yvan Désy

This presentation will provide an overview of the Open Geospatial Platform being delivered through the Federal Committee on Geomatics and Earth Observation by NRCan's Earth Sciences Sector. It will focus on work to date on the technical architecture, service offerings, and application programming interface, the latter of which will establish a standards-based approach to building interoperability (through protocols and standards) among different technologies and programming languages. This leading-edge application will enable Canadians to access geospatial tools, data, and services 24/7/365, and follows the maxim of build once, use many times, a principle grounded in the need for fiscal prudence and effective program delivery.

Parallel Session 3.1 (Room 205A)
Spatially Enabling Government III

Development of an Arctic SDI in support of Arctic Council requirements (257)
Douglas Nebert

The Arctic Council approved in 2010 the creation of an Arctic Spatial Data Infrastructure Initiative to support its various programmes of work. The eight circumpolar nations have convened a technical working group to define the requirements and deploy national and international data and map services that meet base-mapping needs at scales of 1:250,000 to 1:500,000, using a consistent polar projection and symbology. The primary audiences of the Arctic SDI are the scientists and planners involved in Arctic Council projects, who would use the system as a base on which to publish and explore additional Arctic data in its proper context. This paper will present the current state of, and plans for, the execution of the Arctic SDI.

Efforts toward the development of Global Map (269)
Yoshikazu Fukushima, Takayuki Nakamura, Tsutomu Otsuka, Takeshi Iimura, Noriko Kishimoto, Taro Ubukawa, Kiyoaki Nakaminami, Yusuke Motojima, Masaki Suga

The Global Mapping Project aims to develop the Global Map, basic geospatial information covering the whole globe, through the international cooperation of National Mapping Organizations (NMOs) around the world. Global Map Version 2, based on the new Global Map Specifications adopted in 2009, will be completed in 2012. To facilitate the development of the vector data of Global Map Version 2, the Global Map Data Check Software (GMDC), a Metadata Editor, and the Manual for Development and Revision of Global Map have been developed by the ISCGM Secretariat and delivered to NMOs. Concerning raster data, Working Group 4, in charge of raster data development, is working to develop Global Land Cover and Vegetation data in collaboration with NMOs. The new raster data will have a 500 m resolution, with better accuracy than the 1 km resolution of Version 1.

Case study of a spatially enabled government application using INSPIRE SDI standards (97)
Dirk Frigne

This paper presents the development, deployment and usage of a regional SDI built for the Flemish region, one of the three geographical and political regions of Belgium. The case is particularly interesting because it is an example of taking the SDI services available in the region and bundling them together in a generic application that is used by different stakeholders. Starting from a short overview of the context, the project will be demonstrated from a user perspective with a live demonstration. Flanders is a region with many small and medium-sized businesses (SMBs). Until early 2010 there was no map available with the exact location of all these companies. Especially (local) governments had no precise idea of the number of companies and their exact locations. The goal of the website is to create a map of all companies, industrial areas, etc. in the regions of Flanders and Brussels. With this project, citizens, government officials and business people can get a clear view of economic activities in their region. The project, also known as the Magda Geo Platform [Raes L. (2011)], will be used as an example to discuss the different aspects that were taken into consideration, such as technological and economic aspects, social/institutional/organizational issues, and political/legal issues. The overall objective of the project is to create a common platform that can act as an integration platform for the different available SDI services and to which new actors can gain access to participate. The project illustrates how different SDI data sources can be combined into an application environment, using open source, open data and a combination of governmental and open data. The different available open datasets are combined into a generic application that illustrates the re-use of available governmental data in an original way.
The application is an excellent illustration of a scalable web GIS application that can be used as a framework for further extensions in- and outside the Flemish region. The resulting application is available under an AGPL open source license. A new community is in the process of being formed, as members of the Geomajas community (www.geomajas.org) from the US, Nigeria, India and the Netherlands have shown interest in adapting the data sources to their local layers for use in a different location. As these activities were still at a very early stage at the writing of this abstract, an up-to-date state of the community will be presented at the conference. During the presentation we will particularly focus on the usability and design aspects of the application: how it is organized and how it can be used as a tool for business people and citizens. The project will be discussed from its beginning until its actual use after being in public use for a couple of months. Deployment is scheduled for early Q1 2012. Some sample previews can already be viewed on the Geomajas website (http://www.geomajas.org/cases).

Multi-Agent Knowledge Oriented Cyberinfrastructure for Spatially Enabled Society (101)
Chihhong Sun, Chinte Jung, Minfang Lien

This paper aims to develop a framework, called the multi-agent knowledge oriented cyberinfrastructure (MAKOCI), which integrates multi-agent systems and ontology technology to deal with the semantic discovery and sharing of geographic information (GI) services and knowledge, in order to create a spatially enabled society in which geospatial information is used widely by businesses and processes to encourage creativity and product development, improve efficiency and effectiveness, and act as an innovator and enabler. MAKOCI integrates government geospatial information and academic research results into a platform where geospatial applications can be easily developed and registered by geospatial experts. Users of geospatial information can then easily find and use the applications developed by geospatial experts in the MAKOCI portal. MAKOCI includes three subsystems: (1) a GIS App Store, where geospatial web service providers can publish their GI services based on the ontologies and web service consumers can discover the registered geospatial applications assisted by the multi-agent system; (2) an ontology editor (ONTOEDIT), where domain experts can edit and manage knowledge by editing ontologies; and (3) an intelligent spatial decision support system platform (iSDSS), where users and developers can compose GI services into an executable workflow to support decision making.

On the Road to Spatially Enabled Government: Case Study Croatia (180)
Tomislav Ciceli, Lejerka Rasic, Zeljko Hecimovic

pact on NSDI are briefly described. The main current activities concerning NSDI are the drafting of a new NSDI law and, on the technical side, the establishment of a national geoportal. The first law defining NSDI in Croatia came into force in 2007. It was a big step at that time, but in the meantime the need has emerged for a new law, updated with a revised strategy and fully in line with the INSPIRE directive. The new NSDI law is being written, and as a future European Union (EU) member state, Croatia needs to implement the INSPIRE directive before joining the EU (July 2013). As mentioned before, the second important activity is the establishment of a national geoportal. The current situation shows the existence of several spatial data viewers established by different institutions, but there is a need to establish a common access point to discover and further use the spatial data. Services offered by the State Geodetic Administration (geoportal, gazetteer of geographical names, CROPOS (Croatian Positioning System), e-cadastre), the Land Parcel Identification System by the Agency for Payments in Agriculture, Fisheries and Rural Development, the Protected Areas Management System by the Ministry of Culture, as well as the Croatian Mine Information System established by the Croatian Mine Center, are presented. The final goal is to connect the relevant spatial data services through the national geoportal, in line with the law defining NSDI in Croatia.

Parallel Session 3.2 (Room 204AB)
GEOIDE Book session: Added Value of Scientific Networking (I)

The Added Value of Scientific Networking: Assessment by the GEOIDE Network Participants 1998-2012; Book Overview
Monica Wachowicz

Does it make any difference to organize multi-disciplinary, multi-institutional projects? What is the value-added by the network form of collaboration? These questions are frequently asked but rarely answered.
Over the past fourteen years, the GEOIDE Network (Geomatics for Informed Decisions) has mobilized $84M in research effort across Canada. All participants in the Network over this period have a story to tell, and this book is the collection of these stories. Participants in the network have collaborated to contribute their viewpoints and their assessments. Some come from long-term participants whose activity spans the whole fourteen-year history, while others come from one particular moment, at the start or the end.

The GEOIDE Students' Network (GSN) and the GEOIDE Summer School (GSS) – History and lessons learned from thirteen years of students' networking in Canada
Rodolphe Devillers, Trisalyn Nelson, and Steve Liang

Over its existence, the GEOIDE Network has contributed to the training of about 1,400 students who now make up a significant part of the new generation of geomatics professionals and scientists working in Canada and abroad. From its start, GEOIDE recognized the need to create a network within the network that could improve students' training and professional skills through collaborations across Canada. This chapter presents, through the history of the GEOIDE Students' Network (GSN), the challenges of developing such a broad interdisciplinary and bilingual network in a large country like Canada. We discuss the impact that leadership, communication tools and face-to-face meetings can have on the success of such a network, and look at the synergy that existed between the GSN and its sister initiative, the annual GEOIDE Summer School (GSS). From this experience, we draw a number of recommendations that can be used by other organizations that would like to create and benefit from such a network.

Twelve Years of GEOIDE-Sponsored Research and Development on Multi-Agent and Population-Based Geo-Simulations for Decision Support
Bernard Moulin

This chapter provides a historical view of twelve years of research on multi-agent geo-simulations (MAGS) for decision support, applied to a variety of domains such as the design of parks, crowd simulation, the simulation of customer visits in shopping malls, the control of wildfire spread, and the simulation of the interactions of insect and animal populations in the spread of West Nile virus and Lyme disease. This chapter tells the 'inner story' of these 12 years of research which, in retrospect, appears as a complete and articulated research program on MAGS for decision support. It presents the main milestones of this program and emphasizes how the GEOIDE Network gave us opportunities to team up with industrial and governmental partners and different Canadian and international research teams in a series of projects (PADI-Simul, MAGS, MUSCAMAGS and CODIGEOSIM) and a constellation of companion projects.

Geomatics and the Challenge of Bridging the Two Cultures
Kevin Schwartzman, Paul Brassard, Jason Gilliland, François Dufaux, Kevin Henry, David Buckeridge, and Sherry Olson

From an opportunistic venture initiated in the first phase of GEOIDE funding (2000–2002) emerged a twelve-year collaboration, ramified and open-ended, generating research approaches and GIS applications in history and in health. From this experience the authors argue that the professional environment for scientific networking has changed little in 12 years, but suggest some "conversational" strategies for throwing bridges across disciplinary divides.

Opportunities and Challenges in Collaborative Training Environments
Charmaine Dean, W. John Braun, David Martell, Douglas Woolford

This paper describes our experience in conducting interdisciplinary collaborative research within our GEOIDE research network. We begin by listing factors that we feel contributed to our ability to carry out research in an interdisciplinary environment, noting impacts on both the students and the other researchers involved in the project. Challenges arising from cross-institutional, cross-disciplinary research are described next. We conclude with a list of some of the successful outcomes of this collaborative experiment.

Parallel Session 3.3 (Room 2103)
Technical Challenges III

Semantic Enhancement of Gazetteers with Feature-Feature and Feature Part-Whole Relationships
Kate Beard

Gazetteers serve the valuable role of supporting geospatial searches by linking placenames to geographic coordinates and supporting several name variants such as local vernacular, multilingual, and temporal variants. Most gazetteers also associate a feature with a standard feature type. Recent research has suggested that gazetteers can benefit from semantic enhancements, one enhancement being a more formal specification of feature types to improve semantic interoperability among gazetteers. Gazetteers can also be semantically enhanced by more formal specification of relations among feature types. Currently gazetteers do not support queries on relationships between features or feature parts. For example, a gazetteer cannot currently be queried for the bodies of water connected by a canal, the countries a canal runs through or separates, the tributaries of a river, the beaches along a coastline, the peaks within a mountain range, the bays within a gulf, or the rivers that run into a gulf. ISO 19112 allows parent-child relations between features as thesaurus broader-term/narrower-term relationships. This presentation will discuss how these relationships can be semantically enhanced through ontologies of feature-feature and feature part-whole relationships.
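As a small illustration of the feature-feature and part-whole queries Beard describes (the relation vocabulary and storage scheme below are invented for illustration, not taken from ISO 19112 or the presentation), such relations could be represented as triples alongside a gazetteer and queried directly:

```python
# Hypothetical relation triples layered on top of a placename gazetteer.
RELATIONS = [
    ("Ottawa River", "tributaryOf", "St. Lawrence River"),
    ("Saguenay River", "tributaryOf", "St. Lawrence River"),
    ("Welland Canal", "connects", "Lake Ontario"),
    ("Welland Canal", "connects", "Lake Erie"),
    ("Mont Jacques-Cartier", "partOf", "Chic-Choc Mountains"),
]

def query(relations, predicate, obj):
    """Return all subjects standing in `predicate` to `obj`,
    e.g. the tributaries of a given river or the peaks within a range."""
    return [s for (s, p, o) in relations if p == predicate and o == obj]
```

With such relations in place, "which canals connect to Lake Erie?" or "what are the tributaries of the St. Lawrence?" become one-line queries instead of being unanswerable by the gazetteer.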
Semantic metadata in SDI for decision support systems in spatial planning (162) Jaromar Lukowicz, Iwona Kaczmarek, Adam Iwaniak Standardization process associated with the construction of spatial data infrastructure (SDI) is focused on data modelling, resources' description and tools sharing various geospatial resources. The current SDI architecture is effective for systems with a simple and well defined structure, e.g. geodetic data. In this scope the SDI is a useful reference resource base. Spatial planning data, including planning documents and data used in their creation are not of a simple structure. It comprises information obtained from the description of the spatial transformation processes, prognoses of future changes as well as the results which constitute a basis for the spatial planning policy of public authorities. Therefore, spatial planning issues and document recording methods tend to be problematic in terms of sharing in the SDI. Another problem is using other SDI resources as an input information for spatial planning. Data processing for planning purposes requires advanced object aggregation and generalization, needing tools for searching individual objects, assessing their values and processing them. Delivering this functionality is difficult in the current SDI because of the adopted metadata model. The authors present their idea to complement the metadata system used in the SDI. Currently metadata are organized in profiles which constitute a set of elements describing the features of a data series or datasets. This system provides no description of individual resource objects. Therefore, it is difficult to select the appropriate data objects, verify them and perform automated analyses and design or decision procedures. Identifying specific objects and their attributes requires reflecting the logical structure of data. A metadata scheme definition has to assume a meta-model for the area of interest. 
Metadata profiles in the SDI do not have such features, and so they provide neither a description of individual objects nor any information about relations between objects and about their attributes. The authors propose to enhance the SDI resource description with metadata obtained through the semantic web technology. It contains tools in the form of ontologies (e.g. the OWL language), RDF graphs and a thesaurus (in SKOS). Ontologies allow for defining a metadata logical model which reflects the data structure. It is possible due to defining the object class and attribute hierarchy, and then associating them to each other by defining predicates (properties). An ontology would satisfy the need for a meta-model explaining structure of objects and their predicates, describing how to fulfil users' requirements and how users can utilize available resources. Since the OWL language, being based on description logic (DL), offers only limited capabilities, it is advisable to use rule-based languages such as SWRL for analysing the resources described using ontologies. In that case every procedure (spatial analysis, the design process, administrative decisions) would be equipped with a defined set of rules. They would contain criteria for testing requirement fulfilment for a specified outcome (e.g. planning permission). Data resources defined in that manner would be searchable with regard to meeting some criteria specific to a chosen purpose, supplying invaluable information for planners, authorities and investors. Object-based classification of traffic and built-up areas using a stereo pair of GeoEye-1 imagery (234) Bahram Salehi, Yun Zhang, Ming Zhong Classification of traffic and built-up areas using very high spatial resolution (VHR) imagery is a challenging task because of the spectral similarity of these land cover types. 
A method for addressing this problem is to integrate height information, in the form of digital surface model (DSM), with VHR imagery in the classification process. A number of researches have combined LiDAR-derived DSM with VHR imagery for classification of urban areas. LiDAR data, however, is expensive and not readily available for many urban areas. Fortunately, most of the VHR satellites such as GeoEye-1 have the capability of stereo imaging and thus a DSM can be generated using the stereo images. Furthermore, utilizing object-based image classification compensates the possible mis-registration between the VHR image and the DSM. In this study a pair of GeoEye-1 imagery were used for the classification of built-up and traffic areas in a complex urban environment. A DSM was first generated using the stereo images and the rational polynomial coefficients (RPC) of the satellite. Then, a rule-based object-based classification framework was developed, employing both the DSM and the image, to classify the area. In the rule-set different spectral and spatial characteristics of the image objects including morphological, textural, and contextual properties were utilized. The visual inspection of the results, in this stage of the research, shows the very good performance of the method in separating these two land cover classes. This demonstrates the very high potential of VHR stereo imagery coupled with object-based image analysis for detailed mapping of urban environments. A User-centered Multicriteria Spatial Decision Support System for Participatory Decision Making: An Ontology-based Approach (132) Mohammadreza Jelokhani-Niaraki, Jacek Malczewski [paper: refereed proceedings article] Integration of GIS and Multicriteria Decision Analysis (MCDA) in the Web environment has recently gained much attention in participatory (collaborative) spatial decision making. 
This research trend has focused on the development and use of Web-based collaborative Multicriteria Spatial Decision Support Systems (MC-SDSS). These systems are often based on the views of experts (planning practitioners, analysts) and provide only generic spatial MCDA (SMCDA) methods for participatory decision making. Furthermore, they do not provide the participants with a choice of their own criteria, alternatives, and preferences. This research seeks to alleviate this shortcoming by proposing a user-centered MC-SDSS. The proposed approach structures the SMCDA elements using the formalism of ontological knowledge representation. The system provides the users with an adaptive and customizable participatory Web platform. It enables the participants to modify the pre-defined SMCDA model according to their preferences. Each user/participant can define his/her decision alternatives, constraints, and evaluation criteria (the criterion weights), and generate his/her solutions to a given decision problem. The individual solutions can be aggregated into a collective/group solution. The paper presents a prototype implementation of the ontological approach to MC-SDSS. The approach is illustrated with a parking site selection problem in the City of Tehran, Iran.

Smart Web Editing and Workflow Optimization (272)
David Monaghan, Desmond Khor

Working in multi-disciplinary environments introduces complex requirements and challenges that many conventional GIS solutions cannot support. While all users in an organization may require access to common data, access to specific records may vary depending on department, role, or geographic jurisdiction. User access may also vary as responsibilities change over the course of a project lifecycle.
You need a solution that meets the needs of multidisciplinary organizations such as Departments of Transportation or municipal governments, with highly configurable rules and a workflow engine that enables the implementation of dynamic life-cycle workflows, feature-level access control, data validation and behavior, and integration with other systems. Discover the breadth of organizations that have deployed such a solution, from municipalities, through transportation and utility infrastructure operators, to government emergency management agencies.

Parallel Session 3.4 (Room 205B)
Legal, Economic and Institutional Challenges II

Being Open - Victorian Government Experience in Open Source and Standards (260)
Denise McKenzie

The introduction of the Web 2.0 environment has quickly swept in not only a new realm of technology, it has also introduced a number of new players into the spatial marketplace. This presents many opportunities for government users, but also requires careful consideration of the benefits versus the risks of adopting these new services, the technologies that support them and the companies who provide them. The very nature of the role that government plays in the development of spatial capability is also being changed and challenged by these new technologies. Open Source software development has also soared in the past five years and has moved from being a specialist niche to a mature competitor to many traditional proprietary spatial products. The State Government of Victoria has been exploring these new technologies with two key projects: Vicmap API and VicGIS. The Vicmap API project was initiated in response to Google's decision to remove PSMA-licensed data in late 2010, leaving a need for a map API for use in government websites and web services that contains both accurate and regularly updated spatial data.
The VicGIS project has been a collaborative venture between four Victorian government departments exploring the use of Open Source technology for the development of GIS capabilities that are easily consumed and utilised by non-specialist users, with a centralised catalogue of services and data storage.

Traditional knowledge and “volunteered” geographic information: digital cartography in the Canadian North (14)
Teresa Scassa, Fraser Taylor, Nate Engler

Digital cartography offers exciting opportunities for recording indigenous knowledge, particularly in contexts where a people’s relationship to the land has high cultural significance. Canada’s north offers a useful case study of both the opportunities and challenges of such projects. Through the Geomatics and Cartographic Research Centre at Carleton University, Inuit peoples have been invited to become partners in the creation of innovative digital atlases. Examples include creating atlases of traditional place names, recording the patterns and movement of sea ice, and recording previously uncharted and often shifting routes traditionally used over ice and tundra. Such projects have generated interest in local communities because of their potential to record and preserve traditional knowledge, and because they offer an attractive visual and multi-media interface that can address linguistic and cultural concerns. However, given the growing interest by corporations in the natural resource riches of the Arctic and the concomitant rise in government concern over claims to Arctic sovereignty, such maps may be of interest to a broad range of actors and for a variety of purposes. Because these projects rely heavily upon, and record, oral knowledge, and because they convert such knowledge into highly malleable and easily disseminated digital content, they raise challenging issues around informed consent, intellectual and cultural property, and privacy.
This paper identifies and examines these issues, and describes a collaborative and interdisciplinary research project established to identify and address the use of traditional knowledge in digital cartography.

Free Data Now! Okay, but how? (137)
Frederika Welle Donker

In the last year there has been a growing trend for European governments to make their datasets available as open data. Examples are the British data.gov.uk portal, the Dutch data.overheid.nl portal and the recent EU tender for a data.gov.eu portal (European Commission, 2011). There are five main drivers for this trend. Firstly, technology, such as web 2.0 and cloud computing, provides better opportunities to disseminate large amounts of data. Secondly, there are legal obligations; for instance, the Aarhus Convention obliges governments to provide free access to their environmental information. Thirdly, there is also a moral obligation to facilitate citizens’ participation in democratic processes. Publishing public sector information promotes transparency and accountability of governments (Shadbolt, 2010). Citizens are encouraged to become more involved with their local government through crowdsourcing (TNO, 2011). By providing feedback, citizens can improve the quality of incomplete or incorrect data. Fourthly, making Open Data available improves the efficiency and effectiveness of governments. Public service improves and transaction costs are lower if less time is spent on handling individual data requests. In addition, sharing data can promote better policies, facilitate social reform and build smarter governments (Saxby, 2011). Finally, and importantly, society and the economy will benefit as Open Data is reused for innovative applications and services. However, making data available as Open Data may not be as simple as it seems. How does a public sector body decide which data to publish as open data and which not?
This paper focuses on the practical issues of Open Data as the role of governments changes from public sector information holder to information facilitator. From a policy analysis perspective, we investigate the process of changing from a cost recovery policy to an open access policy and the role national government can play. Next, legal aspects of Open Data, such as obligations and potential barriers, are discussed. Subsequently, organisational aspects, such as formats and portals, are considered. An important issue is the question of how pro-active a government body should be in disseminating Open Data, as raw data or adapted to suit the market. We will also address the financial aspects of a data policy change: is there a way of quantifying the efficiency and economic gains? This paper applies the described aspects to a case study of a Dutch government agency intending to change to an Open Data policy. The analysis demonstrates how the process works in practice and how obstacles may come from unexpected quarters. Although there are many benefits of Open Data, it requires serious consideration before opening the floodgates.

Bibliography
European Commission (2011). Call for tender: implementation of European Commission Open Data portal - SMART 2011/0050 (deadline: 19 September 2011).
Saxby, S. (2011). "Three years in the life of UK national information policy - the politics and process of policy development". International Journal of Private Law, vol. 4, no. 1, pp. 1-31.
Shadbolt, N. (2010). Towards a pan EU data portal - data.gov.eu. European Commission: 39.
TNO (2011). Open Government: international policy analysis and recommendations for Dutch policy [Open Overheid. Internationale beleidsanalyse en aanbevelingen voor Nederlands beleid]. T. v. d. Broek, N. Huijboom, A. v. d. Plas, B. Kottering and W. Hofman. Delft, TNO: 90.

Brave new open data world? (138)
Stefan Kulk, Bastiaan van Loenen [paper: refereed IJSDIR article]

There is a growing tendency to release all sorts of data on the Internet.
The greater availability of interoperable public data catalyses secondary use of such data, which leads to growth of information industries and better government transparency. Open data policies may at the same time be in conflict with the individual’s right to information privacy as protected by the EU Privacy Directive. This directive sets rules for the processing of personal data. Technological developments and the increasing amount of publicly available data are, however, blurring the lines between non-personal and personal data. Open data does not seem to be personal data at first glance because it is anonymised or aggregated. However, it may become personal data when combined with other publicly available data. In this article, we argue that these developments extend the reach of EU privacy regulation to open data and may obstruct the implementation of open data policies in the EU.

Parallel Session 3.5 (Room 2104B)
Basic and Applied Research III

Towards an Undistorted Global Web Map Visualization: A new Voronoï-Icosahedron Tessellation Approach on the Ellipsoid (199)
Reda Yaagoubi, Mir-Abolfazl Mostafavi

Nowadays, the use of web mapping is growing rapidly thanks to the development of several geospatial visualization tools that are open to the general public. The most important web visualization tools (such as Google Maps and Bing) use the Web Mercator Projection (WMP). Moreover, the tessellation method used with this projection system is based on quadrilaterals that correspond to longitude and latitude on a spherical representation of the Earth. Therefore, the use of the WMP system with this kind of quadrilateral tessellation on the sphere generates large systematic distortions, particularly near or at the polar regions. Hence, the WMP and similar projection systems prove to be very limited in those regions, especially if professional and semi-skilled workers need to visualize geospatial information precisely and accurately.
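The distortion the authors refer to can be quantified directly: the point scale factor of the spherical Mercator projection is 1/cos(latitude), so features are inflated roughly twofold at 60° and more than fivefold at 80°. A minimal illustration (not taken from the paper):

```python
import math

def mercator_scale_factor(lat_deg: float) -> float:
    """Point scale factor of the spherical Mercator projection at a given
    latitude: k = 1 / cos(lat). Map distances are inflated by this factor
    relative to the equator, growing without bound toward the poles."""
    return 1.0 / math.cos(math.radians(lat_deg))

for lat in (0, 45, 60, 80):
    print(f"lat {lat:>2} deg: scale = {mercator_scale_factor(lat):.2f}")
```

At 80° latitude a feature is drawn almost six times too large, which is the core of the polar-region limitation described above.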
In order to efficiently and effectively portray geospatial information, it is necessary to define and develop an improved worldwide web mapping system that is ellipsoid-based, multi-resolution, seamless, low-distortion, consistent and multipurpose. To achieve this goal, a tessellation method is needed that allows storing and presenting pre-rendered geographic information while maintaining undistorted visualization regardless of the location, the type and the level of detail of the geographic information to be viewed. In this work, we present a novel tessellation approach to overcome the weaknesses of existing tessellation methods used in web mapping. Our proposed approach combines an Icosahedron tessellation with the corresponding Voronoï diagram. On the one hand, the Icosahedron tessellation produces triangular faces that can be subdivided recursively, generating a hierarchical structure based on very nearly equilateral triangles whatever the position on the ellipsoid. On the other hand, the Voronoï diagram corresponding to each level of detail of the Icosahedron tessellation is also characterized by its seamless global coverage. This Voronoï-Icosahedron tessellation is powerful for indexing both raster data (such as remote sensing images and aerial photos) and vector data. Furthermore, it manages in an effective manner the topological relations among the objects to be displayed. Hence, this novel tessellation approach will guarantee consistent geospatial data visualization on the web and across the globe. It will also support displaying both raster and vector data in a clear and concise manner to all users. Thereafter, raster and vector data indexed by the Voronoï-Icosahedron tessellation are projected using a local perspective projection system that contributes to undistorted visualization whatever the location of the geographic data on the globe.
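The recursive subdivision underlying such a hierarchy can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: each triangle of an icosahedron face is split into four children, with new vertices re-projected onto the unit sphere (the paper works on the ellipsoid, which is omitted here for simplicity).

```python
import math

def normalize(p):
    """Project a 3-D point radially onto the unit sphere."""
    n = math.sqrt(sum(c * c for c in p))
    return tuple(c / n for c in p)

def midpoint(a, b):
    """Edge midpoint, re-projected onto the sphere."""
    return normalize(tuple((x + y) / 2.0 for x, y in zip(a, b)))

def subdivide(tri, depth):
    """Recursively split a spherical triangle into 4 children;
    depth levels yield 4**depth nearly equilateral triangles."""
    if depth == 0:
        return [tri]
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    out = []
    for child in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
        out.extend(subdivide(child, depth - 1))
    return out

# One icosahedron face (vertices on the unit sphere), subdivided 3 levels:
phi = (1 + math.sqrt(5)) / 2
face = tuple(normalize(v) for v in ((-1, phi, 0), (1, phi, 0), (0, 1, phi)))
tris = subdivide(face, 3)
print(len(tris))  # 64 triangles for this face (20 * 64 globally)
```

Because every new vertex is pushed back onto the sphere, the children stay close to equilateral at every level, which is the property the abstract relies on for low-distortion indexing.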
Shape Based Map Query Using Undirected Graphs (192)
Adel Moussa, Naser El-Sheimy

The past decade witnessed increasing interest in spatial information systems that can efficiently incorporate and manipulate the spatial component of information. Depending on the application, these information systems exhibit different levels of spatial coverage, referencing, addressed detail and accuracy. Successful integration between these different representations is a key enabling factor for benefit maximization. In this research, we propose an algorithm for querying a map to find another map based on the shape of the enclosed objects and the relative spatial relationships between these objects when no common reference system exists for both maps. Among the challenges facing this search process is the fact that object correspondence is not guaranteed, as either map may be missing some objects or contain extra objects not present in the other. The differing accuracy of the two maps is another obstacle to a successful search, as exact matching cannot be expected under this condition. The proposed algorithm uses a rotation-invariant harmonic representation of the object boundaries to describe each object. A fixed number of the low-frequency harmonics are used as object descriptors, while the rest of the higher harmonics are ignored to emphasize the main shape characteristics and limit the effect of the differing accuracies of the maps. The objects of the two maps are matched based on their descriptors to find the correspondences that exceed a matching threshold and are given an initial matching score. Matched objects that have fewer matching candidates are given a higher rank. For these individual matches to contribute to a global match, an undirected fully connected graph is formed to represent the relative spatial distances between these objects. The nodes of this graph are the matched objects and the links represent the distances between the object centers.
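The rotation-invariant harmonic representation can be illustrated with a small sketch of the general technique (an assumption about the approach, not the authors' exact formulation): treating boundary points as complex numbers, a rotation multiplies every point by e^(i*theta) and hence every Fourier coefficient by the same unit factor, so the coefficient magnitudes are unchanged.

```python
import cmath
import math

def harmonic_descriptor(boundary, k=8):
    """Rotation-invariant shape descriptor: magnitudes of the first k
    low-frequency Fourier coefficients of the boundary treated as a
    complex signal z = x + iy. The DC term (f = 0), which only encodes
    translation, is discarded; higher harmonics are ignored to suppress
    accuracy differences between maps."""
    n = len(boundary)
    z = [complex(x, y) for x, y in boundary]
    coeffs = []
    for f in range(1, k + 1):
        c = sum(z[t] * cmath.exp(-2j * math.pi * f * t / n)
                for t in range(n)) / n
        coeffs.append(abs(c))
    return coeffs

# A square and the same square rotated 30 degrees yield the same descriptor:
square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
th = math.radians(30)
rot = [(x * math.cos(th) - y * math.sin(th),
        x * math.sin(th) + y * math.cos(th)) for x, y in square]
d1 = harmonic_descriptor(square, 3)
d2 = harmonic_descriptor(rot, 3)
assert all(abs(a - b) < 1e-9 for a, b in zip(d1, d2))
```

Descriptors like these can then be compared with a simple distance threshold to produce the initial matching scores described above.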
The two-vertex cliques of this graph are matched with the objects of the other map, starting from the higher-ranked objects, to obtain successful edge matches with higher matching scores. The algorithm then continues matching the higher-vertex cliques until no increase in matching score is achieved. The query result is the match with the highest matching score. The algorithm is tested using a data set of an area of the city of Calgary, where different modified maps are queried against each other. The results show the significance of the proposed algorithm under different conditions.

The Interoperability Challenge of using Spatial Data in Mathematical Simulation Models (220)
Markus Prossegger, Peter Bachhiesl
Carinthia University of Applied Sciences, Dept. of Network Engineering and Communication, Klagenfurt, Austria (m.prossegger@cuas.at)

I. Objective of the paper

The mathematical models of state-of-the-art network optimization and simulation techniques are based on network graphs, consisting of vertices (i.e. points of interest) and pairwise joining edges between them. This work is about the missing link between the real world and the mathematical modelling: the weighted network graph based on spatial data. The graph is generated using spatial polygon data describing the land use of an area on the one hand, and spatial line and point objects describing existing infrastructure on the other. Based on this spatial data, originating from a number of hybrid sources, a rule-based expert system is used to construct a network graph as vital input to subsequent mathematical models. We focus on optimization models within the scope of telecommunication network construction.
The models are intended to minimize network construction costs (including underground work and cable laying costs) as well as to maximize the number of customers that can be connected to the communication network infrastructure. An instance of such a model using weighted graphs is the simulation and optimization engine for fiber optic communication networks described in Bachhiesl et al. (2004). There, the land use polygons are used to generate a graph originating from a cost raster describing the underground construction costs. While using a cost raster is a feasible way to generate a network graph representing network construction costs, a more sophisticated approach is needed to generate graphs that consider all kinds of real-world information and can be computed in a reasonable time. The present paper deals with graph generation using spatial and regulatory data to enable subsequent network optimization.

II. Details of the approach

The approach requires the availability of spatial polygon data describing the land use of the selected area. Each polygon covers a specific area and has a designated land use specified in its attributes. In this paper we use spatial data from the Austrian digital cadastre map, which consists of polygons describing different land uses in urban, suburban, rural, and provincial areas. Although this official map is revised and updated periodically, it still contains many topological errors. Using the semi-automatic identification and correction technique described in Prossegger et al. (2009, 2011), the polygon data is validated or corrected so as to be applicable to this approach. Based on these polygons, an initial graph, mainly describing the borders of the land uses, is created.
The initial graph is then revised and updated with the existing infrastructure, which can be existing copper or fiber-optic cables, usable (empty) ducts, leased lines, and infrastructure to be encoded as vertices (for example, vertical tunnels or masts of open lines). Using the expert knowledge of network constructors, encoded in our rule-based system, the revised graph is enhanced with additional crossings/projections or thinned out by removing non-required edges, and each edge is weighted by real-world network construction costs. The goal is to generate a connected network graph which represents an acceptable tradeoff between its size (number of edges and vertices) and its validity in relation to the corresponding real-world conditions. We will give detailed insight into our graph generation approach, followed by an analysis of the experimental graph generation and optimization runs. The paper is structured as follows: (1) Motivation, (2) Introduction to graph theory and network optimization, (3) Spatial data - sources and quality, (4) Details of the approach, (5) Experimental results.

Bachhiesl, P., Prossegger, M., Stoegner, H., Werner, J. and Paulus, G. (2004). "Cost optimal implementation of fiber optic networks in the access net domain", Proceedings of the International Conference on Computing, Communications and Control Technologies, 14-17 Aug 2004, Austin, Texas, pp. 334-349.
Prossegger, M. and Bouchachia, A. (2009). "Incremental Identification of Topological Errors in Spatial Data", Proceedings of the 17th International Conference on Geoinformatics, 12-14 Aug 2009, Paris, France, pp. 1-6.
Prossegger, M. and Bouchachia, A. (2011). "Incremental Semi-automatic Correction of Misclassified Spatial Objects", Proceedings of the Second International Conference ICAIS 2011, Klagenfurt, Austria, September 2011. Berlin, Heidelberg, New York: Springer, pp. 16-25.
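To illustrate how such a cost-weighted graph feeds subsequent optimization, here is a toy sketch (the land-use cost table is a hypothetical placeholder, not a value set from the paper; the real rule base encodes far richer expert knowledge): edges are weighted by length times a per-land-use digging cost, and a cheapest route is found with Dijkstra's algorithm.

```python
import heapq

# Hypothetical per-land-use digging costs (cost units per metre).
COST_PER_M = {"road": 40.0, "sidewalk": 25.0, "green": 10.0, "private": 90.0}

def edge_weight(length_m, land_use):
    """Construction cost of one edge: length times the land-use cost rate."""
    return length_m * COST_PER_M[land_use]

def cheapest_route(graph, src, dst):
    """Dijkstra over a weighted graph {node: [(neighbour, weight), ...]}.
    Returns the minimum total construction cost from src to dst."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

g = {
    "A": [("B", edge_weight(100, "green")), ("C", edge_weight(50, "road"))],
    "B": [("D", edge_weight(80, "sidewalk"))],
    "C": [("D", edge_weight(20, "green"))],
}
print(cheapest_route(g, "A", "D"))  # 2200.0: A->C->D beats A->B->D (3000.0)
```

The pruning described in the abstract (thinning out non-required edges) directly shrinks the search space of exactly this kind of optimization run.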
Dynamic GIS (273)
Bruce Westcott, Desmond Khor

Geospatial data is fuel that, when sparked by change on the earth's surface, drives the Dynamic GIS to exploit the wealth of content in the 5D Information Cloud. This keynote will evaluate geospatial market trends, including the evolution of remote sensing and the merging of geospatial technologies. There is now a synthesis of desktop, web and mobile applications with the ability to rapidly transform raw data into actionable information and deliver this information anywhere. This includes on-demand web-based geoprocessing, integrated vector- and raster-based spatial modeling, and change detection and data revision workflows based on the fusion of imagery, point cloud and GIS data, ultimately providing live feeds of event-specific, time-specific, and location-specific information about our changing world.

Parallel Session 3.6 (Room 2101)
Industry Showcase II: China and Quebec

Wuda Geoinformatics Co., Ltd.
KQ GEO Technologies Co., Ltd.
Eastdawn Corporation
Shaanxi Tirain Science & Technology Company Limited
Aerial Photogrammetry and Remote Sensing Co. Ltd. of China National Administration of Coal Geology (ARSC)
Satellite Surveying and Mapping Application Center, NASG
Xi'an Dadi Surveying and Mapping Co., Ltd.
CRIM
Fujitsu
Intelli3
Spatialytics inc.
Trifide Group
Université Laval
Québec City
Québec International
Ordre des arpenteurs-géomètres du Québec

Parallel Session 4.1 (Room 205A)
Spatially Enabling Government IV

New Open Standards for Emergency Response in Developed and Developing Nations (95)
Steven Ramage

This paper will describe how international open standards are continuing to evolve, providing expanded capabilities for the integration and communication of geospatial information. The authors show how these standards facilitate communication to support emergency and disaster management, with a particular focus on mobile Internet applications.
Throughout the stages of disaster management -- planning, preparedness, mitigation, response and recovery -- managers, responders and impacted citizens need to publish, discover, assess, access and use geospatial data across institutional and jurisdictional boundaries. Emergency planning and response involve similar requirements. Public and private organizations focused on reducing risk and loss find it necessary to routinely integrate spatial information from multiple sources and publish spatial information for diverse users. Over the last decade, a framework of open standards has come into wide use, and the geospatial standards component of this framework continues to expand and track the progress of more general-purpose information and communication technology standards. Mobile communications have motivated two recent standards in particular that provide emergency and disaster managers with opportunities, even in very poor regions, to gather information from and disseminate information to the public as well as first responders and relief workers. One standard provides software developers with a standard interface for geosynchronizing field-collected data across multiple repositories as well as other online users. This means that diverse clients and servers can be more easily employed to provide multiple users with a common operating picture. Without such a standard, it is very difficult for geospatial information updates, such as those from disaster relief field workers, to be made to one or more databases that need to contain reliable, up-to-date information. The second standard provides a standardized way for even the simplest location-aware mobile phones equipped with text messaging to send location information to other mobile phones or to call-in centers.
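The abstract does not name the second standard, but the description matches OGC Open GeoSMS, which embeds an RFC 5870 geo URI in an ordinary text message. A minimal parsing sketch (illustrative only; the real standards define additional fields such as altitude and uncertainty):

```python
import re

# Matches the latitude,longitude core of an RFC 5870 geo URI, e.g.
# "geo:45.508,-73.554", anywhere inside a plain text message.
GEO_URI = re.compile(r"geo:(-?\d+(?:\.\d+)?),(-?\d+(?:\.\d+)?)")

def extract_location(sms_text):
    """Return (lat, lon) from the first geo URI in the message, or None
    if no well-formed, in-range coordinate pair is present."""
    m = GEO_URI.search(sms_text)
    if not m:
        return None
    lat, lon = float(m.group(1)), float(m.group(2))
    if not (-90 <= lat <= 90 and -180 <= lon <= 180):
        return None
    return lat, lon

msg = "geo:45.508,-73.554\nNeed medical assistance near the old port"
print(extract_location(msg))  # (45.508, -73.554)
```

Because the payload is plain text, even the simplest SMS-capable phone can produce or relay such a message, which is precisely the low-end accessibility the abstract emphasizes.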
The paper will show how requirements submitted by emergency and disaster managers have played an important role in the development of such standards in the OGC, and it will also show the importance of the OGC's collaborative efforts with other organizations to promote adoption, implementation and deployment of the standards.

SEDAC collaboration with Land Atmosphere Near real-time Capability for EOS (LANCE) (159)
Sneha Rao, Mark Becker

Providing critical data to the community in near real time at the beginning of a disaster plays an important role in the efficient management and assessment of the disaster. The NASA/GSFC Land Atmosphere Near real-time Capability for EOS (LANCE) is an ongoing effort to integrate a variety of imagery and gridded data products. This includes near real-time satellite data from the Terra, Aqua and Aura missions combined with contextual information on population and other socio-economic information provided by SEDAC to assist in meeting the needs of the application user communities. These users often need data much sooner than routine science processing allows and are willing to trade science quality for timely access. The integrated system provides user access to all level-2 imagery products within 2.5 hours of observation, along with gridded population density data at 1 km and 5 km resolution for population densities in excess of 100 people/sq. km, via a web mapping service. This provides an overview of macro-scale issues for a wide range of purposes, from weather forecasting to monitoring natural hazards, and facilitates response to a situation within hours of an incident, enabling efficient allocation of resources. The Eyjafjallajökull eruption in Iceland and the tsunamis in Japan and Mexico are recent examples of applications that benefited from using readily accessible and timely LANCE data.
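As a toy illustration of the density filtering described above (the function and values are hypothetical and do not reflect SEDAC's actual service implementation), serving only cells above a population-density threshold looks like this:

```python
def mask_dense_cells(grid, threshold=100.0):
    """Keep only grid cells whose population density (people per sq. km)
    exceeds the threshold; other cells are masked out as None. A toy
    stand-in for server-side filtering of a gridded population layer."""
    return [[v if v > threshold else None for v in row] for row in grid]

# A tiny 2x2 density grid: only cells above 100 people/sq. km survive.
grid = [[12.0, 250.0], [480.0, 99.5]]
print(mask_dense_cells(grid))  # [[None, 250.0], [480.0, None]]
```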
As the event progresses, more detailed science-quality products (24-48 hours latency) available from other applications and sources should be used for research to fill in gaps at finer resolution.

The Victorian Emergency Management Continuum and the Benefits of Spatial Enablement (71)
Ged Griffin, Abbas Rajabifard, David Williams

We live our lives in the real world, and the real world has three key attributes that influence our lives: time, activity and space. Traditional approaches to emergency management have focused only on the key elements of prevention, response and recovery. This approach limits emergency management to the domains of time and activity. As a result it overlooks the need to explicitly focus on and build spatial enablement across all elements of emergency management. This paper examines the emergency management arrangements within Victoria, Australia, since 1983, and suggests that these arrangements are a continuum that is constantly evolving. The paper argues that the development of a common operating picture through spatial enablement and interoperability is a critical aspect of emergency management.

High resolution spaceborne imagery for emergency response through faster image processing and analysis using cutting-edge remote sensing algorithms (225)
David Dubois, Richard Lepage, Mathieu Benoit

The ever-growing number of life-threatening major natural disasters prompts the need for faster and more efficient crisis response from governments and agencies around the world. For more than ten years, the International Charter on Space and Major Disasters has provided means for authorized users to request satellite imagery to help emergency response teams when a disaster occurs. Each year a significant number of Charter activations take place, generating huge amounts of data to process. The most notable recent activations are those for the March 2011 earthquake/tsunami that hit northeast Japan and the January 2010 earthquake in Haiti.
Those two events alone generated tens of terabytes of data acquired by various aerial and spaceborne platforms. Useful maps took days to prepare by teams comprising hundreds of photo-interpreters manually inspecting the images. This paper aims to address the issues with current manual analysis methods for crisis map generation. The goal is to achieve semi-automated processing of large quantities of information extracted from images by a reduced number of people, enabling a faster and more efficient workflow. The focus is on the detection of buildings in pre-event and post-event images using object-oriented extraction and classification with minimal human intervention. Fast segmentation and multi-scale analysis are used to extract objects and features. These features are then used to classify objects as either building or other. A matching algorithm is then used to match objects in both images, and building damage can be estimated through feature differences. The case of the Haiti earthquake is used to demonstrate the usefulness of the proposed method.

Design and Development of SDI for State Government (17)
Mohamed Bualhamam, United Arab Emirates

The Emirate of Ras Al-Khaimah (RAK) is the fastest growing Emirate in the United Arab Emirates (UAE). The Emirate's strategy is to adopt modern technologies for enabling a better governance style and continuous improvement of the lifestyle of its citizens. RAK has realised that in the emerging information marketplace, geographic or geospatial information occupies a pre-eminent position. Spatial technologies for aerial digital image collection and processing, generating high-quality maps, GPS survey and processing, GIS databases and integration, and generating 3D city models have become very advanced and sophisticated. These and other emerging information technologies are allowing for the development of Spatial Data Infrastructures (SDI) that allow agencies to better monitor, manage and govern societies.
RAK has historically kept hard-copy records of its geospatial data. In recent years, the Emirate used AutoCAD mapping. This standalone system did not support the additional staff responsibilities resulting from transaction and record growth, as site plan and building permit applications increased by approximately 60 percent over the past year. RAK was beginning to discover the potential value of geoinformatics technology as a total data management system that fosters an efficient decision-making process. The Emirate established an SDI project; parcel mapping and other spatial data have been migrated from AutoCAD to GIS format, thus providing greater access to other GIS information. Maps and data will be available to the public through ArcIMS. This paper will look at how RAK has been able to justify and implement an enterprise-wide spatial database and advanced GIS technologies. The paper will also demonstrate how the Ras Al-Khaimah SDI project was planned and constructed. This will be shown by analyzing the implementation of the project and its benefits to the Emirate.

Parallel Session 4.2 (Room 204AB)
GEOIDE developments in geospatial technology

Automatic Image Rectification for Motion Analysis of Highway Traffic Surveillance Video (248)
Eduardo R. Corral-Soto, James H. Elder

Most automated traffic surveillance systems use inductive loops to estimate traffic conditions such as traffic density. The main drawbacks of this technology are the high installation and maintenance costs and the limited information it provides. In addition to inductive loops, highways in many urban areas are monitored by video cameras. For example, the Ministry of Transportation of Ontario monitors highways in the Greater Toronto Area with a network of video cameras.
This camera infrastructure could be used not only for low-cost automatic traffic density estimation but also for more complex tasks such as vehicle tracking and classification. One of the challenges in using highway camera data for traffic analysis is that the external parameters of each camera (pan/tilt/zoom) may change several times a day. Successful deployment of computer vision traffic analysis algorithms therefore depends upon reliable algorithms for automatic camera calibration. The work presented here focuses on the problem of rectifying highway images automatically for the purpose of motion analysis such as speed estimation. The highway lane dividers are of particular interest because they provide important information about the scene, such as parallelism and projective distortion cues that can be used to rectify the image, as well as motion direction priors that can be exploited to improve motion analysis. In our work we first propose a method to automatically extract lane divider patterns from highway images. We then present a novel algorithm to rectify highway images given the extracted lane dividers. Unlike other published methods, our system is capable of rectifying the images without the use of vanishing points or motion information. We demonstrate our approach on the estimation of motion parameters such as speed, computed on rectified highway images.

A new approach for segmentation of multi-resolution and multi-platform LiDAR data (251)
Zahra Lari, Ayman Habib

Over the past few years, LiDAR systems have been extensively used for the acquisition of high-accuracy three-dimensional spatial data. With the increasing quality, availability and affordability of multi-platform, multi-resolution LiDAR data, there is a growing demand for adaptive techniques for processing these data. Segmentation is the primary and fundamental step of an efficient LiDAR data processing procedure.
The objective of a segmentation approach is to cluster homogeneous regions and introduce some level of organization in the data before further processing. This paper proposes an adaptive segmentation approach for multi-resolution, multi-platform LiDAR data. In the first stage, the neighbourhood of each point is established using an adaptive cylinder definition. This neighbourhood definition takes into account local point density variations and surface trends. Afterwards, the segmentation attributes are computed based on the defined neighbourhood of each point. In order to efficiently cluster planar surfaces and avoid introducing ambiguities, the coordinates of the origin's normal projection onto the best-fitting plane of each point's neighbourhood are used as segmentation attributes. An octree space partitioning method is then applied to detect and extract the peaks in the attribute space. The detected peaks represent clusters of points with similar physical properties in object space. This clustering method dramatically increases the computational efficiency of the proposed segmentation procedure. Because it considers the local point density indices and the physical properties of the associated surfaces in each dataset, this approach is applicable to multi-resolution datasets acquired by different LiDAR platforms. Experimental results from multi-platform datasets (airborne and terrestrial) demonstrate the feasibility of the proposed approach for segmentation of LiDAR data.

Towards Understanding Of Urban Scenes: Recovering Pose And Structure Using Linear Constraints (245)
Ron Tal, James H. Elder

Databases such as Google Earth provide 3D models of most urban environments. These models can be refined and elaborated using building plans, photogrammetry and LiDAR methods. However, these are static models, whereas our cities are dynamic, living environments with people and vehicles moving about. How can we breathe life into 3D urban models to reflect these dynamics?
One possibility is to use video camera imagery, which provides snapshots of urban life at up to 30 frames per second. These dynamic data are strictly 2D, and so a major challenge is to "three-dimensionalize" them so that they can be interpreted in terms of our full 3D city models. Fortunately, urban environments often conform to the so-called "Manhattan" assumption, i.e., the main surfaces often align with an orthogonal 3D coordinate system. This constraint can in principle be used to recover 3D structure from a single image, up to a scale factor. An important part of this problem is to automatically compute the 3D attitude of embedded cameras relative to the underlying urban structure, using linear perspective cues in the image. Extracting lines from the image provides a more robust estimate of the 3D coordinate system of the Manhattan frame. A second important problem is to group line segments in order to recover quadrilaterals that correspond to building surfaces in the scene. In this work, we present methods for accurate 3D orientation estimation and recovery of quadrilateral surfaces. Pose estimation is performed in three stages: line-feature extraction, line-feature association and maximum-likelihood orientation estimation. In the first stage, lines are extracted using a probabilistic Hough transform that accurately propagates uncertainty in edge observations to the Hough domain. Lines are then selected using a greedy method that dynamically updates the Hough map by probabilistically subtracting votes corresponding to edges that belong to previously detected lines. In the second stage, 2D extended lines are associated with the overall 3D structure using a probabilistic mixture model. In the third stage, a maximum-likelihood estimate of the 3D orientation of the scene is obtained. The extracted line segments, together with their estimated 3D orientations, are used to generate valid hypotheses for rectangular facades in the scene.
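The first stage of the pose pipeline described in this abstract, Hough-domain line extraction followed by greedy peak selection with vote subtraction, can be sketched as follows. This is a minimal, deterministic vote-and-suppress variant for illustration only: it does not reproduce the authors' probabilistic uncertainty propagation, and all function names are our own.

```python
import numpy as np

def hough_lines(edge_points, img_shape, n_theta=180, rho_res=1.0):
    """Accumulate votes for lines in (theta, rho) space from 2D edge points."""
    h, w = img_shape
    diag = int(np.ceil(np.hypot(h, w)))             # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    n_rho = int(2 * diag / rho_res) + 1             # rho in [-diag, diag]
    acc = np.zeros((n_theta, n_rho), dtype=int)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in edge_points:
        # For each angle, the signed distance of the line through (x, y).
        rhos = x * cos_t + y * sin_t
        idx = np.round((rhos + diag) / rho_res).astype(int)
        acc[np.arange(n_theta), idx] += 1
    return acc, thetas, diag

def top_lines(acc, thetas, diag, k=2, suppress=5):
    """Greedy peak selection: take the strongest cell, then zero the votes
    around it (a crude stand-in for the paper's probabilistic subtraction)."""
    acc = acc.copy()
    lines = []
    for _ in range(k):
        t, r = np.unravel_index(np.argmax(acc), acc.shape)
        lines.append((thetas[t], int(r) - diag))
        acc[max(0, t - suppress):t + suppress + 1,
            max(0, r - suppress):r + suppress + 1] = 0
    return lines

# Synthetic edge map: the horizontal line y = 10 and the vertical line x = 20.
pts = [(x, 10) for x in range(50)] + [(20, y) for y in range(50)]
acc, thetas, diag = hough_lines(pts, (60, 60))
for theta, rho in top_lines(acc, thetas, diag):
    print(f"theta={np.degrees(theta):.0f} deg, rho={rho}")
```

On this synthetic input the two recovered peaks correspond to the vertical line (theta near 0, rho 20) and the horizontal line (theta near 90 degrees, rho 10), the kind of orthogonal line pair a Manhattan-frame estimator would then feed to orientation estimation.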
Modeling and evaluating trust for indoor positioning systems (85)
Ting Wei, Scott Bell

The Global Positioning System (GPS) provides accurate and ubiquitous positioning in outdoor environments; unfortunately, it fails to provide reliable positioning results in indoor settings. As a result, several supplementary techniques (Bluetooth, cellular, wireless internet (WiFi), Radio Frequency ID (RFID), Ultra Wide Band (UWB), etc.) have been used to provide positioning in settings where GPS does not function. However, the accuracy of the calculated results varies with the techniques and algorithms used, and system performance also differs across testing environments. As a result, users' responses and opinions regarding positioning results can differ. Furthermore, user trust, most closely associated with their confidence in the system, will also vary. For indoor positioning systems, trust is a relatively new concept. Most computing and engineering literature treats trust as synonymous with accuracy or usability; however, little research has considered the role of user-device interaction in trust. Understanding user trust therefore becomes important for achieving better system design. In our model of positioning trust, four main elements are used: 1. positioning calculation, 2. positioning source data, 3. the user, and 4. the graphical user interface (GUI). Trust can be increased by improving actual as well as perceived accuracy, which requires not only a nominal level of accuracy but also an interpretation by the user that is consistent with their knowledge, along with a clear interface. For evaluating trust, we do not consider inaccuracies caused by inadequate or inaccurate positioning source data, deficient techniques or algorithms, or an unclear GUI.
Because each of these can be optimized by designers before building an indoor positioning system, we focus on the inaccuracies caused by positioning calculation errors that cannot be easily predicted and controlled in a real-world setting. In addition, we argue that user trust does not simply increase with the perceived or actual accuracy of a system; on the contrary, trust changes with system performance from time to time and from setting to setting, at times independently of changes in accuracy. An experiment was designed to examine whether the sequence of location accuracy affects a user's trust in an individual positioning result as well as in the system overall. The simulated positioning system used for this experiment provides 10 priming positioning results at a specific nominal level of accuracy (the accuracy is controlled) before a group of rotating positioning results with random accuracy. The basic hypotheses are: 1. inaccuracy will negatively affect trust; 2. trust will be lower if the initial location information (the 10 priming positions) is deemed untrustworthy; 3. accurate locations will have no impact on the distrust of inaccurate locations. This presentation focuses on the design of the experiment and some preliminary results, which illustrate how users' responses vary among individual positioning results and how their trust in the system changes.

Automated Generation of Informed Virtual Geographic Environments Using GIS Data (209)
Mehdi Mekni, Bernard Moulin, Normand Bergeron [paper: refereed proceedings article]

In this paper, we propose an automated approach to generate Informed Virtual Geographic Environments (IVGE) using data provided by a Geographic Information System (GIS). The resulting IVGE provides a geometrically accurate and semantically enhanced spatial representation of the real world for visualization and simulation purposes.
Conventional VGE approaches are generally built upon a grid-based representation, raising the well-known problems of the limited accuracy of the localized data and the difficulty of merging data with multiple semantics. In contrast, our approach uses a graph-based topological model and provides an exact representation of GIS data. Moreover, our model can integrate, merge, and propagate several semantics, even when they spatially overlap. In addition, the proposed IVGE contains spatial semantics which can be enhanced through a geometric abstraction process. We illustrate this model with an application which automatically extracts the required data from standard GIS files and allows a user to navigate and retrieve information from the computed IVGE.

Automatic Extraction of Building Models from LiDAR Using the Minimum Bounding Rectangle Algorithm (250)
Mohannad Al-Durgham, Eunju Kwak, Ayman Habib

Today, the extraction of urban features from LiDAR data remains an active topic of research due to its complexity. Multiple algorithms have been proposed for the extraction of Digital Building Models (DBM) from airborne LiDAR data. However, many of these algorithms still require a level of user interaction. In addition, these algorithms do not address cases where various LiDAR datasets with differing point densities and average point accuracies are integrated into one dataset. In the latter case, careful attention should be paid to all stages of LiDAR data processing; in other words, each step taken towards the extraction of the DBM should be revisited. The algorithm starts by examining the matching quality between overlapping LiDAR strips using the Iterative Closest Projected Point (ICPP) algorithm. Next, a region-growing segmentation procedure is performed to extract planar surfaces; in this step, ground vs. non-ground separation is also performed. Afterwards, the modified convex hull algorithm is used to estimate the coarse boundary of the building rooftops.
Finally, the proposed recursive minimum bounding rectangle (RMBR) algorithm, together with a two-dimensional Boolean operator, is used to produce the final boundary. Examples of the RMBR final boundary are presented and their quality is examined against boundaries extracted by photogrammetric means. Finally, a proposal for future work discusses the possibility of expanding this algorithm to cover complex structures.

Parallel Session 4.3 (Room 2103)
Experiences & Case Studies II

Spatial enablement: including 'location' in your thinking, problem solving and decision making (111)
Dan Paull

It is widely known that location can add considerable value to the decision-making process. The challenge is to influence the process so that location becomes part of the thinking. Traditional business processes are not inherently spatial: they do not consider the location dimension. This is not surprising, as the technology and data necessary to support such a process have been expensive and difficult to master, and the information hard to source. So the first step is recognising that location can easily be used to improve business outcomes, and that the thinking underlying the process should include consideration of the 'where'. The second step is to be able to introduce location into the process itself, but to do so simply, quickly and cost-effectively. This paper explores two critical aspects of the achievement of 'spatial enablement' and how this is being achieved in Australia. The first is the central focus on address, the key to extracting value from location. Address is the one attribute that is virtually universal across business and government. It provides the link between complex spatial information and business applications. Once addresses are geocoded, a whole world of location opportunity is opened up. The governments of Australia have recognised the important role of address by establishing, across all governments, a framework for its management.
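The "address as the key" idea in this abstract can be sketched very simply: once an address resolves to coordinates, location questions reduce to coordinate arithmetic. The address table and the distance helper below are purely illustrative assumptions, not PSMA data or services.

```python
import math

# Hypothetical geocoded address table: address string -> (lat, lon) in degrees.
GEOCODED = {
    "1 Example St, Canberra ACT": (-35.2809, 149.1300),
    "10 Sample Rd, Sydney NSW": (-33.8688, 151.2093),
}

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def distance_between(addr1, addr2):
    """Geocode both addresses via the table, then measure the distance."""
    return haversine_km(GEOCODED[addr1], GEOCODED[addr2])

print(f"{distance_between('1 Example St, Canberra ACT', '10 Sample Rd, Sydney NSW'):.0f} km")
```

A business record carrying only an address string becomes spatially queryable the moment the address is geocoded; in practice this lookup would be a validation and geocoding web service rather than an in-memory table.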
The second part is the application of web services to allow the seamless shift from 'thinking location' to business processes 'using location'. Through the use of web services, location is applied to existing processes, workflows and applications in an easy and reliable way. Using address as the key entry point makes it possible to easily integrate existing business systems with a wide range of address management, validation, spatial directory look-up and geocoding services. By combining address and web services to bring 'location' into your thinking, problem solving and decision-making, it is possible to make spatial enablement a reality. PSMA Australia is a governments-owned public company limited by shares, with a focus on delivering national spatial datasets to the nation and ensuring that citizens achieve spatial enablement through the use of innovative technology and creative thinking.

SDI Past, Present and Future: A Review and Status Assessment (59)
Francis Harvey, Adam Iwaniak, Serena Coetzee, Antony K. Cooper [paper: refereed book chapter]

A spatial data infrastructure (SDI) is an evolutionary concept related to the facilitation and coordination of the exchange and sharing of spatial data and services. Since its initial use, the SDI concept has shifted emphasis from a focus on data sharing and coordination to one on supporting policy, from a top-down approach to a bottom-up approach, and from centralised to distributed and service-orientated approaches. Today, SDIs are part of the mushrooming of cloud-based and location-based services, neogeography, crowdsourcing, volunteered geographic information (VGI) and standards for collecting and sharing geographic information. What will the role of SDIs be as changes continue? What comes next for SDI development?
A reference point is the UN Economic and Social Council (ECOSOC) Programme on Global Geospatial Information Management (GGIM), which addresses key global challenges such as climate change, food and energy crises, peace operations and humanitarian assistance. For the success of such programmes, it is important to understand the development of the SDI concept. This paper offers an initial examination of differences in SDI developments in three countries on three continents. Our aim is to develop a scientifically grounded perspective on how GIS became SDI and continues to change. We use the analogy of human development stages to organise our description of the development of SDIs in Poland, South Africa and the United States of America (USA). First principles of SDIs are evident from the comparison of the evolution of SDIs in these three countries. These principles reflect the needs that information technology can fulfil: the need to support decisions, the need to share, the need to coordinate, the need for policy, the need to keep up with technological developments and the need for standards and specifications. Our assessment is that SDIs remain important and significant for public administration, and also for other actors, despite industry and technological advances, changing business models, VGI and neogeography activities. Web-based repositories provide geographic information for growing consumer-orientated applications, but the geographic information collected and maintained by public administrations will remain a driving force for developers requiring or wanting the reliability of authoritative geographic information.

Is SDI development waste of time and money for Pakistan? (24)
Asmat Ali, Munir Ahmad

One of mankind's greatest challenges has been to maintain an optimal natural environment.
To meet this challenge, reliable information, information systems and an Information Infrastructure (II) such as a Spatial Data Infrastructure (SDI) are considered vital. Indeed, attention needs to be paid to the development and application of Geographic Information Systems (GIS) tightly coupled with earth observation systems for solving problems such as climatic change, determination of landslide risks, planning of urban infrastructure and detection of environmental pollution. But the real challenge is access to the different kinds of spatial and non-spatial datasets held by different organizations, without which no GIS or Remote Sensing (RS) application can be developed. For example, to ascertain the climatic changes of a place, integrated spatial and non-spatial temporal data on several factors is required, including distance from the equator, annual average temperature and precipitation, elevation of the area, distance from the sea, and forest cover. A single organization cannot collect, update and deliver data in integrated form covering such factors, due to human, technical, financial and mandate constraints. This is one of the reasons why many countries are developing Spatial Data Infrastructures (SDIs) at different administrative levels, such as a National Spatial Data Infrastructure (NSDI) at the federal level, so that data can be shared among multiple organizations. Although SDI gurus argue that SDIs are a cost-effective and practical solution for data and information asset management, data governance, stewardship and sustainable development programs, governments, especially those of developing countries such as the Government of Pakistan (GoP), still doubt the benefits of an NSDI. Therefore, the question "Is SDI development a waste of time and money?" is viable in the Pakistani context.
To answer this question, the paper explores Pakistan's geospatial industry and its stakeholders, and the GIS and RS related projects being implemented for the well-being and social uplift of the masses. The results of the study carried out in this research reveal that implementation of Pakistan's NSDI will augment and support the country's e-government initiative, spatial and non-spatial data integration efforts, and the online delivery of geoinformation to stakeholders. The paper concludes that an NSDI for Pakistan is a dire need of the day and must be given serious thought in order to address multi-dimensional issues, including climatic threats.

The European Location Framework – How to Build a Working Infrastructure Based on Reference Data from National Sources (114)
Antti Jakobsson

EuroGeographics is a not-for-profit organization representing 56 national mapping, land registry and cadastral agencies (NMCAs) in 45 countries. It has long experience in building harmonized datasets based on its members' data. Currently we provide data for global and European usage through our products. However, harmonization based on different national standards and content is not easy, and NMCA resources are not increasing in this economic environment. The INSPIRE directive sets a new basis for rethinking how we can meet the interoperability challenge in Europe; how we can make it happen in reality for European reference data is a major challenge. Therefore EuroGeographics, with its partners, is proposing to build a European Location Framework. This paper discusses how it can be achieved from a technical perspective. Another dimension is the necessary political support, which is dealt with in another paper. Completed in February 2011, the eContentplus-funded European Spatial Data Infrastructure Network (ESDIN) project was a collaboration between 20 consortium partners to help prepare data for the INSPIRE Directive.
It was coordinated by EuroGeographics, and focused on the best way to use existing national spatial data infrastructures (SDIs) to create a European SDI. It successfully showed how data from European NMCAs can be harmonised to meet INSPIRE obligations whilst also addressing issues such as generalisation, quality evaluation, edge-matching and access control. An advantage of using official data from the national mapping and cadastral agencies is that they design their data for generic purposes. However, even with INSPIRE-compliant data, users can suffer from an unreliable reference if the pan-European or cross-border data is not:
• edge-matched correctly at national boundaries;
• quality assured;
• generalised consistently; or
• lifecycle managed effectively.
The realisation of the European Location Framework (E.L.F): the E.L.F is based on a set of specifications for reference data. These specifications support interoperability across resolutions and themes, and between countries, for topographic, administrative and cadastral reference data. The E.L.F will be the basis for the official framework providing the location information needed to geographically reference objects from other domains, allowing pan-European interoperability. The E.L.F is not a paper exercise: we need to build reference data services and ensure that these services are funded by Member States, the European Commission and users. It is also clear that EuroGeographics cannot do this by itself; we need an active community of users and other data providers, developers and service integrators. We have set up a task force to cement plans for the E.L.F. This task force is open to all organizations willing to contribute to the building of the E.L.F; in fact, half of the working groups are led from outside the EuroGeographics membership. Adoption of the E.L.F specifications is already planned at both global and regional levels.
EuroGeographics has already created a view service based on its existing products (www.eurogeoinfo.eu). A cloud-based geospatial reference data service is also envisaged (EuroGeoCloud).

Gazetteers as an SDI Indexing and Integration Mechanism: A Case Study of the UNSDI Gazetteer Framework for Social Protection in Indonesia (178)
Paul Box, Rob Atkinson, Suha Ulgen, Laura Kostanski

To address the complex, interwoven social, economic and environmental issues facing communities at local to global scales, information held by agencies working in different sectors and at different scales needs to be made more accessible. Much of this information has a critical spatial dimension, as it relates to a place. People describe and relate to the human and natural world through the use of place names. Thus gazetteers, or directories of place names with associated location information, play a critical role in spatially referencing or 'geocoding' information holdings. Gazetteers also play a critical role in enabling information about specific places held in multiple systems to be integrated using place names or identifiers. However, the systems containing the information to be integrated implement different approaches to spatial referencing using gazetteers. In addition, there are typically a number of gazetteers in use in any given scenario, including formal and, increasingly, informal crowd-sourced gazetteers. Consequently, it is time-consuming and expensive to find, access, interpret, transform and integrate information from different sources referenced using different gazetteers. These challenges are particularly significant when attempting to rapidly assimilate information about subtle and often rapid changes to a community's wellbeing. One such example is the assessment and monitoring of vulnerable populations to enable rapid social protection responses to global and local shocks.
Shocks such as the global financial crisis or a localised livestock disease can have a significant impact on the livelihoods of people living in or close to poverty. Social protection aims to buffer vulnerable populations from shocks, but requires rapid intervention based on up-to-date and fine-grained information. This paper describes a project that aims to improve access to, and the integration and use of, data held in different systems through the development of a framework to manage gazetteers. The gazetteer framework is intended to supplement existing national efforts to provide access to spatial data, providing a common mechanism for the registration and use of gazetteers at multiple scales. The project is being implemented in Indonesia with a focus on the use of gazetteers for social protection, but is intended to be a pilot for a broader initial capability of the UN Spatial Data Infrastructure. The paper provides a brief review of current gazetteer practice and identifies critical issues from an SDI perspective. It describes key aspects of the proposed gazetteer framework, including the ability to register gazetteers and to create an integrated index of place names from multiple gazetteers, exposed through common mechanisms. The framework will also enable gazetteer users to provide feedback to data providers and to explore social network views of gazetteer usage. The paper concludes with a discussion of the potential of the gazetteer framework to act both as a feature-level index of SDI data holdings and as an SDI bridging mechanism that enables the realisation of the SDI vision of re-use of information resources in multiple contexts.

Parallel Session 4.4 (Room 205B)
Legal, Economic and Institutional Challenges III

Europe Needs a Location Strategy (195)
Dave Lovell, Joep Crompvoets

EuroGeographics is a not-for-profit organization representing 56 national mapping, land registry and cadastral agencies (NMCAs) in 45 countries.
It has a successful track record of contributing constructively to the development of relevant European Union policies and geospatial services. EuroGeographics' harmonized pan-European products, based on its members' data, are used extensively within the European Commission as a basis for better understanding many diverse components of European society. EuroSDR is a not-for-profit organization linking the national mapping and cadastral agencies in Europe with research institutes and universities for the purpose of applied research in spatial data provision, management and delivery. This research-oriented organization undertakes applied research projects, hosts focused workshops, publishes an official series of reports, delivers an annual series of short distance-learning courses, contributes to the development of specifications and standards by OGC, ISO and CEN, and participates in the drafting of the INSPIRE implementing rules. The INSPIRE directive has been very successful in stimulating the development of national spatial data infrastructures in, and beyond, the 27 Member States, but does little to integrate these at a European level. The GMES regulation, applicable to the EU27, recognizes the importance of avoiding duplication of national geospatial information. Following the successful completion of the eContentplus-funded European Spatial Data Infrastructure Network (ESDIN) project, EuroGeographics, with its partners, is proposing to build a European Location Framework (E.L.F.), which is described in a separate paper. At much the same time, the Joint Research Centre of the European Commission has proposed a work programme to deliver the European Union Location Framework (EULF). It is clear that the European Commission has recognized the importance of location.
For example, on 31 August 2011, in her response to a European Parliamentary Question, Ms Viviane Reding acknowledged the importance of geospatial data to environmental management and economic activity when she said: 'Geolocation data are a strong driver for the development of a new generation of mobile internet services. Location aware services can offer new benefits for users and society, from optimizing transport and reducing environmental impact to new ways of location aware advertising.' Many countries in Europe, and beyond, are in the process of adopting, or have already adopted, Location Strategies. This paper discusses the importance of political support and sustained funding for achieving better coordination of geo-information at the European level, and poses the question: 'Is now the time to agree on a European Location Strategy?'

An overview on the status of SDI relevant issues in PC-IDEA member countries (96)
Álvaro Monett Hernández, Paula McLeod

In order to advance SDI in the Americas, the Permanent Committee for Geospatial Data Infrastructure of the Americas (PC-IDEA) was established in 2000, based on a recommendation of the 6th United Nations Regional Cartographic Conference for the Americas (UNRCC-A). A working group on planning (GTplan) was created, following a meeting of the PC-IDEA Executive Board in New York in May 2010, to respond to the recommendations of the 9th UNRCC-A. This working group is composed of representatives of México, Cuba, Brazil, Guatemala, Colombia, Canada and Chile. One of the principal products in the workplan developed by GTplan for 2011 is the elaboration of a diagnosis of relevant SDI issues in the member countries, including capacity building, standards and technical specifications, best practices, innovations in national cartographic and geographic institutes, and SDI assessment.
In order to carry out this diagnosis, a questionnaire was developed to address each topic of importance for SDI development (five countries of the working group were each in charge of one of the mentioned themes). The structure and contents of the questionnaire were discussed at a meeting of the PC-IDEA GTplan last April in New York. After that, there was a brief period for reaching final consensus and creating the informatics tool for administering the questionnaire to the member countries of PC-IDEA. Between July and September 2011, a process of consultation was conducted with the support of the national PC-IDEA counterparts and the regional representatives of the Caribbean, North, Central and South America, and the questionnaire was sent to the 24 PC-IDEA member nations for their response. The results were processed in October and November, producing databases, pivot tables and charts. The idea is to leverage the database generated from the survey for temporal comparative analysis in the coming years. This paper presents a summary of the main results of the survey, including the responses of 20 out of the 24 member countries of the Permanent Committee. These results were the basis for the formulation of specific working plans to be carried out from 2011 to 2013, mainly in capacity building, best practices, standards and institutional issues.

From evolution to operation: challenges in making a SDI work and develop in everyday life (57)
Martin Salzmann

SDIs have evolved from the evolutionary phase, through the development phase, into the operational phase. In this paper we discuss our experiences in operating an SDI. In the past decade we have seen in the Netherlands a great deal of scientific and policy interest in the development of SDIs. In Europe the INSPIRE directive has been a driving force; at the national level the eGovernment programs have been fuelling interest in SDIs.
We have experienced that, now that policy interest in SDIs is slowly decreasing, the developments are actually just as challenging: keeping our SDI up to date, up and running, and satisfying our increasing users’ needs. We will specifically focus on the challenges we experience in the Netherlands:
- Institutional: starting from a network approach, we see an increasing concentration of operators with the aim of achieving maximum efficiency and effectiveness. This affects the organization of the SDI.
- Financial: the financial crisis is affecting budgets. How can we improve and sustain the SDI with limited resources, and how can we team up with other initiatives in society?
- Integration: the spatial element of the SDI becomes embedded in the overall data infrastructure. It is not easy to convince the eGovernment community that spatial is special.
- Political: a clear shift from supplying a data infrastructure to a service infrastructure that accommodates e-transactions. This is happening not only at the national level, but increasingly at the regional level as well (European Digital Agenda).
- Formal and informal data: both professional users and citizens expect to operate in a readily available public data infrastructure. If this infrastructure is not present, user domains (via VGI mechanisms) or professional parties (e.g. Google) will create parallel ‘informal’ data infrastructures. How do these affect or contribute to an SDI?
- User demands: users focus on the quality of service. The quality of the underlying data is taken for granted, but still requires a major effort to maintain.
- Interoperability: at the technical level interoperability has largely been achieved; we now see that interoperability at the semantic and process levels is becoming paramount.
In our contribution we will illustrate how we operate our SDI taking these developments into account.
Our objective is to create an SDI (integrated in a DI) that is sustainable and at the same time responsive to numerous developments in society. This also requires that the measures used to assess the maturity of SDIs be reconsidered.

Practical Geospatial Policies: Resolving Operational Issues to Optimize Your SDI (241)
Ed Kennedy, Cynthia Mitchell, Simon Riopel

Canada’s GeoConnections Program is coordinating the development of the Canadian Geospatial Data Infrastructure, or CGDI, one of the more advanced spatial data infrastructure (SDI) implementations in the world today. Since 1999, GeoConnections has focused on developing the CGDI’s technological framework, standards, communities of practice, framework datasets and thematic applications, and has enabled industry to build SDI products and services. Now in its third five-year mandate, GeoConnections is facilitating widespread utilization of the CGDI and continuing to identify and remove barriers to information access and use. This is being addressed at a practical level through the development of operational policies: practical instruments such as guidelines, procedures and manuals that address the lifecycle of geospatial data (i.e. collection, management, dissemination, and use) and the key operational and legal issues that can impede the functionality of an SDI. To support the advancement of geospatial operational policies, GeoConnections has retained the services of a group of experts under the leadership of Hickling Arthurs Low Corporation, Canada's premier consultancy specializing in innovation policy and economics for organizations using or supporting science and technology. This presentation will describe the range of operational policy issues impacting SDI implementation and use being addressed in this effort, such as privacy, archiving and preservation, intellectual property, licensing and copyright, and data sharing.
At this presentation you will learn about the breadth of this operational policy work and specific recent accomplishments, including newly released guides on:
• Volunteered Geographic Information – Introduces and examines needs, issues, lessons learned and good practices in geospatial operational policies that help enable VGI (e.g., copyright interests of VGI contributors and recipients, data quality and assessing the credibility of contributors, liability potential for contributors and recipients).
• Cloud Computing – A primer that examines the use of cloud computing in the geospatial domain, and operational policies and best practices pertaining to cloud computing solutions, including security, privacy, regulation and standards.
• Privacy – An examination of the legal context within which geospatial privacy issues may arise, and guidance on making decisions related to the collection, use, disclosure and retention/disposition of geospatial information that is deemed to be personal.
These guides represent some of the ground-breaking work that Canada has undertaken to resolve important legal, socio-economic and institutional challenges that all nations face in the effective development and implementation of SDI.

Parallel Session 4.5 (Room 2104B)
Basic and Applied Research III

Québec mineral potential evaluation program: hybrid fuzzy logic modelling and investment impact assessment (28)
Daniel Lamothe, Charles Maurice

The Ministère des Ressources naturelles et de la Faune du Québec has, since 2005, undertaken a mineral potential assessment program aimed at targeting promising areas for the discovery of significant metal deposits in the province. The program relies on SIGEOM, one of the most important public geological databases in the world. Data processing involves weighting and combining various geological parameters relevant to the metallogenic model under investigation in order to generate a predictive mineral potential map.
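The combination of fuzzified evidence layers into a single potential score can be sketched as follows. This is a minimal illustration of common fuzzy-overlay operators (fuzzy AND, OR, and the gamma operator), not the ministry's actual SIGEOM workflow; the membership values, gamma value and threshold are assumptions chosen for the example.

```python
# Sketch: combining fuzzified evidence layers for a mineral potential map.
# Membership values, gamma, and the target threshold are illustrative
# assumptions, not values from the Quebec program.

def fuzzy_and(values):
    # Most conservative combination: limited by the weakest evidence.
    return min(values)

def fuzzy_or(values):
    # Most optimistic combination: driven by the strongest evidence.
    return max(values)

def fuzzy_gamma(values, gamma=0.9):
    # Compromise between the fuzzy algebraic product and algebraic sum.
    product = 1.0
    for v in values:
        product *= v
    algebraic_sum = 1.0
    for v in values:
        algebraic_sum *= (1.0 - v)
    algebraic_sum = 1.0 - algebraic_sum
    return (algebraic_sum ** gamma) * (product ** (1.0 - gamma))

# One grid cell with three fuzzified evidence layers, e.g. proximity to
# faults, favourable lithology, geochemical anomaly (hypothetical values).
cell = [0.8, 0.6, 0.9]
score = fuzzy_gamma(cell)
# Cells whose combined score exceeds a chosen threshold become targets.
is_target = score > 0.5
```

In a real workflow this combination would run per raster cell over whole evidence grids; the choice of gamma controls how strongly weak evidence suppresses the combined score.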
The weight of each parameter in the model is calculated as a function of its spatial relationship to known deposits. Parameter combinations are performed using a constrained fuzzy logic approach. The selection of a significant and predictive threshold within the spectrum of fuzzy values produced by the combination process allows the delineation of target exploration zones. The data modelling is done using the ModelBuilder tool in ArcGIS 9.3, and the process for evaluating the metallogenic models is entirely automated. The ability to quickly test new parameters or different calibration sets represents a significant improvement to the mineral potential assessment process. Releasing the targets through Quebec’s GESTIM online claiming system has ensured a swift response from the mineral exploration community, while making it possible to gauge the monetary impact of subsequent exploration efforts on the targets.

Dealing with Uncertainty in Coastal Risk Assessment: Fuzzy Representation of Coastal Risk Zones (120)
Amaneh Jadidi Mardkheh, Mir Abolfazl Mostafavi, Yvan Bédard

This paper aims to deal with the uncertainties present at different levels of information in coastal risk assessment, from data acquisition and analysis to the modeling and representation of risk zones. Risk assessment techniques require integrating several sources of data to provide a coherent and complete vision of the potential risk associated with the phenomenon under study. This includes assessing possible damage to environmental, economic and social features as well as human-life losses. This fundamental information can then be analysed at a higher hierarchical level to choose appropriate actions and strategies to protect the region, its environment, the people and their assets in an optimal way. Complete and high-quality data and information are mandatory in this regard to perform accurate assessment and efficient decision-making.
Typically, data are collected and analysed by different authorities or organizations, at different levels of resolution and quality. Uncertainties exist and propagate from the collection, capture, storage, analysis and representation of spatial data through to their interpretation and the decision-making process. Uncertainty can appear as vagueness in boundary zones, ambiguity in linguistic terms, fuzziness in process interpretation, doubt about the existence of a spatial object, or a combination of these. Today, ignoring uncertainty in data analysis and decision-making procedures is no longer considered acceptable practice. One dimension of uncertainty in coastal risk assessment originates from the representation of risk zones. Traditionally, risk zones are represented by polygons defined according to stakeholders’ interests or national census segments. Polygons are separated by well-defined boundaries, while the degree of risk is attributed homogeneously within each polygon on the basis of multiple criteria. However, the way the shape of the polygons is defined differs among experts depending on their objectives. Likewise, the method used to calculate and assign the degree of risk to each polygon is a challenging issue. Moreover, in reality, risk values change continuously from one point to another. Thus, representing the transition from one zone to another with a crisply defined boundary gives decision-makers misleading insights into the degree of risk of each region. Furthermore, risk has hierarchical characteristics due to the inherent needs and interests of participants working at different organizational levels. For instance, their interest may be in an object such as a port or certain buildings, or in something more global such as a census tract, a city, or even a state or country. In this regard, risk zones are complex objects with uncertain boundaries, resulting from the fact that their definitions are vague and multi-scale.
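The contrast between a crisp polygon boundary and a gradual fuzzy transition can be sketched with a simple membership function. The distance thresholds below are illustrative assumptions for the example, not values from the Gaspé case study.

```python
# Sketch: fuzzy membership for a "high coastal risk" zone, replacing a
# crisp polygon edge with a gradual transition. Thresholds are hypothetical.

def risk_membership(distance_to_shore_m, full_risk=50.0, no_risk=300.0):
    """Degree of membership in the high-risk zone, in [0, 1].

    Locations within `full_risk` metres of the shore are fully at risk;
    membership decays linearly to zero at `no_risk` metres.
    """
    if distance_to_shore_m <= full_risk:
        return 1.0
    if distance_to_shore_m >= no_risk:
        return 0.0
    return (no_risk - distance_to_shore_m) / (no_risk - full_risk)

# The risk value now varies continuously from one point to another,
# instead of jumping at a polygon boundary.
samples = {d: risk_membership(d) for d in (10, 100, 200, 400)}
```

A crisp zoning would assign every point inside the polygon the same risk; here the membership degree itself communicates how uncertain the boundary is.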
The flexibility of fuzzy set theory in expressing risk values, consistent with human reasoning, together with its capacity for dealing with uncertainty, suggests it as an efficient solution for the spatial representation and communication of risk. This paper proposes an algorithmic approach based on fuzzy set theory to deal with the problem of ill-defined boundaries of risk zones. A fuzzy object aggregation approach is then proposed for multi-scale fuzzy representation of risk zones. Finally, the proposed approach is applied to coastal risk representation in the Gaspé region of Eastern Quebec, Canada, for validation purposes.

An Ontological Approach to Integrated Groundwater and Surface Water Representation (171)
Boyan Brodaric

Pressing scientific and societal issues, coupled with the increasing availability of water data, require an integrated approach to surface water and groundwater representations. However, such representations are at present largely disconnected in Spatial Data Infrastructures, because the associated data standards are being developed independently. This leads to incompatibilities between the data, and creates barriers to their joint use in important activities such as determining water balances. A starting point for integrated representation involves the development of a shared conceptualization. This work shows that five basic concepts are minimally shared between the surface water and groundwater domains, and that these concepts can be represented ontologically using six classes and relations within the DOLCE foundational ontology. The classes and relations constitute a minimum suite of common ontological primitives, and also serve as a conceptual bridge between the domains. This in turn can help the design and alignment of data transfer schemas, and the resulting ontology fragment can contribute to the foundations of hydrology ontology development.
Seismic Risk Assessment in Ottawa, Canada: An integrated desktop/mobile GIS application for building inventory (229)
Amid ElSabbagh, Mike Sawada, Murat Saatcioglu, S. Kate Ploeger, Emmanuel Rosetti, Miroslav Nastev

An activity is outlined for creating a building inventory for Ottawa that contributes to the adaptation of an existing standardized tool, Hazus-MH, for seismic risk assessment of Canadian urban centers. Specifically, research taking place within the Canadian Seismic Research Network (CSRN) faces the daunting task of collecting individual structural variables for hundreds of thousands of buildings within Canadian cities in order to model earthquake risk and recommend mitigation and response measures. In the City of Ottawa alone (including suburbs), more than 200,000 buildings require assessment, a task that demands the development and use of innovative data collection techniques. For this purpose, we have developed a system that seamlessly integrates desktop GIS, Google APIs and the mobile Android SDK. This system, programmed within the .NET environment, integrates Google Street View within desktop ArcGIS for in-lab virtual assessment of buildings. For on-site assessment, the Android SDK was used to create a custom app that communicates seamlessly with the desktop add-in via a common XML schema. Our system substantially increases the efficiency of data collection and has allowed us to collect structural information on thousands of buildings within a very short time span compared to manual sidewalk survey methods. We present the system architecture as well as the overall findings regarding the building stock in Ottawa and its implications for seismic risk. We illustrate the use of the collected data within seismic risk assessment in Ottawa. Our validation techniques are unique and adaptable to other urban centers within the CSRN project and Canada.
Our open and novel data collection system can be adapted for other spatial data collection endeavours within urban environments.

Parallel Session 4.6 (Room 205C)
3DGeoInfo: 3D Indoor/Outdoor Navigation

Modelling 3D Topographic Space Against Indoor Navigation Requirements
Gavin Brown, Claus Nagel, Sisi Zlatanova and Thomas H. Kolbe

Indoor navigation is growing rapidly, with widespread developments in the collection and processing of sensor information for localisation and in routing algorithms calculating optimal indoor routes. However, there is a general lack of understanding about the requirements for topographic space information to be used in indoor navigation applications, and thus about the suitability of existing information sources. This work presents a structured process for the identification of topographic space information, starting with use cases that support the complete capture of requirements, thus allowing existing models to be evaluated against these requirements and conceptual semantic and constraint models to be developed. A proposal is put forward for implementing the semantic and constraint models as a CityGML Application Domain Extension (ADE) that will be integrated into the Multilayered Space-Event Model (MLSEM), a flexible framework supporting all indoor navigation tasks.

Indoor Localization Using Wi-Fi Based Fingerprinting and Triangulation Techniques for LBS Applications
Solomon Chan and Gunho Sohn

The past few years have seen widespread adoption of outdoor positioning services, mainly GPS, incorporated into everyday devices such as smartphones and tablets. While outdoor positioning has been well received by the public, its indoor counterpart has been mostly limited to private use due to the higher cost and complexity of setting up the proper environment. The objective of this research is to provide an affordable means for indoor localization using wireless local area network (WLAN) Wi-Fi technology.
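One common Wi-Fi localization technique, received-signal-strength (RSS) fingerprinting, can be sketched as a nearest-neighbour match in signal space. The access point names, RSS values (in dBm) and grid positions below are hypothetical, and this is a generic illustration of the technique rather than the authors' implementation.

```python
# Sketch of Wi-Fi RSS fingerprinting: match a live RSS reading against a
# pre-recorded fingerprint database by Euclidean distance in signal space.
# AP names, dBm values and positions are hypothetical.
import math

# Offline phase: RSS fingerprints recorded at known indoor positions (metres).
fingerprints = {
    (0.0, 0.0): {"AP1": -40, "AP2": -70, "AP3": -80},
    (5.0, 0.0): {"AP1": -55, "AP2": -50, "AP3": -75},
    (0.0, 5.0): {"AP1": -60, "AP2": -72, "AP3": -50},
}

def locate(reading, db=fingerprints):
    """Return the recorded position whose fingerprint is closest in RSS space."""
    def dist(fp):
        # Unseen APs are treated as a weak -100 dBm floor.
        return math.sqrt(sum((fp[ap] - reading.get(ap, -100)) ** 2 for ap in fp))
    return min(db, key=lambda pos: dist(db[pos]))

# Online phase: the user's device reports its current RSS vector.
position = locate({"AP1": -42, "AP2": -68, "AP3": -79})
```

A distance-based triangulation approach would instead convert each AP's RSS to a range estimate via a path-loss model and intersect the ranges; fingerprinting avoids that model at the cost of an offline survey.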
We experimented with two different Wi-Fi approaches to locating a user. The first method matches the pre-recorded received signal strength (RSS) from nearby access points (APs) to the data transmitted from the user on the fly; this is commonly known as “fingerprinting”. The second is a distance-based triangulation approach that uses three known AP coordinates detected on the user’s device to derive the position. The combination of the two approaches enhances the accuracy of the user position in an indoor environment, allowing location-based services (LBS) such as mobile augmented reality (MAR) to be deployed effectively indoors. The resulting RSS map can also prove useful to IT planning personnel for covering locations with no Wi-Fi coverage (i.e. dead spots). The experiments presented in this research help provide a foundation for the integration of indoor with outdoor positioning to create a seamless transition experience for users.

Enhancing the Visibility of Labels in 3D Navigation Maps
Mikael Vaaraniemi, Martin Freidank and Rüdiger Westermann

The visibility of relevant labels in automotive navigation systems is critical for orientation in unknown environments. However, labels can quickly become occluded; for example, road names might be hidden by 3D buildings, and consequently the visual association between a label and the feature it references is lost. In this paper we introduce five concepts which guarantee the visibility of occluded labels in 3D navigation maps. Based on the findings of a pre-study, we determined and implemented the two most promising approaches. The first approach uses a transparent aura to let the label shine through occluding objects. The second lets the feature, e.g. the roads, glow through the 3D environment, thus re-establishing the visual association.
Both methods leave the 3D world intact, preserve visual association, retain the labels’ readability, and run at interactive rates on medium-class hardware. Finally, a concluding user study validates our approaches for automotive navigation. Compared to our baseline (simply drawing labels over occluding objects), both approaches perform significantly better.

Parallel Session 4.7 (Room 2101)
Industry Showcase III

XYZns
Nexteq Navigation
MRF Geosystems Corporation
Lim Geomatics Inc.
LiDAR Services International Inc.
Proceed solutions
Groupe Systeme Foret
Facet Decision Systems
Deep Logic Solutions Incorporated
Accuas
Geo-Plus
Sokkia Corporation
Géomatique Verville
PCI Geomatics

Parallel Session 5.1 (Room 205A)
Spatially Enabling Government V

PC-IDEA Initiatives towards Spatially Enabling the Americas (61)
Luiz Paulo Souto Fortes, Esteban Tohá González, Valéria Oliveira, Henrique de Araújo

The Permanent Committee for Geospatial Data Infrastructure of the Americas (PC-IDEA) was established on February 29, 2000, based on Resolution #3 of the 6th United Nations Regional Cartographic Conference for the Americas (UNRCC-A, 1997), to maximize the economic, social and environmental benefits derived from the use of geospatial information. This is accomplished through knowledge and the exchange of experiences and technologies between countries, based on common standards that would allow the establishment of the Geospatial Data Infrastructure of the Americas. In addition, PC-IDEA implements the regional mechanism associated with the United Nations Global Geospatial Information Management (UN-GGIM) initiative. Currently composed of 24 countries of the Americas (three from North America, seven from Central America, eleven from South America and three from the Caribbean), PC-IDEA implements the resolutions of the UNRCC-A conferences, held every four years. This paper describes the activities carried out by PC-IDEA during the 2009-2013 term.
Based on the resolutions issued by the 9th United Nations Regional Cartographic Conference for the Americas (UNRCC-A), held in August 2009 in New York, PC-IDEA established a Working Group on Planning (GTplan) during the 5th Executive Board meeting held in May 2010 in New York. This working group is composed of representatives of Brazil, Canada, Chile, Colombia, Cuba, Guatemala and Mexico, under the leadership of Chile and the co-leadership of Canada. Three GTplan meetings have been held so far, through which a working plan was established covering seven themes, each under the responsibility of a country representative: institutional capacity building (Colombia); standards and technical specifications (Mexico); best practices and guidelines for the development of SDI (Canada); innovations in National Mapping Organizations (Brazil); knowledge gathering on topics relevant to SDI for the region (observatory on SDI) (Guatemala); assessment of the status of SDI development in the Americas (Cuba); and technological means for discussions related to SDI (Chile). Based on a questionnaire designed by GTplan and administered to PC-IDEA member countries in 2011 on those themes, the activities to be carried out by PC-IDEA until 2013 have been detailed and are also included in this paper. Areas to be especially addressed are capacity building, standards and technical specifications, and best practices and guidelines for the development of SDI.

e-government services to support spatial planning through an effective exchange of geo-information between involved parties (86)
Marije Louwsma

Land administration is one of the domains in which the use of geo-information in e-government services plays an important role. Land administration data, a national key register and part of the spatial data infrastructure, are used not only for their primary purpose, namely to guarantee legal certainty, but also to guide spatial development.
Regional governments use land administration data as a basis for implementing policy goals, for example through land consolidation projects. Re-allotment of land and the accompanying property rights is the main instrument of land consolidation for achieving a suitable land allocation in line with spatial policy. The Dutch cadastre supports provinces in land consolidation projects, drawing upon its expertise on (the use of) land administration data and legal certainty. The introduction of a national spatial data infrastructure opens possibilities for new applications that enable a better exchange of geo-information between the parties involved in land consolidation projects. E-government services are considered to have advantages for all parties involved. They promote efficient and effective use of geo-information, which benefits the government (province). In particular, the possibility of asking citizens (title holders) to provide geo-information through the internet allows more automated processing of the data, which increases efficiency and reduces mistakes. The benefit for the citizens involved lies in better access to information, and the information presented or requested can be tailored to their needs. In order to facilitate the development of e-government services, an extensive empirical study has been conducted to assess the feasibility of e-government in land consolidation projects for exchanging geo-information between the government concerned (province) and citizens (title holders). A user-centred design process was adopted to develop and assess the web service. Investigating user needs is important to maximise the use of the proposed web service and to tailor the design to those needs, consequently optimising usability. The user-centred design process consists of, among others, analyses of the government requirements, the use and user requirements, and the context in which the e-government service operates.
The overall conclusion of the study is that it seems feasible to develop a web service that enables the exchange of geo-information between title holders and the province. However, it should be complementary to existing ways of exchanging geo-information between the parties involved, as indicated by the interviewed provinces and by the group of survey respondents who prefer forms of contact other than a web application. In this paper we use the results of this study to reflect on some of the important issues in the design and use of e-government applications. The user-centred design process adopted here, as opposed to the often-applied technology-driven approach, will be discussed. Furthermore, we focus on the online submission of geo-information by citizens to government, e.g. regarding land lease contracts or wishes regarding the new land allocation.

Spatially enabling e-government applications using an Open Source Geographic Information service platform (148)
Jani Kylmäaho, Antti Rainio

A common reference architecture for e-government services has been defined in Finland, along with a supporting Geographic Information (GI) reference architecture. Based on the GI architecture, the National Land Survey of Finland is building a GI service platform. In the first phase, already in production, the platform can be used to publish embedded map clients with basic functionality using common web content management systems. All map data consumed by the client is provided through standardized OGC service interfaces, such as WMS. In the second phase, the functionality of the published map clients is extended to enable user interaction with various background systems offering service interfaces. The user will be able to draw information on the map and save the results as private information or as information to be viewed, for example, by an authority. In the third phase, the platform may offer a map-based user interface for spatial analysis using, for example,
statistical data and algorithms invoked through standardized WPS procedures and data from WFS services. The platform is based entirely on Open Source products and is being developed using agile software development methods. The source code of the platform has been made available free of charge under an EUPL and MIT dual licensing scheme. The platform’s modular architecture has enabled the formation of a development network in which multiple development projects are run by various governmental organizations. The projects can use the basic functionality of the platform as a basis for building new bundles of functionality for more specific use cases. In turn, the development projects contribute their enhancements back to the platform repository, from which they can be reused under an Open Source license.

Geo-spatial Technology Based Cadastral Mapping Services and Solution: A Case Study of the State of Sikkim in India (53)
L P Sharma

The Government of India has launched an ambitious programme called the National Land Record Modernization Programme, in which digitization of cadastral maps and their integration with the Records of Rights (ROR) is a key component. The state governments are provided with financial and technical support to enable them to achieve this goal. The State of Sikkim lies in the north-eastern part of India in the Himalayan range. The state has 2600 cadastral maps for nearly 456 revenue villages, prepared after the survey of 1978. These maps were initially scanned and digitized in AutoCAD (DWG) format in 1997 and archived for reference. They have now been converted to shapefiles and geo-referenced in ArcGIS to produce mosaics for revenue villages, revenue circles, subdivisions, districts and the state. Other basic GIS operations and functionalities include digital verification of scanned images and digital verification of vector data for digitization errors such as undershoots/overshoots, dangles, slivers, missing plot IDs, etc.
Finally, the vector data are imported into a spatial database. Both raster and vector mosaics are maintained in parallel. In the present scope of work, cadastral maps are maintained within village boundaries, with a proper village index defining the association, direction and orientation among the plots constituting the village. This ensures a “whole to part” approach and keeps errors confined to village boundaries. An open source GIS software tool called Bhu-naksha has been developed to take care of map updating, mutation and linking with the Records of Rights. The mutations of parcels that took place from 1978 to 2011 are being taken up revenue village by revenue village. The moment the cadastral maps for one revenue village are fully updated with the latest mutations, any further mutation of a parcel in that village becomes part of the online mutation process. The Bhu-naksha software tool has options for dividing the parcel in question into the desired number of plots with the desired directions, areas and topology, using one of several methods. A citizen applies for the mutation of his land at a counter, where his details are entered and a numbered receipt is generated. If all papers are in order, he will receive his computer-printed and signed Record of Rights and the map of his plot of land within the next half hour. Even when there is no mutation, a citizen can walk in any number of times and walk out with a copy of a computer-printed map by paying the requisite map fee. The system caters to both Government to Government (G2G) and Government to Citizens (G2C) services, entirely within the open source GIS domain.

Establishment of the Census Geography in Chinese Taipei (69)
Bor-Wen Tsai, Chin-Hong Chen, Jeremy Shen

Census data can yield valuable information for government policy-making. However, there was no dedicated system for census data in Taiwan.
Data on individual units (individual persons/households) were aggregated by jurisdictional units, either in digital text form or in tabular report form. The critical issues are that data aggregated by jurisdictional units are usually too coarse to provide detailed information on local areas of interest, and that they make it difficult to illustrate spatial distribution and variation. This paper reports the establishment of a census geography in Taiwan. It has become part of the national spatial data infrastructure of Taiwan’s National Geographic Information System (NGIS). A census geography is a mechanism for census data to be associated explicitly with spatial location. It entails the spatial allocation and spatial aggregation of census data. A geo-referencing system for spatial allocation and a hierarchical architecture of census geographic areas are evaluated and designed. This architecture is able to provide spatially explicit and detailed information for census data while conforming to the existing jurisdictional system. The designed architecture comprises a statistical area and levels of dissemination areas for data provision at different levels of detail. In addition to census data, custom dissemination areas based on the same statistical area are allowed, to serve the provision of socio-economic data such as crime or health data. The statistical area is the basic unit for data aggregation; the major concerns are the protection of privacy and the size of the unit, in terms of both spatial and attribute considerations. The dissemination areas are the units for data distribution, with four levels of detail for different applications. An evaluation of the existing NGIS database was also conducted to make the best use of existing data in implementing the census geography.
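The core of aggregation with privacy protection, as described above, can be sketched as counting records per statistical area and suppressing counts below a minimum size. The area codes and the threshold of three are illustrative assumptions, not Taiwan's actual disclosure rules.

```python
# Sketch: aggregating individual records to statistical areas, suppressing
# small counts to protect privacy. Area codes and threshold are hypothetical.
from collections import Counter

def aggregate(records, min_count=3):
    """Count records per statistical area; areas below min_count are
    suppressed (returned as None) so individuals cannot be identified."""
    counts = Counter(r["area"] for r in records)
    return {area: (n if n >= min_count else None)
            for area, n in counts.items()}

households = [
    {"area": "A-001"}, {"area": "A-001"}, {"area": "A-001"},
    {"area": "A-002"},  # a single household: publishing it would be revealing
]
table = aggregate(households)
```

Dissemination areas at coarser levels would then be built by summing the statistical-area counts upward through the hierarchy, which is why the statistical area must already satisfy the privacy threshold.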
Parallel Session 5.2 (Room 204AB)
GEOIDE contributions to public health and environmental risk

A spatio-temporal data mining framework for analyzing the infectious disease risk: a case study of WNV in Ontario (169)
Dongmei Chen, Masroor Hussain

Understanding the spatio-temporal patterns of infectious diseases is of great interest in disease monitoring and management. With the increasing number of spatially referenced disease surveillance and environmental datasets available for infectious disease study, how best to use these data for disease risk evaluation and modeling has become a challenge over the last ten years. In this study we present a spatio-temporal data-mining framework for analyzing the spatial and temporal patterns of diseases and their association with climate and environmental risk factors. We use West Nile Virus (WNV)-related data in Ontario as an example to illustrate this analytic framework. WNV is a mosquito-borne flavivirus-transmitted disease. It was first detected in the Americas in 1999 and has since spread through North and South America. Mosquito and bird populations play key roles in transmitting this disease. The Ontario Ministry of Health and Long-Term Care (OMHLC) has built a surveillance system to collect and test mosquito species and numbers at different locations across Ontario in order to monitor WNV risk. The mosquito data collected through the WNV surveillance program by OMHLC from 2002 to 2009 are used in this study. However, these data are limited by their locations and by errors in the collection and testing methods. In order to predict the distribution of the mosquito population, we have to consider the spatial and temporal distributions of the factors contributing to the growth and survival of mosquitoes, including climate conditions, variation in elevation, vegetation and land use. Using the spatio-temporal data mining framework, we explore the relationships between mosquito growth and the weather, climate and land cover variables.
The land cover variables extracted from remotely sensed data, the climate conditions and the weather data are integrated with the mosquito surveillance data to analyse the patterns. The landscape is grouped into different risk units based on past environmental and weather conditions and mosquito patterns. Mathematical or statistical predictive models can be developed for each risk unit and then integrated into the data mining framework. The results from this framework can be plugged into a decision support system for early warning and real-time simulation of WNV risk.

The spatial distribution of mosquito abundance in Peel Region under weather and environment conditions (253)
Yurong Cao

The changing global climate raises growing public concern that climate and environmental changes can significantly affect mosquito abundance and the spread of mosquito-borne diseases. The abundance of vector mosquitoes, which is driven by many environmental and climate factors including temperature, elevation, vegetation, precipitation and land use, is a crucial contributing factor to outbreaks of mosquito-borne diseases. Using the mosquito data from the surveillance program managed by the Ontario Ministry of Health and Long-Term Care, we study the distribution properties of Culex pipiens/restuans mosquito abundance in Peel Region, Ontario for the period from 2004 to 2011. A generalized linear model is employed to explore the relationship of mosquito abundance with weather and landscape factors. After classifying the mosquito traps in Peel Region into two clusters, a dynamic landscape model is built in GIS to group, visualize and identify the potential high-risk regions for West Nile virus.
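The trap-classification step, grouping mosquito-trap sites into two clusters, can be illustrated with a minimal 2-means sketch. The coordinates below are invented, and the abstract does not state which clustering method was used, so treat this purely as an illustration of the idea.

```python
import random

def kmeans_2(points, iters=20, seed=0):
    """Minimal 2-means on (x, y) site coordinates; illustrative only."""
    rng = random.Random(seed)
    # Initialise the two cluster centres at two distinct sites.
    centres = rng.sample(points, 2)
    groups = ([], [])
    for _ in range(iters):
        groups = ([], [])
        # Assign each site to its nearest centre (squared distance).
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centres]
            groups[d.index(min(d))].append(p)
        # Move each centre to the mean of its assigned sites.
        centres = [
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            if g else centres[i]
            for i, g in enumerate(groups)
        ]
    return centres, groups
```

Each resulting cluster of traps can then be treated as one landscape unit when relating counts to weather and landscape covariates.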
A Network Model to Estimate Early Spatial and Temporal Dynamics of 2009 H1N1 Outbreaks in the Greater Toronto Area (GTA) (239)
Wenyong Fan, Dong Mei Chen

The 2009 influenza A (H1N1) pandemic caused serious concern worldwide due to its high mortality rate, as announced by the WHO, and its new characteristic features, e.g. a higher infection rate among youth. Theoretical and empirical models have been developed to understand epidemic dynamics at different geographic scales. Network models originating in transportation theory have been applied and validated at international and national scales. However, applications of network models are almost absent at small geographic scales, such as the urban scale, possibly due to the lack of social contact and commuting data that can be linked with disease diffusion. In this study, we present an approach integrating a network-based model and a Generalized Linear Model (GLM) to analyze the outbreaks of H1N1 in the Greater Toronto Area (GTA). Spatial and temporal heterogeneity and socio-economic variables are accounted for in the network model to estimate the epidemic dynamics. The GLM, which consists of a linear predictive equation, a logarithm link function and a negative binomial distribution of counted disease cases, is used to estimate the statistical efficiency of the network model. H1N1 outbreak occurrence data for the GTA from April to June 2009, representing the early stage of the epidemic, are used to fit the model. The study has two goals: 1) to estimate the spatial and temporal dynamics of the influenza, and 2) to assess the validity of such an application at the urban scale.
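The GLM structure described above, a linear predictor passed through a logarithm link to give an expected case count, with a negative binomial distribution allowing the variance to exceed the mean, can be sketched as follows. The coefficients and covariate names are invented; in the study they would be fitted to the observed counts.

```python
import math

# Hypothetical fitted coefficients of the linear predictive equation.
beta = {"intercept": -1.0, "temperature": 0.10, "pop_density": 0.05}

def expected_cases(temperature, pop_density):
    """Log link: E[cases] = exp(b0 + b1*temperature + b2*pop_density)."""
    eta = (beta["intercept"]
           + beta["temperature"] * temperature
           + beta["pop_density"] * pop_density)
    return math.exp(eta)

def neg_binomial_variance(mu, k):
    """Negative binomial: Var = mu + mu^2 / k, i.e. overdispersed
    relative to a Poisson model, where Var would equal mu."""
    return mu + mu * mu / k
```

The log link guarantees a non-negative expected count for any value of the linear predictor, and the extra dispersion parameter k lets the model absorb the clustering typical of outbreak count data.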
Exploring the potential of mobile augmentation, situated geomatics and citizen sampling to enhance resilience in communities at risk from tsunami hazards (243)
Nick Hedley, Calvin Chan

The Pacific Coast of North America is at risk from local and tele-tsunamis propagated by seismic events in the Cascadia Subduction Zone (CSZ). The interval between a CSZ earthquake and tsunami inundation of coastal communities in British Columbia will likely be similar to that of the 2011 Tohoku tsunami. One strategy for mitigating tsunami risk is public education. Providing citizens with grounded, relevant information tools and experiential evacuation learning interfaces may enhance their ability to perceive risks and make decisions during real evacuation events. This research explores ‘situated geomatics’, mobile spatial interfaces, augmented reality and geospatial game design, and how they may enhance the resilience of coastal communities using new forms of spatial data infrastructure. Three prototype spatial interface projects were developed in the community of Ucluelet, on the Pacific Coast of Vancouver Island, British Columbia, to explore this tsunami hazard problem space. EvacMap is a location-aware iPad-based interface allowing users to interactively browse between different evacuation maps of Ucluelet (evacuation by distance, time and transportation type). VAPoR is an iPhone-based mobile interface tool that allows us to capture and map community perceptions of risk and evacuation, building collective mental maps of risk perception and evacuation from permanent residents and visitors. The SMARTEE initiative demonstrates how citizens can view everyday spaces augmented with GIS-derived risk overlays and evacuation augments.
By enabling citizens to view everyday spaces through geomatic filters, leveraging mobile spatial interface technologies to sample and map perceptions of risk, and exploring how augmented interface experiences may help communities make new connections between science and real landscapes, we believe we may be able to help communities build enhanced resilience to tsunami hazards through mobile, interactive and situated spatial enablement. We report on the design, rationale, development, implementation and field experiences of these initiatives.

The role of the Geoweb in measuring the determinants of injury (112)
Prestige Makanga

Injury surveillance systems serve to gather data on injury occurrence as well as a range of clinical data. Injury surveillance focuses on acute injury and is used to monitor trends, detect emerging problems, identify interventions and assess their efficacy. These systems are patient based, and data are collected primarily from the cases that appear at trauma units within hospitals. The primary purpose of injury surveillance is to aid injury prevention planning, and such systems work well to identify where the injury burden is greatest as well as the types of injuries that are prevalent. There is an emerging consensus among researchers that the public health approach to injury prevention, which consists of a sequential process of surveillance, risk definition, developing countermeasures and then implementing a prevention program, has had only limited success in translating epidemiological findings into actionable policy instruments that aid injury prevention. Understanding the broader context for injury risk is perhaps a necessary element of injury surveillance. This paper advocates supplementing traditional injury surveillance with a community-based dimension to understanding the determinants of injury by taking advantage of the Geographic Web (Geoweb).
Cape Town, South Africa, which has one of the highest injury burdens in the world, is used as a case study in this research. A Geoweb application is developed that facilitates public participation in creating data on chosen social determinants of injury, which include the location of legal and illegal outlets, community perceptions of safety, and characteristics of the built environment that may cause traffic accidents. These data are analysed together with trauma data to understand the extent to which these determinants actually contribute to injury occurrence. The results of this research further cement assertions that measuring the determinants of injury is an important part of injury surveillance.

Parallel Session 5.3 (Room 2103)
Industry Meeting

See meeting agenda.

Parallel Session 5.4 (Room 205B)
Legal, Economic and Institutional Challenges IV

Geonode and the Australian and New Zealand Spatial Marketplace: An Evolutionary Step in the Discovery, Access and Utilization of Spatial Resources (70)
Jeffrey Johnson, Cathy Crooks

The Australian and New Zealand Spatial Marketplace (the Spatial Marketplace) will be a distributed, online hub of location-based data, products, services and processes, drawn from many sources, that aims to ‘mainstream’ spatial resources, allowing them to be applied across government and industry sectors. The current lack of a single, consistent regional platform for publishing and access means that spatial resources are not yet achieving their full potential for economic, social and environmental transformation. The Spatial Marketplace will leverage the collective capability offered by existing technological and institutional environments to create a simpler, less expensive and more broadly relevant infrastructure.
This presentation describes the development efforts behind the Spatial Marketplace and how they will benefit SDI building and data sharing across private organisations and national, regional and local governmental authorities throughout Australia and New Zealand. Working with OpenGeo and its open source GeoNode software stack, the Spatial Marketplace will:
• allow easy online publishing and distribution of spatial resources;
• allow organisations that produce spatial information to make their information publicly accessible without complex proprietary processes or costly infrastructure;
• provide an accessible, easy-to-use services environment that allows easy discovery and access of spatial resources;
• shorten the supply chains for accessing resources, encouraging innovation, enabling regular content updates, and creating new roles and opportunities as value adders and resellers for users of spatial resources; and
• offer a social networking, community environment for the transaction of ideas, information and services related to the location industry.
GeoNode is widely used to create collaborative, SDI-building capabilities in support of multi-hazard modeling and disaster risk assessment by the World Bank, the Global Earthquake Model Foundation, the Australia-Indonesia Facility for Disaster Reduction and others. The Spatial Marketplace aims to supplement the current GeoNode capabilities to provide complete publishing, discovery, access, distribution and interoperability services for all spatial information resources in Australia and New Zealand.
Innovation in the following key areas will be required to achieve this outcome:
• expansion from a predominantly data focus to the inclusion of all spatial resources – data, products, services and processes;
• transition from a predominantly public sector focus to meeting the needs of all sectors – public, private, academic and community;
• transition from sectoral and jurisdictional silos to a single integrated regional Spatial Marketplace for Australia and New Zealand; and
• transition from monolithic roles to discrete roles in a spatial resource value chain or network.
The Spatial Marketplace is jointly sponsored by the public, private and academic sectors. The Australia New Zealand Spatial Marketplace Steering Committee, responsible for delivery of the pilot, has representation from the public, private and academic sectors – ANZLIC representing the federal and state governments of Australia and New Zealand, the Public Sector Mapping Agency (PSMA) Australia, the Spatial Information Business Association (SIBA) of Australia and New Zealand, and the Cooperative Research Centre for Spatial Information. To develop the Spatial Marketplace, the Steering Committee will implement a software stack based on GeoNode that makes it extremely simple to share data, automates metadata creation, versions metadata, provides for secure distribution and promotes ‘living’ data.

Avoiding Disasters: The challenges in creating a robust infrastructure for Cables and Pipelines (125)
Caroline Groot-Pennekamp, Vanessa Gosselink-van Dijk

Since July 2010, a nationwide spatial data infrastructure (SDI) for the exchange of cable and pipeline information has been available in the Netherlands. Every excavator within the Dutch borders is obligated to request current, digital and geo-referenced information on cables and pipelines. This is enabled through the Underground Cables and Pipelines Information Exchange Act, also known as WION. The goal of the Act is to prevent accidents in excavation areas.
Fewer accidents will improve the safety of citizens and excavation workers and will prevent financial and economic losses due to damage to cables and pipelines. The latter is becoming ever more important as a result of society’s increasing dependency on the internet. This paper describes the challenges involved in compiling appropriate, current, complete and easy-to-understand geo-information for excavation uses, and also addresses the solutions selected. Multiple hurdles had to be taken to overcome these challenges. For instance, in order to obtain complete information, an important hurdle was taken through the transformation from a voluntary system to a mandatory one. A new system was created with the already existing information centre for excavators - founded in the nineties by the major utility owners - as its starting point. This decision was made by the Ministry of Economic Affairs, responsible for the telecom and utility sector, with the commitment to introduce legislation on the information exchange. In order to get the information to the right spot on time, a governmental agency was selected by the Ministry to become the independent intermediary between network agencies and excavators. This is the Dutch Kadaster, now responsible for the information exchange. Another agency acts as supervisor of the whole information exchange system. The next SDI challenge was to design a simple portal for ordering and retrieving geo-information. The Dutch Kadaster designed a user-friendly interface based on demands from the excavation community. It was also very important to create an easy-to-use and legible digital map.
All parties involved were consulted, and together they designed a new information model, including cartographic representation, based on international and national standards. These models make it possible to combine the large-scale base map of the Netherlands with the individual maps from the various cable and pipeline network agencies into a multi-layered map that is easy for excavators to use. The last challenge was to create a robust and future-proof SDI. It is important to have solid funding for the SDI; by selecting the right business model for financing the exchange of data, this important challenge was met. Through cost recovery, a fee per request for information, it is now possible to build and maintain a robust SDI for the future. In 2011, more than 400,000 information requests and deliveries (combined maps) were successfully dispatched. This is an excellent base for innovation in the near future.

Towards Sustainable Stewardship of Digital Collections of Scientific Data (130)
Robert Downs, Robert Chen

The digital revolution has vastly increased the ability of the scientific community to collect and store a tremendous variety and quantity of data in digital form, representing a potentially irreplaceable legacy that can support scientific discovery and scholarship in both the present and the future. However, it is not yet clear which organizations or institutions can and should maintain and store such data, ensuring their long-term integrity and usability, nor how such long-term stewardship should be funded and supported. Many traditional information preservation and access institutions such as libraries and museums are struggling to develop the skills, resources, and infrastructure needed for large-scale, long-term digital data stewardship. Government agencies often have strong technical capabilities, but are subject to political and budgetary pressures and competing priorities.
Private organizations and companies can bring to bear innovations not only in technology but also in economic approaches that could provide financial sustainability. Developing long-term collaborative partnerships between different types of organizations may be one approach to developing sustainable models for long-term data stewardship. The development of objective criteria and open standards for trusted digital data repositories is another important step towards sustainable data stewardship. A critical challenge is the development of viable economic models for ensuring that the resources needed for long-term stewardship are put in place, while at the same time addressing the needs of the scientific community and society more generally for open access to scientific data and information resources. The development of a robust spatial data infrastructure can not only help reduce both the short- and long-term costs of data stewardship, but also provide a framework for the establishment and evolution of trustworthy data repositories that will be available for future generations of users to discover, access, and use the scientific heritage that is being created today.

Bridging the Gap Between Traditional Metadata and the Requirements of an Academic SDI for Interdisciplinary Research (254)
Claire Ellul, Daniel Winer, John Mooney, Joanna Foord

Metadata has long been understood as a fundamental component of any Spatial Data Infrastructure, providing information relating to the discovery, evaluation and use of datasets and describing their quality. Having good metadata about a dataset is fundamental to using it correctly and to understanding the implications of issues such as missing data or incorrect attribution on the results obtained for any analysis carried out. Traditionally, spatial data was created by expert users (e.g. national mapping agencies), who created metadata for the data.
Increasingly, however, data used in spatial analysis comes from multiple sources and may be captured or used by non-expert users - for example academic researchers - many of whom are from non-GIS disciplinary backgrounds, are not familiar with metadata, and are perhaps working in geographically dispersed teams. This paper examines the applicability of metadata in this academic context, using a multi-national coastal/environmental project as a case study. The work to date highlights a number of suggestions for good practice, issues and research questions relevant to an academic SDI, particularly given the increased levels of research data sharing and reuse required by UK and EU funders.

Parallel Session 5.5 (Room 2104B)
Spatially Enabling Industry I

Earth observation satellites helping out electric utilities (240)
Claire Gosselin

Electric utilities need a variety of information to manage their electricity transmission and distribution networks. This information is necessary for activities such as visual asset inspection, vegetation management, transmission line route selection, outage scouting and damage assessment, building project prioritization, infrared analysis of lines and stations, analysis of ground deformation, and so on. Some of these tasks are carried out using conventional aerial photography, lidar or helicopter-based video, or simply by sending field crews out along the transmission and distribution lines. Optical high-resolution satellite imagery can be an interesting alternative to conventional photographic and laser surveying and/or ground or helicopter patrols for many aspects of the daily management of distribution and transmission lines. The resolution of satellites such as GeoEye-1, WorldView-2, QuickBird and IKONOS is now approaching the mapping resolution that could previously only be provided by aerial photography.
Already extensively used by forestry organizations and agencies, this imagery is also used by many power companies for research, development and environmental analysis, and in some countries utility companies are beginning to use it to monitor transmission line rights-of-way. The major benefits of this technology include its relatively low cost, the rapidity of data acquisition, and the fact that it requires few, if any, ground- or airborne-based inspections. Earth observation satellites can easily be programmed to acquire new images within a specified time window and according to requested specifications. In this presentation we examine to what extent existing, usually costlier, practices can be replaced by EO-based data.

Unmanned Aerial Mapping Solution for Small Island Developing States (238)
Raid Al-Tahir, Marcus Arthur

Developing countries are characterized by rapid urban growth and dynamic changes in land use patterns. The majority of this urban growth in small island developing states (SIDS) occurs in coastal areas and other environmentally sensitive areas. Knowledge and mapping of urbanization and other land use trends provide critical information in support of sustainable development and environmental protection. On the other hand, deficiencies in spatial data and in their currency create challenges for informed decision making in these regards. Remotely sensed data from airborne and spaceborne sensors provide a significant source of spatial information for national medium/large-scale topographic and land cover maps. Despite the obvious benefits and seeming ease of mapping using these techniques, they remain underutilized by the small islands, mainly due to the high costs and the specialized personnel and equipment required. Unmanned Air Systems (UAS) provide a viable and affordable alternative, especially for small-area coverage.
When compared to conventional satellite and airborne imaging, unmanned air systems have the advantages of providing more flexible, rapid, efficient and weather-independent data acquisition. This paper presents an overview and appraisal of the technology, considering the cost factors and the operational conditions in the tropical small developing islands of the Caribbean. The paper proposes and evaluates criteria for selecting a suitable system for the region, with a focus on maintaining a low-cost system while still achieving adequate accuracies and response speeds. Finally, the paper deliberates on the relevant workflow and photogrammetric aspects of image acquisition and processing.

Moving Vehicle Extraction from One-Pass WorldView-2 Satellite Imagery (230)
Rakesh Kumar Mishra, Yun Zhang

Moving vehicle extraction is a key source of information for traffic planning, security surveillance, and military applications. The high-resolution images acquired by modern satellites such as QuickBird and WorldView-2 have made it feasible to use spaceborne data for vehicle extraction. In this paper, a new technique is presented to extract moving vehicles from one-pass WorldView-2 satellite images. The WorldView-2 satellite has three sensors: one panchromatic (Pan) and two multispectral (MS-1: BGRN1, and MS-2: CYREN2). Because of a slight time gap in acquiring images from these sensors, WorldView-2 images capture three positions of a moving vehicle. Therefore, theoretically, it is possible to extract moving vehicles from the MS-1, Pan, and MS-2 images using change detection analysis. In practice, however, this extraction brings many challenges. The different resolutions of the Pan and MS images and the low resolution (2 m) of the MS images make moving vehicle extraction difficult. Another challenge is that the time interval between the Pan and the two MS images is very short, so only a very small shift of a moving vehicle appears between these images.
Due to the varying scales and relief distortions in the co-registered Pan and MS images, existing change detection methods are incapable of detecting moving vehicles in these images. In order to overcome the aforementioned challenges, this paper proposes a new methodology to automatically and accurately extract moving vehicles from the MS-1, Pan, and MS-2 images captured by the WorldView-2 satellite in one pass. A new motion detection algorithm using Principal Component Analysis has been developed, which examines the MS-1 and MS-2 images and detects the objects that are in motion. The novelty of this algorithm is that there is no need for road extraction prior to vehicle extraction; earlier methods required roads to be extracted either manually or from GIS data. The methodology has been tested on a highway scene from WorldView-2 imagery and the accuracy of the results is demonstrated.

Determining culvert locations and size across bare-earth LiDAR-DEMs, automatically (188)
Charity Mouland, Kim Wen, Jae Ogilvie

This presentation illustrates how bare-earth DEMs can be used to determine ideal culvert configurations and sizes across road- and trail-accessed landscapes through automated means. Doing so is important to ensure road stability and to prevent inadvertent impoundments and related flooding on the upslope sides of roads and trails. The presentation focuses on automated ways to minimize the requirements for culvert installation through least-cost derivations of the road and trail locations, one road or trail segment at a time. All of this is facilitated by the LiDAR-derived wet-areas mapping approach, which takes advantage of available algorithms to map flow direction, flow accumulation, depth-in-sinks, and least-cost surfaces. To make the essential hydrological connections, culverts are manually “burned” into the DEM before the algorithm is run, so that water is channeled at potential stream/road crossings.
Manually digitizing each culvert (and ensuring its correct location and size) is a very tedious and time-consuming task. Automating this process can lead to substantial reductions in road and trail construction and maintenance costs, and to a lowering of the hydrological risks that lead to flooding, soil erosion, and slope instabilities. The presentation will describe the tool in depth and will show examples of its effective application in Prince Edward Island, New Brunswick and Alberta. The illustrations will also show how the process is used as part of daily GIS-based planning routines that deal with road and trail layout in the forestry industry.

Parallel Session 5.6 (Room 205C)
3DGeoInfo: 3D Data Acquisition

Keynote Speaker
Ioannis Stamos

From the Volumetric Algorithm for Single-Tree Delineation towards a Fully-Automated Process for the Generation of Forests
Arno Buecken and Juergen Rossmann

When we introduced the volumetric algorithm for single-tree delineation at 3D GeoInfo 07 in Delft, it was already a powerful algorithm with a high detection rate and the capability to generate trees for forestry units with only limited user interaction. Even for the whole 82 km2 test area this was convenient. But as test areas grow, even this limited amount of user interaction becomes an issue. For large test areas of 1000 km2 and more, it is essential to use a fast algorithm that can work without user interaction, or at least with interaction at the level of test areas rather than of individual forestry units. In this paper we show how to improve the computational complexity of the volumetric algorithm and how to automatically calculate the free parameter that was set interactively in the original implementation. We use the Receiver Operating Characteristic (ROC) for the estimation of a heuristic, an approach used to model and imitate the human decision process when it comes to making a parameter decision in statistical processes.
It turns out that this method, which is commonly used in other fields of science, is also valuable for many other geo-information processes.

A Service-Based Concept for Camera Control in 3D Geovirtual Environments
Jan Klimke, Benjamin Hagedorn and Juergen Doellner

3D geovirtual environments (3D GeoVEs) such as virtual 3D city models serve as integration platforms for complex geospatial information and facilitate effective use and communication of that information. Recent developments towards standards and service-based, interactive 3D geovisualization systems enable the large-scale distribution of 3D GeoVEs, including by thin-client applications that work on mobile devices or in web browsers. To construct such systems, 3D portrayal services can be used as building blocks for service-based rendering. Service-based approaches for 3D user interaction, however, have not been formalized and specified to a similar degree. In this paper, we present a concept for service-based 3D camera control as a key element of 3D user interaction used to explore and manipulate 3D GeoVEs and their objects. It is based on the decomposition of 3D user interaction functionality into a set of services that can be flexibly combined to build automated, assisting, and application-specific 3D user interaction tools, which fit into the service-oriented architectures of GIS and SDI-based IT solutions. We discuss 3D camera techniques as well as categories of 3D camera tasks and derive a collection of general-purpose 3D interaction services. We also explain how to efficiently compose these services and discuss their impact on the architecture of service-based visualization systems. Furthermore, we outline an example of a distributed 3D geovisualization system that shows how the concept can be applied to applications based on virtual 3D city models.
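The key idea of the camera-control paper, decomposing interaction functionality into small services that can be flexibly combined into application-specific tools, can be sketched abstractly. This is not the authors' API; the state fields and service names below are invented for illustration.

```python
from dataclasses import dataclass, replace

# Hypothetical, deliberately minimal camera state; a real 3D portrayal
# service would carry full pose, orientation and projection parameters.
@dataclass(frozen=True)
class Camera:
    x: float
    y: float
    altitude: float

def move_to(x, y):
    """A camera 'service': a state transformation that can be composed."""
    return lambda cam: replace(cam, x=x, y=y)

def set_altitude(altitude):
    return lambda cam: replace(cam, altitude=altitude)

def compose(*services):
    """Chain independent camera services into one interaction tool."""
    def tool(cam):
        for service in services:
            cam = service(cam)
        return cam
    return tool

# An application-specific tool built from general-purpose services.
fly_to_overview = compose(move_to(10.0, 20.0), set_altitude(500.0))
```

Because each service is a pure state transformation, services can be recombined freely, and the resulting tools slot naturally into a service-oriented architecture where camera state is passed between client and portrayal services.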
Parallel Session 5.7 (Room 2101)
Innovations Geospatial Quebec

Geomatics in the Service of Corporate Social Responsibility
Michèle Laflamme - Borealis

Until recently, the concept of business ethics and social responsibility for a company in the extractive industry consisted of little more than internal administrative procedures, applied in practice to varying degrees. Increasingly, media attention to the respect of human and environmental values is pushing companies to change the way they operate. In this context, what is the place of geomatics? Because it provides mastery of geographic position, geomatics has been present in major mining, oil and gas projects since its beginnings. But because its decision-support tools make data integration possible, can geomatics also help these companies improve the management of their stakeholders and of their social responsibility? If so, how? Borealis is involved in projects for managing the social and environmental impact of installing large infrastructure. To do so, a good number of tools and technologies from the fields of business intelligence and geomatics have been used and developed. Our solution aims to facilitate access to strategic social and environmental responsibility information for large companies in the mining, oil and gas sectors. Our Web-based tools serve as aids to decision making and compliance management. Our solution makes it possible:
- to display on maps corporate data coming from different sites and locations around the world;
- to display data from different departments, or even different organizations, on the same medium;
- to relate strategic project data (e.g. infrastructure, environmental survey results, complaints, incidents), allowing companies to act proactively;
- to let funders and other strategic partners visualize the real impacts of projects by tracking indicators throughout the project life cycle; and
- to visualize the commitments made by companies, or the conditions imposed by government authorities, that are tied to the use of the territory. These conditions and commitments are tracked using our tools (interactive maps, reports and dashboards).

Planning Infrastructure Management and Maintenance with Decision-Support Geomatics
Marie-Josée Proulx - Intelli3

As part of an overall strategic planning process, companies face the challenge of reducing workloads and investments while ensuring growth in order to remain competitive in their market. They face additional pressure to increase profitability and demonstrate return on investment. For several years, the Montreal Port Authority has operated a program for inspecting and assessing damage to its rail infrastructure. Although compiling the annual surveys carried out by the maintenance vehicle made it possible to identify the various rail defects, and compiling the records of maintenance work (e.g. sealing zones) made it possible to locate those activities, the combined analysis of this information remained complex to implement with a GIS. Analyzing this information on the state or operation of the facilities (e.g. % rehabilitation, % wear, number of defects, number of work orders), both spatially and temporally, requires combining several indicators at the same location over a long period of time to justify an intervention.
To facilitate the analysis of such problems, business intelligence solutions such as dashboards provide an immediate view of critical data. Intelli3's geospatial business intelligence solution, Map4Decision, was selected to better visualize the data at different levels of detail, from the sectors where the condition of the rail tracks is critical down to the detailed information for each rail segment. It also made it possible to standardize the presentation of the various works and defects over an atomic segmentation, and to use these segments as the basis of a comparable analysis for producing data at a more general level. The port authority can thus make strategic decisions more quickly and present both the general picture of the information (e.g. by sector) and the detailed picture (e.g. by wharf, by track, by route) over time. The various display modes supported by the solution now allow managers to better assess the condition of the infrastructure, and allow the staff responsible for works, inspections or finance to consult the information themselves. Various indicators will be presented, along with a demonstration of the solution illustrating the benefits for planning infrastructure management and maintenance.

A Mobile Mapping Solution for the Quebec City Police
Jimmy Perron - NSim Technology

The Quebec City police service has deployed a new mobile mapping solution in all patrol cars across the Quebec City territory. This map-centric solution had to meet a wide range of needs expressed by the police service. First, it had to integrate and display the City of Quebec's detailed base mapping.
The solution had to connect to several external systems in order to present information simply and quickly:
· 911 computer-aided dispatch systems;
· GPS positioning system;
· routing system;
· building photo database;
· cadastral database;
· operational planning of special events.
The solution also had to allow all users to exchange information in real time directly on the map (geocollaboration). In addition, it had to be deployed in an environment with genuinely challenging constraints: the allotted bandwidth was 3 to 7 MB per day per vehicle, on mobile devices with limited performance. Usability was also addressed, and a touch user interface was developed specifically to meet these needs. We will present in detail the requirements, the challenges and the solutions implemented by the team to develop and deploy such a system in the Quebec City police service environment.

The Role of FME and FME Server in Data Manipulation and Dissemination Processes
Lesley MacKenzie - Solutions Consortech

FME 2012 and FME Server 2012 are powerful geospatial data translation and processing tools that let organizations automate many processes. In its best-known use, FME links different systems by translating not only spatial data formats but also schemas. An organization can then use FME Server to set up a data portal for its internal and external clients.
In this presentation, we will see how FME fits into many of an organization's spatial data management processes, thanks to its ability to read data in a multitude of vector, raster, 3D and lidar formats, and to its powerful manipulation and validation tools. Real-world experiences and projects will serve as examples of different types of manipulation, some simple, some complex, in which FME played a crucial role in meeting the needs of users and clients.

Optimized Data Collection for ITS (Intelligent Transportation Systems)
Pierre-Paul Grondin - Trifide Group

To speak of transportation is also to speak of inventories, analyses, monitoring and management; themes and activities that are generally practiced in silos in our largest cities and municipalities. At the same time, since 1997, the year the intelligent transportation systems society was created, we have been talking about intelligent transportation and everything it implies: monitoring, regulation, control, management, dissemination, forecasting, fare collection, coordination, tracking, maintenance, processing, collection, planning, counting, detection, and so on. And all of this will increasingly have to be done in real time. But at what cost? Under what conditions? How? When? Surveys and analyses in transportation generally rely on well-defined, recognized standards and are carried out by specialists: civil engineers are the specialists in pavement analysis, land surveyors in certified surveys, not to mention road safety specialists, and so forth. Their turnaround times, however, are such that it is unthinkable to associate them with the concept of "real time". Mobile mapping, a very young discipline which, through the equipment it incorporates, opens an interesting avenue for integration, could be one element of the solution.
The inertial system, necessary for positioning accuracy, could very well determine the ride comfort index. Retro-reflectivity measured with laser scanners, although it does not meet MUTCD standards, makes it easy to differentiate the quality of reflective surfaces, and is therefore applicable to road signage, both signs and pavement markings. The oblique video views generated for mobile mapping itself are already used for pavement analysis. The detection, identification and positioning of certain infrastructure (guardrails, road signs) is easily done through image recognition processes or, again, using the laser data. In short, within a few years we will see the integration of several disciplines whose ultimate goal will be efficiency, speed and cost reduction, the latter being an asset for smaller municipalities and regional county municipalities (MRCs).

Parallel Session 6.1 (Room 205A)
Spatially Enabling Government VI

Nature Conservation and Biodiversity Monitoring Strategy through a Spatial Data Infrastructure (89)
Cristina Oana, Cristian Vasile - Registrd, Simona Staiculescu
[paper: refereed proceedings article]

One of the main concerns of European Union environment policy is the deterioration of natural habitats and the threats posed to certain species under the Habitats Directive. Romania is also in the process of developing its Spatial Data Infrastructure, in response to the Geographic Information 2000 initiative launched by the European Commission in 1996 and the Infrastructure for Spatial Information in Europe (INSPIRE) initiative launched in 2001.
This article focuses on inventorying and mapping natural habitats and wild species of Community interest, and on implementing a geoportal application that helps users share geospatial information in the environment and sustainable development sectors.

Landscape-wide LiDAR Mapping of Vegetation Type by Soil Moisture Regime (140)
Doug Hiltz

The ability to accurately predict sites where different rare and invasive vegetation species are likely to be found, or have the potential to establish, has widespread natural resource management implications. As efficient land use planning becomes more and more of a concern, the ability to identify high-risk vegetation areas is increasingly important. This presentation shows how LiDAR-based point-cloud data can be transformed into high-resolution soil moisture regime maps as determined by the DEM-derived depth-to-water index (DTW), together with slope and aspect. The resulting map was verified by way of ridge-to-depression vegetation surveys, noting species type and abundance within 1 m² plots against already catalogued species-specific soil moisture regime preferences. All of this work was done for two case study areas in Alberta: the Willmore Wilderness Park within the Foothills, and the EMEND area within the boreal plain north of Peace River. Using these data, potential habitat/risk maps can then be produced for invasive and rare species across the landscape. These maps provide valuable a priori knowledge that can be used to streamline landscape management and planning before entering the field, saving time and money. Invasive species programs can be focused on areas where patterns show the highest risk of spread. At the same time, areas can be weighted based on their potential to support rare vegetation species before being allocated for development in land use plans.
Preliminary results have shown that the vegetation types supported in given areas can be predicted with reasonable accuracy (approximately 70-80% correspondence), and that accuracy can be improved by taking factors such as slope and aspect into account. However, there are still areas where the predicted vegetation type does not correspond to field observations. Future research will focus on further refining the predicted patterns and definitively linking species of concern to them.

Building an On-demand Global Agriculture Drought Information Web Service System (142)
Meixia Deng, Liping Di, Genong Yu, Ali Yagci, Chunming Peng, Bei Zhang, Dayong Shen

Demand for detailed and accurate assessments of agricultural drought, from local to global scales, has grown in recent years. However, current agricultural drought information systems cannot meet this demand because of their limitations: regional or country-level coverage only, very coarse spatial and temporal resolutions, no on-demand drought information product generation and download services, no online analysis tools, no interoperability with other systems, and ineffective agricultural drought monitoring and forecasting. To overcome these limitations and meet the growing demand, the Center for Spatial Information Science and Systems (CSISS) at George Mason University is building an open, interoperable, standards-compliant, Web-service-based, on-demand global agriculture drought monitoring and forecasting system (GADMFS) (http://gis.csiss.gmu.edu/GADMFS/). The development of GADMFS leverages the latest advances in geospatial Web services, interoperability and cyberinfrastructure technologies, and draws on many existing and ongoing research efforts, e.g. the GeoBrain (http://geobrain.laits.gmu.edu/) and National Crop Progress Monitoring System (http://csiss.gmu.edu/dss/) research projects.
GADMFS will provide worldwide users with timely, on-demand, ready-to-use agricultural drought data and information products, as well as improved global agricultural drought monitoring, prediction and analysis services, drawing on more than 30 years of global remote sensing data from NASA (e.g. MODIS data) and NOAA (e.g. AVHRR data). The system relies on drought-related remotely sensed physical and biophysical parameters, such as soil moisture and drought-related vegetation indices (VIs, e.g. NDVI), to characterize global agricultural drought with a severity index at high resolution (up to 500 m spatial and daily temporal). For monitoring, the system links live to near-real-time satellite remote sensing data sources from NASA and NOAA. Multiple NDVI-based agricultural drought indices, such as the vegetation condition index (VCI), are computed from the baseline and its dynamics for drought monitoring. For drought prediction, the system uses a neural-network-based modeling algorithm trained with current and historical vegetation-based and climate-based drought index data, biophysical characteristics of the environment, and time-series weather data. The trained algorithm establishes a per-pixel model to produce on-demand drought predictions at ~1 km or finer spatial resolution. GADMFS is a contributing component of the Global Earth Observation System of Systems (GEOSS), serving the GEOSS societal benefit areas of agriculture and water. The implementation of GADMFS shows that the open, interoperable drought-related data and processing services of this system have significantly increased the accessibility of remote-sensing-based agricultural drought information to worldwide users. Such a system should also broaden the use of drought indices in applications and research across a wide range of communities.
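The vegetation condition index mentioned in the abstract has a standard per-pixel definition (Kogan's VCI): the current NDVI is rescaled against the pixel's historical minimum and maximum. A minimal sketch of that computation follows; the function and array names, and the toy data, are illustrative assumptions, not code from the GADMFS system.

```python
import numpy as np

def vegetation_condition_index(ndvi_now, ndvi_history):
    """Kogan's VCI: rescale the current NDVI against the per-pixel
    historical min/max, giving 0 (driest on record) to 100 (wettest).

    ndvi_now     -- 2-D array, current composite
    ndvi_history -- 3-D array (years, rows, cols) of past composites
    """
    ndvi_min = ndvi_history.min(axis=0)
    ndvi_max = ndvi_history.max(axis=0)
    # Avoid division by zero where the historical record is flat.
    span = np.where(ndvi_max > ndvi_min, ndvi_max - ndvi_min, np.nan)
    return 100.0 * (ndvi_now - ndvi_min) / span

# Toy 1x2 scene: pixel 0 sits at its historical midpoint (VCI = 50),
# pixel 1 at its historical maximum (VCI = 100).
history = np.array([[[0.2, 0.1]], [[0.6, 0.5]]])
now = np.array([[0.4, 0.5]])
print(vegetation_condition_index(now, history))
```

A production system would compute the min/max baselines over a multi-decade archive and mask cloud-contaminated composites first; the rescaling step itself is exactly this one line.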
Alberta's Wet Areas Mapping Initiative: empowering spatial decision making within government, industry and community groups (176)
Barry White, Paul Arp, Jae Ogilvie

The sustainability of Alberta's forested land base is at significant risk due to unprecedented land use challenges. Innovative, spatially based planning solutions that are economical, timely and ensure positive outcomes are urgently needed by Albertans and our industry partners. Alberta has been working closely with researchers at the University of New Brunswick since 2004 to test the effectiveness of a depth-to-water-table mapping approach. These functional GIS-based datasets predict the location of small water channels such as intermittent streams, often as little as 20 cm wide, and of wet areas that are currently not easily known to resource planners yet are sensitive to disturbance. Successful research trials in Alberta's foothills and boreal regions have moved this approach from the research phase to full implementation. Efforts are underway to map approximately 20.5 million hectares of primarily forested lands in the foothill and boreal regions. The initiative is led by Alberta Sustainable Resource Development, and mapping is completed by our research partners at the University of New Brunswick. Alberta's mapping process incorporates newly acquired LiDAR (Light Detection and Ranging) data to produce maps of superior quality at a resolution of 1 m. These new datasets are shared with multiple government departments and are enhancing the stewardship of Alberta's forested landscapes. Most recently, Alberta has commenced research and mapping in the Athabasca Oil Sands Region and is exploring new and innovative applications within the Alberta energy sector. These spatially based applications include the identification of sensitive lands and previously hidden hydrological features to inform access management and infrastructure placement.
Applications may also include spill management and enhanced strategic planning that better incorporates hydrological risk. Efforts are also underway to enable citizen groups, such as Water Advisory Councils and community-based recreational planning teams, to better achieve their respective stewardship and recreational objectives. Ultimately, this innovative, spatially based tool should serve all Albertans in minimizing their collective environmental footprint on our forested lands.

Microwatersheds Codification for Canada
Ghouse Mohamed S, Ramakrishnan V, Rathika G, Sakthivel Sree, Santhana Raman

Place abstract here

Parallel Session 6.2 (Room 206B)
GEOIDE Demo Session

GeoEduc3D: geomatics for gaming and learning (PIV-24)
Nicholas Hedley, Sylvie Daniel, Rob Harrap

The GeoEduc3D project focuses on the design and development of gaming- and learning-oriented tools based on geospatial technology. The project aims to use mobile and desktop hardware and software to build games in which teenagers, both in the classroom and in informal settings, experience urban space and learn about sustainability, climate change, and how geomatics is used in these fields and in game design. Our graduate students are working on many dynamic projects to explore the various opportunities and issues related to such applications, which allow users to better experience and understand space while being situated in it. During this demo session, we will present our latest prototypes developed on tablets. Some of these applications have been specifically designed to be experienced by the conference participants in Quebec City. The first collection of interface prototypes that will be showcased combines 3D physics, geosimulation, geovisualization, geomatics, tangible spatial interfaces and mobile augmented reality (MAR) to let students interactively explore precipitation, watershed topography, and hydrology in everyday spaces.
The second set of interfaces explores the potential of situated citizen sampling, mobile augmented reality (MAR) and geospatial game design for tsunami education, in collaboration with real communities. The final version of Energy Wars Mobile, an educational game about building energy efficiency, will be presented as well.

A story of two case studies: comparing a Local Climate Change Visioning Process in Delta, BC and Clyde River, NU (PIV-32)
David Flanders

Since 2006, UBC's Collaborative for Advanced Landscape Planning (CALP) has been working with the community of Delta, BC. Over that time, our research has produced dozens of powerful GIS-based 3D visualizations: a library of static, semi-photorealistic images communicating potential future worlds of climate change, adaptation and mitigation. The work has been featured in the news and television media every year since the launch of the GEOIDE Strategic Investment Initiative (2006-2008). More recently, CALP has been involved in a visioning project with the Arctic Hamlet of Clyde River, Nunavut (2009-2012). Community needs, data availability, and an adaptable visioning process led to different themes being discussed with participants, and different technology being employed. Google Earth digital globe technology, with less photo-real but more interactive and accessible modeling, was employed to construct and demonstrate community futures with community and government stakeholders. This demonstration session will allow attendees to manipulate and explore the modeling, and to compare and contrast the two case studies, the technologies they employ, their unique products, how they were useful, and the reactions they prompted within their communities.
Integrating developmental genetic programming and terrain analysis techniques in a GIS-based sensor placement system (SII-PIV-70)
Vahab Akbarzadeh, Meysam Argany, Christian Gagné, Mir Abolfazl Mostafavi, Marc Parizeau

Wireless Sensor Networks (WSNs) allow efficient sensing through the use of many devices distributed in a given environment. WSNs have received a lot of attention in recent years, as it is now possible to use these devices for robust sensing close to the elements of interest, more efficiently than with a single powerful yet expensive sensor. However, the position of the sensors in the environment plays a major role in the performance of a given WSN for a given task, as different sensor positions lead to different levels of performance. Moreover, determining the optimal positions for a given task is a complex problem that can hardly be tackled by humans in the general case, and thus requires algorithmic methods. Our GEOIDE project SII-PIV-70 aims at exploring and developing methods for optimizing the position of sensors in a given environment according to a performance criterion relevant to the sensing task at hand. For that purpose, we developed a variety of methods for optimizing sensor placement, ranging from deterministic algorithms (i.e. gradient descent) to stochastic black-box optimization (i.e. evolutionary algorithms), by way of geometric methods (i.e. based on Voronoi diagrams) and hyper-heuristics. We are also interested in evaluating the performance of these techniques according to the quality of the data we have on the environment (GIS maps) and on the sensors used (sensor models). For the demonstration, we plan to show a 10-minute video, made specifically for the session, presenting the results of applying our algorithms to real GIS maps, in order to illustrate the proposed methods and the results obtained.
Project participants will be present at the booth to provide further explanations of the scientific and technical aspects of the project to interested attendees.

Transportation and Renewable Energy at Urban and Regional Scales: Spatial Data, Techniques and Tools (TSII-201)
Rory Tooke, Sheraz Khan

The TSII-201 project aims to create readily understandable visualizations of the travel patterns and renewable energy infrastructure of changing Canadian cities. The impacts and interconnections of these systems will be used to provide community engagement support and policy analysis. The demo will provide an overview of the project's research questions, methodology and current visualizations. The first component of the TSII-201 project seeks to model and communicate the regional and local resource options for renewable energy. The demo highlights completed and ongoing spatial assessments and visualizations of the various renewable resources for Metro Vancouver. Highlights include: an overview of the spatial datasets available to practitioners wishing to model renewable energy resources; existing models and relevant data processing techniques and analysis; and interactive mapping and communication tools for informing non-expert decision makers of the important considerations related to regional energy generation. The second component of the project is improving the visualization of travel patterns and scenarios across local and regional scales. The demo will present the methodology, data models and preliminary visualizations of travel characteristics, such as travel mode, time, cost and GHG emissions, in various urban contexts.

Dionysius: A Random Relative of Prometheus (PIV-43)
W. John Braun, Doug G. Woolford

The Prometheus Wildland Fire Growth Simulator is a spatially explicit deterministic program that predicts the evolving perimeter of a fire from one or more ignitions.
It has been successfully applied to a number of wildfires in real time as an aid to fire managers who must allocate crews and other fire-suppression resources in a budget-constrained environment. Prometheus now has a close relative: Dionysius, a research-level prototype based on the same principles as Prometheus, but which incorporates a realistic level of randomness. The output from Dionysius is a shape that can be said to contain the true fire with a given probability. This will provide fire managers with the measure of uncertainty required to make better-informed forecasts.

Ice Classification Using SAR Imagery to Support Canadian Ice Service Operations (SSII-111)
David Clausi

MAGIC, the Map-Guided Ice Classification system, has been designed to correctly identify sea ice in synthetic aperture radar (SAR) imagery. MAGIC is designed specifically to read and interpret sea ice images using ice maps provided by the Canadian Ice Service (CIS), a government agency based in Ottawa. The key algorithm, IRGS, is able to accurately segment SAR imagery into consistent regions for subsequent labelling. Currently, ice maps are produced manually at CIS. CIS staff create the ice maps; however, certain features cannot be accurately produced by a human operator, and computer vision algorithms are necessary for such tasks. In addition, IRGS is being used to distinguish ice types from open water without the use of interpreted coarse sea ice maps. The imagery is generated by RADARSAT-1 and RADARSAT-2, Canadian SAR satellites whose images provide unique information compared with visible-band cameras. The project partners the University of Waterloo with the Canadian company MDA, which supplies the remote sensing imagery, and CIS, which provides the operational environment and analytical know-how. This is an excellent example of the university-industry-government synergy that is furthering research in Canada.
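The segment-then-label workflow the MAGIC abstract describes can be illustrated, in a deliberately simplified form, by a two-class intensity threshold (Otsu's method) separating ice from open water in a synthetic scene. This is a toy sketch under stated assumptions: it is not the IRGS region-growing algorithm used by MAGIC, and the synthetic image merely stands in for real SAR backscatter.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Pick the intensity threshold that maximizes between-class
    variance over the image histogram (Otsu's classic criterion).
    A toy stand-in for real SAR segmentation; MAGIC itself uses the
    IRGS region-growing algorithm, not a global threshold."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                # class-0 probability up to each bin
    mu = np.cumsum(p * centers)      # cumulative mean up to each bin
    mu_total = mu[-1]
    with np.errstate(invalid="ignore", divide="ignore"):
        between = (mu_total * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(between)]

# Synthetic "scene": dark open water surrounding a brighter ice floe.
rng = np.random.default_rng(0)
scene = rng.normal(0.2, 0.05, (64, 64))                # water backscatter
scene[16:48, 16:48] = rng.normal(0.7, 0.05, (32, 32))  # ice floe
t = otsu_threshold(scene)
ice_mask = scene > t                # True where the pixel is labelled ice
```

Real ice charting must additionally cope with speckle, incidence-angle effects and multiple ice types, which is precisely why region-based algorithms such as IRGS outperform global thresholds on operational SAR scenes.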
A Distributed and Interoperable Sensor Web System Demonstration (SS-PIV-89)
Steve Liang

The next generation of web-enabled sensor networks (the Sensor Web) is anticipated to mimic the growth of the WWW. It is expected to grow exponentially in complexity, with millions of heterogeneous nodes and billions of users connected at any given time. With the rapidly increasing number of large-scale sensor network deployments, the vision of a World-Wide Sensor Web (WSW) is becoming a reality. This new earth-observation system opens a new avenue to fast assimilation of data from various sensors and to accurate analysis and informed decision making. This demonstration will showcase two sensor web systems. First, we demonstrate the GeoCENS (Geospatial Cyberinfrastructure for Environmental Sensing) system. With GeoCENS, users can maneuver a 3D sensor web browser, within a single virtual globe, to discover, visualize, access and share heterogeneous and ubiquitous sensing resources and other relevant information. As of today (April 2012), researchers have access through GeoCENS to data from more than 60,000 sensors, 1,750 real-time sensors and 2,800 Web Map Servers, and many more are being added each month. The initial idea for such a sensor web system originated in an early GEOIDE project (MNG#BER), titled "Web-based Sensing Networks for Environment Applications" (2003-2005). Second, we will demonstrate the TrafficPulse system, a participatory mobile urban sensor web that harnesses the voluntary use of smartphones as a cost-effective, ready-made and pervasive sensor web, allowing us to query, model, understand and visualize the city's mobility in near real time. TrafficPulse is the research outcome of a current GEOIDE project (SII#89).
Improved Global Web Map Visualization (SSII-109)
Spiros Pagiatakis, Mir Abolfazl Mostafavi, Costas Armenakis, Gordon Plunkett, Dave Horwood, Steve Liang, Reda Yaagoubi, Han-Fang Tsai, Ignat Girin, and Lisa Anes

With the worldwide use of web mapping, the challenge is how to efficiently and effectively portray geographic information to all users in a clear, undistorted and understandable form, no matter what location on Earth is being viewed. Web mapping technology is not yet at the ubiquitous level, however, due in part to the well-known large systematic distortions introduced by the currently ubiquitous "Web Mercator Projection" (WMP), particularly near or at the poles (e.g., Northern Canada, Antarctica), where the Earth cannot be represented in the Mercator projection at all. While the general public may not know about or understand these distortions, knowledgeable users are concerned and often resort to implementing non-standard, unique, incompatible and inefficient systems to mitigate the issues. Organizations that provide consistent national coverage (such as Canadian federal government departments) are particularly concerned with this problem, because "what works in Toronto must also work in Tuktoyaktuk". This project aims at building a theoretical foundation and technical solutions for visualizing the entire Earth on the web in a fast and seamless manner suitable for light client display devices. The proposed methodology, from data to map projection to cache tiling and visualization, and its applicability will be discussed and demonstrated.

3DTown: The Urban Awareness Project (PIV-17)
James Elder, Claire Samson, John Zelek, Ayman Habib, Jim Little, Gunho Sohn, Frank Ferrie and David Clausi

See abstract for talk on same project.
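The systematic distortion the SSII-109 abstract refers to is easy to quantify: on a Mercator map the local linear scale factor grows as the secant of the latitude, diverging at the poles (which is why they cannot be shown at all). The following small computation is illustrative only, not code from the project:

```python
import math

def mercator_scale_factor(lat_deg):
    """Local linear scale exaggeration of the (Web) Mercator projection
    at a given latitude: k = sec(latitude). It equals 1.0 at the
    equator and grows without bound toward the poles."""
    return 1.0 / math.cos(math.radians(lat_deg))

# Areas are exaggerated by k squared; compare a few latitudes.
for place, lat in [("Equator", 0.0), ("Toronto", 43.7),
                   ("Tuktoyaktuk", 69.4), ("Latitude 85 (tile edge)", 85.0)]:
    k = mercator_scale_factor(lat)
    print(f"{place}: linear scale ~{k:.2f}x, areas ~{k**2:.1f}x")
```

This is the arithmetic behind "what works in Toronto must also work in Tuktoyaktuk": the same national dataset is stretched several times more strongly in the Arctic than in southern Canada, and standard Web Mercator tile schemes simply cut off at about latitude 85.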
Parallel Session 6.3 (Room 2103)
Experiences & Case Studies III

Progress in the implementation of the Chilean Geospatial Data Infrastructure (141)
Esteban Toha

SNIT is a permanent institutional coordination mechanism for optimizing the management of geospatial information in the country; it is, in effect, the Geospatial Data Infrastructure of Chile. SNIT was created by Supreme Decree No. 28 of the Ministry of National Property, which gives that ministry the coordinating role and the Minister of National Property the presidency of the Council of Ministers for Territorial Information. Among the main functions of SNIT are: coordinating actions at the national and regional levels aimed at strengthening the institutional support required by the Spatial Data Infrastructure of Chile (SDI); providing timely and expeditious access to the country's geospatial information; promoting the use of geospatial information by State institutions in generating public policies and in decision making; providing a guiding framework of norms, standards and technical specifications for all institutions that generate and use geospatial information; and strengthening and building capacity among producers, users and decision makers. Currently, the work of SNIT is structured around the components of geospatial data infrastructures: institutional framework, information, technology and human capital. For each of these components a set of activities is ongoing, involving the Ministry of National Property as coordinating body and the many agencies that generate and use geospatial information. In the field of the institutional framework, one of the core products of the work in 2011 was a proposal for a national geospatial information policy, which identifies the key issues the government must address in terms of information management.
With regard to information, work is focused on making information available in public agencies and on producing technical documents on standards and technical specifications. It is worth mentioning that the country is undertaking a national project to elaborate Chilean norms for geospatial data, which will be completed in 2012. In the area of technology, efforts are directed towards the maintenance and continuous improvement of the platforms for publishing geospatial information, in particular the Geoportal of Chile and the supplier nodes. Progress has been made on technical documents concerning the implementation of OGC standards and specifications for Web Map Services. In terms of human capital, the SNIT Executive Secretariat provides support by identifying and coordinating the supply of, and demand for, training in various areas of geospatial information. Professionals in the Executive Secretariat run workshops throughout the country, and also identify training opportunities within the public sector, where experts in specific fields transfer knowledge to their peers through workshops and seminars.

GeoConnections and the Canadian Geospatial Data Infrastructure (CGDI) - An SDI Success Story (173)
David Harper, Denis Poliquin, Elizabeth Leblanc

GeoConnections connects people to geography to enhance planning, analysis and policy development. We help decision makers from all levels of government, the private sector, non-government organizations and academia make better decisions on social, economic and environmental priorities by improving the sharing, access and use of Canadian geospatial information: information tied to geographic locations in Canada. These benefits are realized through the establishment of a spatial data infrastructure (SDI), an online environment conducive to the free flow of geographic information.
GeoConnections provides the necessary geospatial leadership to foster the development of the Canadian Geospatial Data Infrastructure (CGDI), Canada's own SDI. The GeoConnections program is a national initiative led by Natural Resources Canada and funded by the Government of Canada.

A Spatial Data Infrastructure for Senegal (221)
Claude Garneau, Denis Haché, Tamsir Ba

Land management is a major issue for Senegal. The impacts of climate change are being felt in key areas such as agriculture, water resources and urban development. At the same time, the expectations of governance, security and sustainable economic development, which are core concerns of the African Union, are putting greater pressure on the organizations involved in land management. Implementing a global spatial data infrastructure for Senegal (IDG/S) will go a long way towards addressing these issues. Such an infrastructure will make it easier for Senegalese government agencies, as well as businesses and citizens, to access spatial information, particularly via mobile devices, which are widespread in Senegal. Its services will provide access to spatial information of national interest and allow it to be displayed and analyzed, and every Senegalese government agency will be able to superimpose its own spatial information on it using recognized interoperability standards. The project is driven in Senegal by the Agence de développement informatique de l'État (ADIE), the Direction des travaux géographiques et cartographiques (DTGC) and the Centre de suivi écologique (CSE). On the Canadian side, the drivers are the Canadian International Development Agency (CIDA) and Natural Resources Canada. This presentation reports the results of work by a joint Senegalese and Canadian team, between November 2010 and April 2012, to define and promote the SDI.
Findings derived from existing programs, priorities, policies and principles, together with interviews with some 50 agencies, allowed the building blocks of a national SDI to be set. The SDI has been defined to ensure proper governance and availability of geospatial data for all groups and communities. The team then established the foundation of a solution architecture that fulfills the many requirements and respects the capabilities and specificities of Senegalese society. Among other elements, a service-based architecture, interoperability standards and Web 2.0 concepts were introduced as key components. The presentation also covers the implementation plan and the change management initiatives needed to facilitate government transformation, the involvement of all partners, whether governmental agencies, the private sector or academia, and the sustainability of Senegal's SDI. The plan presents medium- and long-term perspectives. The SDI is thus positioned as a large, strategic project that will allow Senegal to optimize the use of its resources and land from the perspective of sustainability, social and economic development, and equity.

Evaluation Of The Proposal Of A Spatial Data Infrastructure In Bahia-Brazil And Its Potential Repercussions To Environmental Impact Assessments (205)
Fabiola Andrade Souza

This paper results from master's research that assessed the potential of, and the main restrictions on, using geographic data available through a Spatial Data Infrastructure (SDI) to facilitate the development of spatial representations in Environmental Impact Assessments (EIA) and the associated Environmental Impact Reports (EIR). The aim is to promote compliance in the studies, communication with society and the conduct of environmental licensing, leading to more assertive decision making by public management, particularly with regard to Brazilian law.
In this sense, an SDI model consisting of five elements was considered: data and metadata, technology, standards and guidelines, institutional-political arrangements, and actors. With reference to this model, we evaluated the Spatial Data Infrastructure proposed under the Government of the State of Bahia, Brazil (SDI-Bahia), and the basic geographic information needed for preparing spatial representations in an EIA/EIR, drawing on considerations raised by experts in environmental studies and on the analysis of existing Environmental Impact Reports for the state territory. It can be concluded that although the proposed SDI-Bahia offers technology, standardization and geographic data of interest to the environmental analyst, in line with actions implemented or in progress elsewhere in the world (for example, in the USA and Europe), its policy definitions and the effective participation of stakeholders still demand a greater level of detail and commitment before its implementation can occur and be genuinely useful for developing spatial representations for Environmental Impact Assessments and Environmental Impact Reports in the state.

Spatially Enabled Land – Marine Interface towards a Seamless Platform (242)
Sheelan Vaez, Abbas Rajabifard [paper: refereed IJSDIR article]

There is a need to build a seamless platform that underpins off-shore rights and responsibilities and sensibly matches its on-shore counterpart. This platform would facilitate more efficient and effective decision-making capabilities for any jurisdiction with a land – marine interface. This paper discusses the potential for adding the marine and coastal dimensions to a Spatial Data Infrastructure (SDI), in the context of a seamless model resulting in a better and more integrated management of the land – marine interface. It provides insight into the design and development of the Seamless SDI model by introducing a Seamless SDI conceptual model.
The Seamless SDI class and its inherited characteristics and properties will be discussed. In addition to the conceptual phase, the development of a Seamless SDI model consists of two further stages: a design phase and an implementation phase. The Use Case Diagram and Object Diagram of the Enterprise Viewpoint will be developed. The paper further highlights the importance of creating appropriate Seamless SDI governance structures that are both understood and accepted. Finally, the feasibility of a seamless platform for spatial enablement will be addressed. This would help to develop an extended framework to support a spatially enabled jurisdiction covering the land – marine interface. Ideally this extended framework would result in harmonised and universal access, sharing and integration of coastal, marine and terrestrial spatial datasets across regions and disciplines.

Parallel Session 6.4 (Room 205B)
Spatially Enabling Citizens I

An Assessment of the Contribution of Volunteered Geographic Information during Recent Natural Disasters (227)
Kevin McDougall [paper: refereed book chapter]

In recent years, improved information communication infrastructure (primarily the internet), the growth of publicly available spatially enabled applications (such as Google Earth) and accessible positioning technology (GPS) have combined to enable users from many differing and diverse backgrounds to share geographically referenced information. In an increasingly spatially enabled society, user-generated or volunteered geographic information is now becoming the first point of response in the immediate aftermath of a natural disaster. With the prediction of more severe weather events in the coming decades, emergency response personnel must be prepared to react quickly and utilize the latest information and communication technologies where appropriate.
Crowdsourced mapping platforms can be operational within hours of a natural disaster occurring and can utilize the information provided by citizens on the ground to collect timely and relevant information about the disaster. Information can be contributed through multiple channels to inform others of the impact of the event. This paper examines the growth and development of volunteered geographic information over recent years. The use of volunteered information and social networking in three natural disasters during 2011 is explored. The timeliness of the responses, the types of information volunteered and the impact of the information during and after the natural disasters are assessed. The relevance of these initiatives to the ongoing development of spatial data infrastructures, and their contribution to formal response efforts and authoritative mapping, is discussed.

The Integration of Crowdsourced Data into Spatial Data Infrastructures: issues and future prospects (126)
Barbara Poore, Eric Wolf, Greg Matthews

The National Map is a collaborative effort among the US Geological Survey (USGS) and other federal, state, and local partners to deliver topographic information for the United States. It is a primary component of the US National Spatial Data Infrastructure. Much of the vector data that comprise layers in The National Map were drawn from the 1:24,000-scale topographic maps published by the USGS, and improved upon by data collection efforts of the USGS and other partners. Realizing that many maps were outdated when digitized, the USGS has sponsored volunteer data collection activities since the mid 1990s. The Earth Science Corps allowed citizens to “adopt a quad,” updating existing features and collecting new information on a paper map.
This program continued into the digital age as The National Map Corps, first with citizens collecting point-based structures data using GPS, and finally through an online mapping system. The program was suspended for budgetary reasons in 2008. Since then, web- and mobile-based technologies have made it easy for non-professionals to create, combine, and share maps. Recent events such as the earthquake that devastated Haiti in 2010 have shown that so-called volunteered geographic information (VGI) can swiftly produce quality geographic information. In light of this rapidly changing technical landscape, the increasing use of social networking, and mandates for more transparency and citizen involvement in government, the USGS is revitalizing its volunteer program. This paper describes a pilot project the USGS has launched to test whether volunteered geographic information can be successfully integrated into spatial data infrastructures produced by experts. Questions to be addressed over the course of the project include: What is the quality of VGI? What data types are suited to volunteer collection? How can VGI be integrated with authoritative data? How can volunteers be motivated and encouraged? What are the costs and benefits of VGI compared to standard data collection methods? How sustainable is VGI? We will present an analysis of data from two phases of this project. The first phase tested the suitability of the OpenStreetMap software suite and the Potlatch editor for simultaneous editing of transportation data in collaboration with the Kansas Data Access and Support Center. In the second phase, student volunteers from two Denver-area universities edited point-based structures data and added new points. During the two-month duration of the project, over 1,000 points were improved or added.
We will present quantitative analyses of the quality of these data and compare them to volunteered data submitted to OpenStreetMap over the same geographic area and time frame. This project reveals the importance of the social aspects of crowdsourcing geographic data, and the critical role of aligning work practices between data-producing organizations and citizen contributors. We emerge with more questions than answers as we look to the future. Will crowdsourcing change spatial data infrastructures, or vice versa?

Crowdsourcing Support of Land Administration (30)
Robin McLaren

Land Administration Systems (LAS) provide the formal governance structures within a nation that define and protect rights in land, including non-formal or customary institutions. Despite their pivotal support of economic development, effective and comprehensive LAS exist in only 50 mostly OECD countries, and only 25 percent of the world's estimated 6 billion land parcels are formally registered in LAS. This leaves a large section of the world's population with reduced levels of security of tenure, trapping many in poverty. Missing and dysfunctional LAS can precipitate problems such as conflicts over ownership, land grabs, environmental degradation, reduced food security and social unrest. This security of tenure gap cannot be quickly filled using the current model for registering properties, which is dominated by land professionals. There are simply not enough land professionals world-wide, even with access to new technologies. To quickly reduce this inequality we need new, innovative and scalable approaches to solve this fundamental problem and global challenge. This paper explores one potential solution to the security of tenure gap: ‘crowdsourcing’. Crowdsourcing uses the Internet and on-line tools to get work done by obtaining input and stimulating action from citizen volunteers.
It is currently used to support scientific evidence gathering and to record events in disaster management, as witnessed in the recent Haiti and Libya crises. These applications are emerging because society is increasingly spatially enabled. Establishing a partnership between land professionals and citizens would encourage and support citizens in directly capturing and maintaining information about their land rights. Although citizens could use many devices to capture their land rights information, this paper advocates the use of mobile phone technology. Due to high ownership levels (5 billion licenses world-wide) and widespread geographic coverage (90 percent of the world's population can obtain a signal), especially in developing countries, mobile phones are an excellent channel for obtaining crowdsourced land administration information. Frugal innovation is making them affordable for all, especially in developing countries, where a new generation of information services in health and agriculture, for example, is turning the mobile phone into a global development tool. Mobile phones are progressively integrating satellite positioning, digital cameras and video capabilities. They provide citizens with the opportunity to participate directly in the full range of land administration processes, from videoing property boundaries to secure payment of land administration fees using ‘mobile’ banking. But even today's simpler phones offer opportunities to participate in crowdsourcing. A key challenge in this innovative approach is how to ensure the authenticity of the crowdsourced land rights information. The paper explores the applicability of the approaches adopted by wikis, e-commerce and other mobile information services, and recommends the initial use of trusted intermediaries within communities who have been trained and have worked with local land professionals.
This approach has the potential to provide a good level of authenticity and trust in the crowdsourced information and would allow a significant network of ‘experts’ to be built across communities. To optimise scarce resources, these intermediaries could be involved in a range of other information services, such as health, water management and agriculture. Crowdsourcing provides an opportunity for land professionals to forge a new relationship with citizens to jointly solve the global challenge of security of tenure. This citizen collaboration model encourages land professionals to rethink how land administration services are designed and delivered, resulting in the more inclusive, 21st-century aim of supporting land administration by the people, for the people.

Geocollaboration - Mapping Bucharest Roads Condition - A VGI Approach (90)
Lucian Zavate, Vlad Teodor, Cristian Vasile

Driving in Bucharest these days means a lot of stress on your car due to road conditions. Developing a mobile application that measures and logs car stress during daily drives, links the information to real-world coordinates and sends the measurements to a central geodatabase for analysis and publication could help the municipality understand road conditions. The application is free to use and allows anyone to measure the stress on his or her car. The user can upload the results to the central geodatabase, where accelerometer and gyroscope data are analyzed to produce and publish up-to-date maps depicting Bucharest road conditions. These results can be used by the municipality, along with indoor data, to determine where roads need to be fixed or to monitor road quality degradation over time. The project emphasizes the power of volunteered geographic information in local communities and spatially enables citizens to collaborate and contribute to creating a spatial data infrastructure.
This is just the entry point, the interface between the citizens and the local government spatial data infrastructure, and it provides the means to gather and disseminate geographic information in a very easy and intuitive way. “It is every man's obligation to put back into the world at least the equivalent of what he takes out of it,” and in this case giving is receiving. The mobile application can both send and receive geospatial information. Based on volunteered information such as road condition, the municipality performs spatial analysis and schedules road works. Using the same platform, road condition data and scheduled road works are sent back to the user through the mobile application, allowing better route planning to avoid current road works or bad roads. The mobile application is based on the Android platform and Esri's ArcGIS for Android API, and supports real-time GPS differential corrections (EGNOS) for better positioning. Future plans include releasing the application for other mobile platforms such as Windows Phone, Windows Mobile and Apple iOS, and releasing a professional version linked to the car's OBD-II interface for a better evaluation of real car stress.

Parallel Session 6.5 (Room 2104B)
Spatially Enabling Industry II

An Extended Complex Event Processing Engine to Qualitatively Determine Spatiotemporal Patterns (201)
Foued Barouni, Bernard Moulin [paper: refereed proceedings article]

Spatiotemporal events are widely used in data acquisition systems and occupy an important place in spatiotemporal databases. When spatiotemporal events are combined and respect a given structure, they form interesting situations called spatiotemporal patterns. Complex event processing has been used to represent such spatiotemporal patterns and to detect them in the event cloud. In this paper, we propose an extension of a complex event processing engine for qualitative spatiotemporal patterns.
Fuzzy spatial relations are used to express the spatial relationships between events' spatial attributes and to improve the expressiveness of patterns.

The Performance of a Tight INS/GNSS/Photogrammetric Integration Scheme for Land Based MMS Applications in GNSS Denied Environments (185)
Chien-Hsun Chu, Kai-Wei Chiang, Jiann-Yeou Rau, Yi-Hsin Tseng, Jia-Hsun Chen, Jie-Chung Chen

The increasing demand for up-to-date 3-D geographic information systems (GIS) in planning, transportation and utility management applications poses significant challenges to the Geomatics community. The key idea is to improve spatial data correctness through frequent updates with efficient acquisition tools, without increasing costs while maintaining sufficient accuracy. The land-based Mobile Mapping System (MMS) is designed for these challenges. A land-based MMS comprises a Positioning and Orientation System (POS) and an imaging system. The first component integrates an Inertial Navigation System (INS) with a Global Navigation Satellite System (GNSS); the second consists of digital cameras or a Lidar system. A land-based MMS can survey the 3-D coordinates of objects in the mapping frame at the moment the data are taken, without using any ground control points, a technique known as direct georeferencing (DG). The most common commercially available INS/GNSS integration strategy is the loosely coupled (LC) scheme, in which the GNSS-derived positions and velocities are integrated with the INS-derived navigation information. The LC scheme has a simple and flexible architecture, but its limitation is that the GNSS Kalman filter (KF) cannot provide position and velocity updates for the INS KF if fewer than four satellites are tracked by the GNSS receiver. Another integration strategy is the tightly coupled (TC) scheme, which processes GNSS raw measurements rather than GNSS navigation solutions to execute measurement updates.
The TC scheme performs well even if fewer than four satellites are tracked. In the TC scheme there is only one KF, which processes the accelerations and angular rates from the inertial sensors for navigation. The KF also processes the pseudo-range, pseudo-range rate and carrier phase measurements from the GNSS receiver. These measurements are used by the filter not only to estimate the navigation solutions but also the inertial sensor correction parameters, which compensate for the errors of the accelerometers and gyros. Although an INS/GNSS integrated system can operate seamlessly during GNSS outages, its accuracy degrades with time. In addition, the frequent and long GNSS outages typical of urban canyons degrade the accuracy of the POS used by the land-based MMS and thus significantly deteriorate the accuracy of DG operation. The overall objective of this paper is to provide a scheme that tightly integrates INS, GNSS and photogrammetry for land-based MMS applications, delivering sufficient and stable POS solutions during GNSS outages. In traditional photogrammetric practice, numerous ground control points are used to compute the Exterior Orientation Parameters (EOPs) of the cameras by bundle adjustment. The key idea here is to derive the INS center position and attitude and to reconstruct the 3-D trajectory and 3-D object space from the cameras' EOPs. The proposed algorithm is verified using field test data collected in GNSS-denied environments, and the preliminary results presented in this study illustrate that the proposed algorithm provides a 60% improvement in positioning and orientation accuracy in the cities of Taipei and Tainan.

Developing and Testing a Real Time INS/GNSS Integrated System with the Aid of Automatic ZUPT and ZIHR (62)
Cheng-Yueh Liu, Yu-Wen Huang, Kai-Wei Chiang

Real-time INS/GNSS integration has been one of the most popular methodologies in the field of navigation technology.
Integrated INS/GNSS systems can overcome the shortcomings of stand-alone GNSS or INS and thus provide superior performance. The position and velocity information from GNSS is an excellent external aid for updating the INS and improving its long-term accuracy. However, when the INS is unaided by GNSS due to signal blockage, for instance in buildings, tunnels or underground, the gyro errors grow with time from the moment the navigation system is switched on. Therefore, several motion-constraint methods, such as zero velocity update (ZUPT) and zero integrated heading rate (ZIHR), have been proposed to improve positioning and orientation accuracy during long GNSS outages. These constraints are implemented as update sources in the extended Kalman filter (EKF) to estimate the inertial sensor correction parameters, which are then used to compensate for the errors of the accelerometers and gyros. In a post-processing implementation, it is easy to identify the intervals in which ZUPT and ZIHR can be applied by inspecting the entire collected data set. Unfortunately, in a real-time scenario it is difficult to determine when, and for how long, a system should apply ZUPT and ZIHR. This paper addresses this problem by providing a novel and robust mechanism for utilizing ZUPT and ZIHR constraints in a real-time INS/GNSS integrated system. The performance evaluation shows a 58% improvement in the maximum position error, compared to a system without ZUPT and ZIHR, when a 60 s GNSS outage is encountered. Consequently, the proposed scheme can provide consistent navigation solutions with sufficient sustainability in urban areas.
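The abstract above describes ZUPT as a pseudo-measurement fed to an EKF whenever the platform is stationary. A minimal sketch of that idea, in one dimension, is shown below. It is not the authors' system: the linear state model, the noise values and the simple variance-based stationarity detector are all illustrative assumptions, but they show why a zero-velocity pseudo-measurement bounds the velocity error that would otherwise grow from accelerometer bias.

```python
import numpy as np

def predict(x, P, accel, dt, q):
    """Propagate state [position, velocity] with a measured acceleration."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    u = np.array([0.5 * accel * dt**2, accel * dt])
    x = F @ x + u
    P = F @ P @ F.T + q * np.eye(2)
    return x, P

def zupt_update(x, P, r=1e-4):
    """Zero-velocity pseudo-measurement: 'observe' velocity = 0."""
    H = np.array([[0.0, 1.0]])
    S = H @ P @ H.T + r                     # innovation covariance
    K = P @ H.T / S                         # Kalman gain
    x = x + (K * (0.0 - H @ x)).ravel()     # pull velocity towards zero
    P = (np.eye(2) - K @ H) @ P
    return x, P

def is_stationary(accel_window, threshold=0.01):
    """Toy stationarity detector: low variance of recent accelerations."""
    return np.var(accel_window) < threshold

# Simulate a stationary platform whose accelerometer has a constant bias.
bias = 0.05  # m/s^2, illustrative
x, P = np.zeros(2), np.eye(2)
x_free = np.zeros(2)            # same propagation, but without ZUPT
window = []
for k in range(100):
    a = bias                    # true acceleration is 0; sensor reports bias
    x, P = predict(x, P, a, dt=0.1, q=1e-4)
    x_free, _ = predict(x_free, np.eye(2), a, dt=0.1, q=1e-4)
    window = (window + [a])[-10:]
    if len(window) == 10 and is_stationary(window):
        x, P = zupt_update(x, P)

print(abs(x[1]), abs(x_free[1]))  # ZUPT-aided velocity error stays bounded
```

The free-inertial velocity error grows linearly with time (bias × dt per step), while the ZUPT-aided filter repeatedly resets it; the real-time difficulty the paper tackles is deciding, online, when the detector may legitimately fire.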
The Performance Evaluation of Low Cost MEMS IMU/GPS Integrated Positioning and Orientation Systems Using Novel DBPNNs Embedded Fusion Algorithms (67)
Kuan-Yun Chen, Tsui-Ping Chen, Jhen-Kai Liao, Kai-Wei Chiang

Mobile mapping systems (MMSs) have been widely applied for acquiring spatial information in applications such as spatial information systems and 3D city models. Nowadays the most common technologies used for the positioning and orientation of a mobile mapping system employ the Global Positioning System (GPS) as the major positioning sensor and an Inertial Navigation System (INS) as the major orientation sensor. In the classical approach, the limitations of the Kalman Filter (KF) and the price of the overall multi-sensor system have limited the popularization of most land-based mobile mapping applications. Intelligent sensor positioning and orientation schemes have been proposed, consisting of Multi-layer Feed-forward Neural Networks (MFNNs), one of the best-known types of Artificial Neural Networks (ANNs), combined with a smoother, to enhance the performance of low-cost Micro Electro Mechanical Systems (MEMS) Inertial Measurement Unit (IMU) and GPS integrated systems; however, automating the MFNN has proven harder than initially expected. Therefore, this study not only addresses the insufficient automation of the conventional methodology applied in MFNN-smoother algorithms for MEMS IMU/GPS integrated systems proposed in previous studies, but also develops and analyzes alternative intelligent sensor positioning and orientation schemes that integrate various sensors in more automatic ways. The proposed schemes are implemented using one of the best-known constructive neural networks, Dynamic Back Propagation Neural Networks (DBPNNs), to overcome the limitations of conventional smoother-based techniques as well as the previously developed MFNN-smoother schemes.
The DBPNNs applied also have the advantage of a more flexible topology compared to the MFNNs. The preliminary results presented in this article illustrate the effectiveness of the proposed schemes over smoother algorithms as well as the MFNN-smoother schemes, based on the experimental data utilized in this study.

The Comparative Analysis of Non-Linear Filtering Technologies for Land-Based Mobile Mapping System Applications with Low-Cost MEMS Inertial Systems (23)
Thanh-Trung Duong, Kai-Wei Chiang, Yun-Wen Huang, Hsuan Han Chen, Jhen-Kai Liao, Cheng An Lin

In a Mobile Mapping System (MMS), the integration of an Inertial Navigation System and the Global Positioning System (INS/GPS) is widely applied to determine the time-variable position and orientation parameters. Due to their low cost and small size, Micro-Electro-Mechanical Inertial Measurement Units (MEMS IMUs) in INS/GPS integration are a trend for the wide development of MMSs in commercial applications. Linear estimation strategies, the Kalman Filter (KF) or Extended Kalman Filter (EKF), are used as the optimal estimation tools for real-time INS/GPS integrated kinematic position and orientation determination. Optimal smoothing algorithms, also known as smoothers, have been applied for accurate positioning and orientation parameter determination through post-processing in most surveying and mobile mapping applications with integrated sensors. In contrast to the KF, smoothing is implemented after all KF estimates have been computed, using past, present and future data. As verified by our previous research, the magnitudes of positional and orientation errors during GPS outages can be improved significantly after applying one of these optimal smoothing algorithms. However, the magnitude of the residual error also depends on the quality of the inertial sensors, the dynamics of the vehicle and the length of the GPS signal outage.
Therefore, the reduction of the remaining positional and orientation errors becomes critical when integrating a low-cost MEMS IMU with GPS for land-based mobile mapping applications. The Loosely Coupled (LC) approach is commonly used for INS/GPS integration due to the simplicity with which it derives navigation information. However, with low-cost devices and a simple integration strategy, the system performs poorly, particularly in cases of initialization, uncertainty and long GPS signal outages. To improve the accuracy of the MMS, this paper develops, analyzes and improves non-linear filtering technologies, based on sigma points and particle filters, for effective application in land-based MMSs. This research also proposes solutions to improve the processing time, one of the main disadvantages of non-linear estimation strategies. For the field test, a land-based mobile mapping van is used for collecting data. A set of sensors comprising a tactical-grade IMU, a NovAtel SPAN-CPT, and a dual-frequency GPS receiver, a NovAtel ProPak V3, is mounted as the reference system. A low-cost IMU, a BEI C-MIGITS III, along with a dual-frequency GPS receiver, serves as the test sensors. The test data sets are collected in different environments, such as open areas and urban canyons. The acquired data are then processed by conventional and non-linear filtering algorithms in two modes: loosely coupled and tightly coupled. The processing results are then compared and analyzed across several scenarios. The results show that the performance of the non-linear estimation strategies is better than that of conventional filtering such as the KF or EKF and their smoothers, especially in cases of uncertainty and long GPS signal outages. The improvement in positional accuracy ranges from 10% to 60%, depending on the multi-sensor integration architecture and operating conditions.
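The abstract above compares EKF-style linearization against sigma-point and particle-filter methods, which handle non-linear measurement models without linearizing them. A toy bootstrap particle filter illustrating that class of method is sketched below; the 1-D constant-velocity model, the non-linear range-style measurement and all constants are invented for the illustration and are not from the paper.

```python
import numpy as np

# Toy bootstrap particle filter: propagate particles through the motion
# model, weight them by the likelihood of a non-linear measurement, and
# resample. No linearization of the measurement is required.

rng = np.random.default_rng(0)

def simulate(steps=50, dt=1.0, q=0.1, r=0.5):
    """Constant-velocity truth with non-linear measurement z = |x| + noise."""
    x = np.array([10.0, 1.0])  # [position, velocity], illustrative
    xs, zs = [], []
    for _ in range(steps):
        x = x + np.array([x[1] * dt, 0.0]) + rng.normal(0, q, 2)
        xs.append(x.copy())
        zs.append(abs(x[0]) + rng.normal(0, r))
    return np.array(xs), np.array(zs)

def particle_filter(zs, n=500, dt=1.0, q=0.1, r=0.5):
    """Bootstrap PF: predict, weight, estimate, resample."""
    parts = rng.normal([10.0, 1.0], [1.0, 0.5], (n, 2))
    est = []
    for z in zs:
        # predict: propagate every particle with process noise
        parts[:, 0] += parts[:, 1] * dt + rng.normal(0, q, n)
        parts[:, 1] += rng.normal(0, q, n)
        # weight: Gaussian likelihood of the non-linear measurement
        w = np.exp(-0.5 * ((z - np.abs(parts[:, 0])) / r) ** 2)
        w /= w.sum()
        est.append(w @ parts[:, 0])          # weighted-mean estimate
        # multinomial resampling to avoid weight degeneracy
        parts = parts[rng.choice(n, n, p=w)]
    return np.array(est)

truth, zs = simulate()
est = particle_filter(zs)
rmse = np.sqrt(np.mean((est - truth[:, 0]) ** 2))
print(rmse)
```

The cost the paper highlights is visible here too: every step touches all n particles, which is why the authors also propose ways to reduce the processing time of non-linear estimators.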
Parallel Session 6.6 (Room 205C)
3DGeoInfo: 3D Geometry and Topology

Representing 3D topography with a star-based data structure
Hugo Ledoux and Martijn Meijers

For storing and modelling three-dimensional topographic objects (e.g. buildings, roads, dykes and the terrain), tetrahedralisations have been proposed as an alternative to boundary representations. While in theory they have several advantages, current implementations are either not space efficient or do not store topological relationships (which makes spatial analysis and updating slow, or requires the use of a costly 3D spatial index). We discuss in this paper an alternative data structure for storing tetrahedralisations in a DBMS. It is based on the idea of storing only the vertices and the stars of edges; triangles and tetrahedra are represented implicitly. As we demonstrate with one real-world example, our structure is around 20% more compact than implemented alternatives, it permits us to store attributes for any primitive, and it has the added benefit of being topological. The structure can be easily implemented in most DBMSs (we describe our implementation in PostgreSQL) and we present some of the engineering choices we made for the implementation.

Can Topological Pre-Culling of Faces Improve Rendering Performance of City Models in Google Earth?
Claire Ellul

3D city models are becoming more prevalent and have many applications, including city walk-throughs or fly-throughs to show what a new building would look like in situ, or whether a view or light will be blocked by a new structure, as well as flood modeling and satellite and signal modeling. Often these models are created by extruding 2D topographic mapping, and they can contain many thousands of polyhedra, which in turn leads to performance issues when attempting to visualize such models in virtual earth applications such as Google Earth.
This paper presents the results of a series of tests to determine whether a topological approach to pre-culling hidden faces from the model can bring about performance improvements. Such an approach could also be seen as one step towards the generalization of such models to support multiple levels of detail.

Revealing the Benefits of 3D Topology on Under-Specified Geometries in Geomorphology
Marc-Oliver Löwner

The science of geomorphology works on natural 3D landforms, including the change of landforms and the processes causing these changes. The main concepts of geomorphology, i.e. the sediment budget and the sediment cascade approach, can definitely be supported by introducing the 3D geometrical and topological specifications of the Open Geospatial Consortium. ISO 19107, Spatial Schema, implements the OGC's Abstract Specification. It enables the modelling of real-world 3D phenomena and their representation as formal information models. Unfortunately, the OGC's concepts are not widely applied in the science of geomorphology. In this contribution we show the explicit benefit of 3D topology for the science of geomorphology. Analysing the topological relationships of landforms can be related directly to geomorphic insights. This includes, firstly, the process-related accessibility of landforms and therefore material properties, and secondly, the chronological order of landform creation. Further, a simple approach is proposed to exploit the benefits of the abstract specification's 3D topological model when only under-specified geometries are available. Often insufficient data are available on natural landforms to create 3D solids. Following clearly defined geometric conditions, the introduced class _UG_Solid mediates between primitives of lower dimension and a GM_Solid. The latter is the realisation of _UG_Solid that holds the 3D geometry we need to associate with the 3D topological concepts.
Geometric-Semantical Consistency Validation of CityGML Models
Detlev Wagner, Mark Wewetzer, Jürgen Bogdahn, Md. Nazmul Alam, Margitta Pries and Volker Coors
In many domains, data quality is recognized as a key factor for successful business, and quality management is a mandatory process in the production chain. Automated domain-specific tools are widely used to validate business-critical data. Although the workflow for 3D city models is well established, from data acquisition to processing, analysis and visualization, quality management is not yet a standard part of this workflow. Erroneous results and application defects are among the consequences of processing data with an unclear specification. We show that this problem persists even when data are standard compliant, and we develop systematic rules for the validation of geometric-semantical consistency. A test implementation of the rule set and validation results for real-world city models are presented to demonstrate the potential of the approach.

Parallel Session 6.7 (Room 2101) Quebec Government Perspectives

Foncier Québec: Innovating for Quebec Society
Michel Morneau - Ministère des Ressources Naturelles et de la Faune - Foncier Québec
Foncier Québec is a sector of the Ministère des Ressources naturelles et de la Faune. Its mandate is to maintain the registers that record the private and public division of the entire Quebec territory and to make public the land rights exercised on it. In recent years, Foncier Québec has undertaken several technology projects, parts of which involve geomatics. These projects aimed, on the one hand, to increase the organisation's productivity and efficiency and, on the other, to meet client expectations through an expanded offering of online products and services.
The talk will first address three completed projects:
· the 100% Digital Cadastre project, which simplified work processes and increased productivity by modernising the updating of the cadastre;
· the Online Registration Requisition Service project, which will offer clients a user-friendly interface for preparing and validating an application for registration in the land register directly from the online Registre foncier du Québec;
· the land-archives digitisation project, through which all documents deposited in the official archives will be digitised and made accessible online.
Completing these projects lays the groundwork for Foncier Québec's projects over the next five years. These stem from reflection and studies carried out in recent years, inspired by major trends in land-registration systems and public services. The remainder of the talk will present future perspectives, which will take concrete form through, among other things, the adaptation of Foncier Québec's products and services to the needs of a rapidly evolving clientele.

An Innovative Approach to the Governance of Geographic Information in the Government of Quebec
Réjean Gagnon - Ministère des Ressources Naturelles et de la Faune
Geographic information is increasingly recognised as essential to the proper functioning of organisations. The Government of Quebec uses it extensively, across its ministries and agencies (ministères et organismes, MO), to carry out its many missions. It is therefore important that the geographic data used in its decision-making processes be accurate, reliable and precise. To meet this imperative, the Government of Quebec has developed an innovative approach based on cooperation and partnerships in the geospatial domain.
Since 2008 it has been implementing a networked cooperation approach for geographic information, commonly called "ACRIgéo" (Approche de coopération en réseau pour l'information géographique). This approach promotes new ways of working based on networked collaboration. It relies on partnerships for the acquisition, production, updating and dissemination of geographic data. This mode of operation involves pooling shareable geographic data for the benefit of the cooperation's 22 member MOs and of their ACRIgéo network (organisations able to contribute to the approach). It also involves putting common tools and services in place. The partnerships established in recent years concern in particular the acquisition of geographic data (e.g. orthophotography) and the production of value-added databases of common interest (e.g. Adresses Québec, Réseau de transport terrestre du Québec, Réseau hydrographique du Québec). For example, in the case of the Adresses Québec geobase, the involvement of four MOs and of municipal collaborators currently makes it possible to offer clients novel products. Besides meeting the MOs' common needs, these products meet the needs of other public and private organisations, enabling them to develop specific applications based on this governmental geobase. ACRIgéo and the geospatial partnerships put in place contribute substantially to enriching territorial knowledge. They also foster greater synergy within Quebec's geomatics community, notably through the sharing of expertise, all with a view to improving services to citizens.
Better-Informed Citizens in Transportation
Serge Kéna-Cohen, Françoys Labonté - Fujitsu Innovation Centre
Intelligent Transportation Systems (ITS) are applications of new information and communication technologies to the transportation domain. They are called "intelligent" because their development rests on functions generally associated with intelligence: sensory capabilities, memory, communication, information processing and adaptive behaviour. ITS are found in several fields of activity: optimising the use of transport infrastructure, improving safety, and developing services. Their use also fits into a sustainable-development context: these systems help manage mobility, notably by encouraging a shift from the car to more environmentally friendly modes. In most cases, ITS projects are the responsibility of governmental bodies (provinces, municipalities) or para-governmental bodies (transit authorities, emergency services, etc.) that launch initiatives, often in silos, applying the appropriate standards to promote the interoperability of data and services. These initiatives run over long periods and require numerous agreements among the organisations involved. Fujitsu and its partners in the present project believe instead that such initiatives should be centred on the individual and launched with a citizen-focused vision that will act as a catalyst for the organisations involved. The vision is that, at any moment, citizens, whether individuals or businesses, will know the most efficient means and the most opportune time to travel, whatever mode of transport they adopt.
The project developed within the Fujitsu Innovation Centre integrates work done by Fujitsu in Japan and Europe and draws on the geospatial and mobility expertise of Fujitsu Canada and its partners. The tools will be developed in accordance with the applicable standards: NTCIP (National Transportation Communications for ITS Protocol), ATIS (Advanced Traveler Information Systems) and TMDD (Traffic Management Data Dictionary). The talk will present the project's objectives and challenges, the results obtained, and the upcoming steps that will give citizens relevant information at all times.

Photogrammetric Point Clouds as a Substitute for LiDAR in Forestry, Mining and Powerline Vegetation Management
Tony St-Pierre - XEOS Imaging
Over the last three years, XEOS Imaging has developed high-performing algorithms to produce very clean photogrammetric point clouds. These point clouds are now used as a substitute for LiDAR in many applications in forestry, mining and power-line vegetation management. The result is a 75% reduction in budget relative to equivalent LiDAR work, along with shorter data-acquisition delays. The presentation will show examples of real applications in each field of work.

Property and Casualty Insurance and Geomatics
Guillaume Rouleau - Industrial Alliance
(abstract forthcoming)

Parallel Session 7.1 (Room 205A) Spatially Enabling Government VII

Introduction to wet-areas mapping: creating an innovative base layer for rural, municipal and urban planning at high resolution (76)
Jae Ogilvie, Paul Arp, Barry White
Sustainability of hydrological resources, including sensitive aquatic habitats, is at significant risk due to unprecedented land-use challenges. Innovative planning solutions that enhance the economic competitiveness of natural-resource industries, increase regulatory efficiency and contribute to environmental stewardship are urgently needed by government and industry across Canada.
The Forest Watershed Research Centre at the University of New Brunswick has been working in close partnership with many parties, including the Government of Alberta, since 2004 to test the effectiveness of a cartographic depth-to-water mapping tool. The maps predict the location of small water bodies, such as ephemerals and wet areas, that are not currently known to resource planners yet are sensitive to disturbance. Our modelling approach has been adopted by governments and industry in eastern Canada, Maine and Vermont, where it has been particularly helpful in enhancing the sustainability and stewardship of forested landscapes and in reducing operational costs. Successful research trials in Alberta have moved this approach from the research phase to full implementation, and efforts are currently under way to derive cartographic depth-to-water maps for approximately 20.5 million hectares of forested lands in the foothills, boreal and oil sands regions of Alberta based on high-resolution LiDAR (Light Detection and Ranging) DTM data at 1 m resolution. Although our mapping process is multi-scale and has successfully run on DEM datasets ranging from 90 m to 1 m in resolution, this presentation focuses on the technical aspects and challenges that arise when creating these maps from discrete-return LiDAR-derived bare-earth DTM rasters. A brief overview of the cartographic depth-to-water mapping process will be given, as well as many of its potential uses both within Alberta and across Canada. Current research, including the exploitation of LiDAR point clouds for the extraction and potential classification of bogs and wetlands, will also be discussed.
High-resolution flow channels, wet area and cartographic depth-to-water modelling and mapping for arctic and subarctic areas in Canada (170)
Mina Nasr, Mark Castonguay, Jae Ogilvie, Jagtar Bhatti
A GIS-based process was developed to determine flow channels, wet-area regimes and cartographic depth-to-water (DTW) for sections of about 1,000,000 ha (i) in the Fort Simpson area, (ii) east of Great Bear Lake, and (iii) on Bathurst Island in the Arctic. This process combines digital elevation modelling (DEM) with other surface information, including satellite surface images, to delineate and classify uplands and lowlands and related vegetation zonations. The process also assists in delineating watershed borders and the extent of wet area per watershed, including sinks (collapse scars) and depressions, all at high geospatial resolution (at least 10 m, and 1 m for LiDAR (Light Detection and Ranging) derived DEMs and images). The presentation illustrates the principles involved and provides examples of how the resulting information can be used within management and planning contexts, with emphasis on the evaluation of hydrothermal risks as these pertain to existing or contemplated regional and local development plans. Examples deal with trail, road and pipeline layout and related infrastructure requirements. The results also serve environmental, ecological and engineering research interests in understanding hydrothermal processes and their impacts on soil, vegetation and water at the local scale, as affected by land-use interests and structures in particular and by climate change in general. The work also includes hydrothermal modelling of thawing and freezing cycles as affected by local site conditions, which vary from wetland to upland vegetation. This is done using daily weather records to determine the growth and abatement of permafrost depth, starting from a hypothetical no-permafrost condition 40 years ago.
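The DTW computation described in the two abstracts above can be sketched on a toy grid DEM. This is a simplified stand-in: DTW is treated here as the least cumulative elevation rise along any grid path back to a mapped surface-water cell, whereas the operational tool accumulates slope along least-cost paths over hydrologically conditioned, LiDAR-derived DEMs.

```python
import heapq

def depth_to_water(dem, water_cells):
    """Toy cartographic depth-to-water: for each cell, the least total
    elevation rise along a 4-connected path back to a water cell,
    computed with Dijkstra's algorithm (downhill steps cost nothing)."""
    rows, cols = len(dem), len(dem[0])
    dtw = [[float("inf")] * cols for _ in range(rows)]
    pq = []
    for r, c in water_cells:           # surface water defines DTW = 0
        dtw[r][c] = 0.0
        heapq.heappush(pq, (0.0, r, c))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dtw[r][c]:
            continue                   # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + max(dem[nr][nc] - dem[r][c], 0.0)
                if nd < dtw[nr][nc]:
                    dtw[nr][nc] = nd
                    heapq.heappush(pq, (nd, nr, nc))
    return dtw

# A small slope draining to a water cell in the top-right corner.
grid = depth_to_water([[2, 1, 0], [2, 1, 0]], [(0, 2)])
```

Cells with small DTW values are the likely wet areas and ephemeral channels that the maps flag for planners.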
Semantic Interoperability between US-Canada Groundwater Sensor Data (172)
Boyan Brodaric, Eric Boisvert, Nathaniel Booth
Groundwater, like other natural systems, does not respect political boundaries, and freely flows across them in appropriate settings. Groundwater observations are perhaps the most important data needed to understand cross-border flow, but they are often hard to find and use because the number of providers is large and the data can be massive and very heterogeneous. Emerging geospatial standards for groundwater observation interoperability include in part the Sensor Observation Service (SOS), Web Feature Service (WFS), and the WaterML2 and GroundwaterML1 data transfer schemas, but these components have not previously been tested in cross-border groundwater scenarios. In this work, we report on an experiment to share groundwater level observations across the US-Canada border, using these components over large sensor networks with long monitoring periods and thus large data volumes. Schematic and semantic translation is implemented, for data structure and content respectively, to achieve interoperability between US and Canadian sensor networks. The results show that existing standards and technologies are effective at enabling near real-time cross-border interoperability between small to medium sensor networks and data volumes, but need upgrading for massive networks and data loads. We also report on progress to address these gaps within the standards community, and demonstrate an approach to deal with some of the outstanding issues. The experiment and subsequent activity represent a significant advance in the refinement of open geospatial water standards, in the implementation of associated technologies, and in the potential development of a North American groundwater sensor network.
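The data-transfer side of such an exchange can be illustrated with a deliberately simplified, hypothetical XML fragment. Real WaterML2 documents use OGC namespaces and a far richer observation structure; the element and attribute names below are invented for the sketch.

```python
import xml.etree.ElementTree as ET

# Hypothetical, WaterML2-flavoured fragment (not the real schema).
DOC = """<timeseries>
  <point time="2011-06-01T00:00:00Z" level="12.4"/>
  <point time="2011-06-02T00:00:00Z" level="12.1"/>
</timeseries>"""

def read_levels(xml_text):
    """Extract (timestamp, groundwater level) pairs from the fragment."""
    root = ET.fromstring(xml_text)
    return [(p.get("time"), float(p.get("level")))
            for p in root.iter("point")]

levels = read_levels(DOC)
```

A consuming client on either side of the border parses the agreed transfer schema this way, which is what makes the schematic translation step (mapping each provider's structure to the common schema) the crux of the experiment.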
Remote sensing of suspended sediment concentrations in turbid rivers: A field survey (200)
Jian-Jun Wang, Xi-Xi Lu, Yue Zhou, Soo-Chin Liew
Construction projects on rivers, such as dams, often cause controversy because they may lead to negative environmental impacts; dams on international rivers attract even more attention. One such example is the Chinese dams on the Upper Mekong River. In such situations, sediment records such as suspended sediment concentrations (SSC) are too sensitive to obtain readily. Moreover, sediment records are scarce for most rivers in the world because conventional SSC measurement methods are expensive and time-consuming. This study therefore investigated whether SSC values can be estimated directly from increasingly available remote sensing data. Unlike previous studies, it focused on highly turbid river waters. A field survey was carried out along two international rivers in Asia, the Upper Mekong River and the Upper Red River, during 21 June - 22 July 2007. Both spectral and hydrological measurements were taken during the field trip. In total, 47 samples were measured at 47 sites; nine of them, measured on overcast days, were excluded from the analysis. The SSC of the remaining 38 samples ranged from 57.6 mg/l to 7468.1 mg/l. In general, a linear correlation was found between ln-transformed SSC and ln-transformed reflectance. The coefficient of determination (R2) varied with wavelength: R2 dropped from 0.51 at 400 nm to 0.08 at 550 nm, then increased to 0.85 at 740 nm; beyond that, R2 increased slowly with wavelength (R2 > 0.90 when wavelength > 850 nm). The spectral reflectance ratios were also analyzed.
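The ln-ln relationship just described amounts to fitting a power law by ordinary least squares on log-transformed values. A minimal sketch, with illustrative data only (the coefficients reported in the abstract come from the field samples):

```python
import math

def fit_lnln(ssc, refl):
    """OLS fit of ln(SSC) on ln(reflectance).
    Returns (slope a, intercept b) so that SSC ~ exp(b) * reflectance**a."""
    xs = [math.log(r) for r in refl]
    ys = [math.log(s) for s in ssc]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic samples obeying SSC = 10 * reflectance exactly.
a, b = fit_lnln([10.0, 100.0, 1000.0], [1.0, 10.0, 100.0])
```

The same fit applies unchanged when the predictor is a band ratio such as Rrs(900)/Rrs(700) instead of a single-band reflectance.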
All the reflectance ratios between 900 nm and, respectively, 800 nm, 700 nm, 600 nm and 500 nm (i.e., Rrs(900)/Rrs(800), Rrs(900)/Rrs(700), Rrs(900)/Rrs(600) and Rrs(900)/Rrs(500)) showed a strong linear relation with ln-transformed SSC (R2 > 0.90). Hence two SSC indicators, the reflectance at 900 nm and the reflectance ratio between 900 nm and 700 nm, were selected to estimate SSC. The corresponding relative root mean square errors (RRMSE) are 47% and 43%, respectively. This field survey thus showed that spectral data may provide a viable alternative for retrieving SSC from remote sensing data. This is crucial for studying the spatial and temporal variations of SSC in large turbid rivers resulting from climate change and from human activities such as dam construction and illegal sand extraction.

Parallel Session 7.2 (Room 204AB) GEOIDE Contributions to environmental issues

Oil in Canadian Waters: Bridging the gap between science and policy (177)
Norma Serra-Sogas, Patrick O'Hara, Rosaline Canessa
It is well documented that the marine ecosystem is dramatically affected by even the smallest amount of hydrocarbon in the water. Despite this knowledge, and despite strict national and international legislation prohibiting the discharge of pollution into the marine environment, a significant quantity of oil (on average 1500 litres/year) is still observed in Canadian waters by aerial surveillance each year. Aerial pollution surveillance has been conducted visually in Canada since 1968, starting on the Great Lakes and expanding nationally in 1991. In 2005, pollution surveillance in Canada was revolutionized by the introduction of remote sensing technologies, dramatically increasing the range as well as the detection and evidence-gathering capabilities of the program.
As of summer 2009, the National Aerial Surveillance Program (NASP) has three resources outfitted with these powerful remote sensing suites, one for each Canadian ocean. These technological improvements have allowed a substantial increase in the amount and quality of the data collected. However, the original state of the data limits its usability for informing decision making and enhancing performance management. The Oil in Canadian Waters project emerged from the need to develop a framework and methodologies for processing the immense amount of data gathered and for assessing the scope and dynamics of oil pollution in Canadian waters. This paper presents the main findings of the project, including the database schema developed to house NASP information, trends in oil-spill surveillance coverage, and the risk model used to identify oil-pollution hotspots in Canadian waters. In addition, we hope to present the results of an upcoming workshop aimed at making our research more accessible to Transport Canada and other government agencies (federal and provincial), and at identifying current gaps and priority areas in maritime oil-spill risk in order to pursue new research opportunities.

A scenario-planning model to forecast the impact of different degrees of land-use intensification on Albertan woodland caribou (149)
Christina Semeniuk, David Birkigt, Marco Musiani, Mark Hebblewhite, Scott Grindal, Danielle Marceau
This project was completed under a GEOIDE Short Strategic Investment Initiative, working with industry partner ConocoPhillips Canada. Woodland caribou (Rangifer tarandus) are classified as threatened in Canada and Alberta, and a local population in the province’s Foothills Region, the Little Smoky herd, is at immediate risk of extirpation due in part to anthropogenic activities, such as oil and gas (upstream industry) and forestry, that have altered the ecosystem dynamics.
While much is known about caribou ecology, the behavioural mechanisms by which resource-extraction industries contribute to caribou population decline are less clear. To address this issue, we have developed a spatially explicit, agent-based model (ABM) to simulate caribou movement behavior in the Little Smoky (LSM) on a static winter landscape to gain insight into: (1) the mechanisms caribou employ to select and use their habitat, (2) the relative extent to which they perceive industrial activities as predation risk, and (3) the energetic expenditures associated with caribou behavioural strategies. A set of environmental data layers was used to develop a virtual grid representing the landscape over which caribou move. This grid contains forage-availability, energy-content, and predation-risk values. The model was calibrated with caribou bioenergetic values from literature sources, and validated using GPS data from thirteen caribou radio-collars deployed over six months from 2004 to 2005, representing caribou winter activities. Our model findings suggest that female caribou trade off the competing goals of obtaining their minimum daily energy requirements against conserving energy for reproduction, while minimizing their predation risk and exposure to disturbance. As the LSM is undergoing continuous land use development, it is critical to explore how changes in the landscape due to future industrial intensification as well as possible mitigation measures might affect woodland caribou habitat and its use. To enable a dynamic representation of the environment, we are developing a cellular automaton (CA) to simulate various future patterns of land-use change over a 20-year time frame at a time step of four years. Five scenarios are simulated: development at the status quo level, increased and decreased forestry activity, and increased and decreased upstream activity. 
Next, we shall integrate the caribou ABM with the future landscape scenarios developed by the CA to explore how caribou respond, both spatiotemporally and energetically, to the various development scenarios. We will present results from our ABM/CA model demonstrating which landscape changes have the greatest impact on caribou fitness and subsequent population persistence. Our research will benefit our industrial partner, ConocoPhillips, as well as others in the energy and forestry sectors, by providing applied, science-based decision tools for managing the potential effects of resource-extraction activities on valued resources such as caribou. Understanding the relative contribution of major industrial landscape-level disturbances to caribou population declines will assist industry with its sustainable development goals, as our ABM/CA will be used as a tool in conservation planning.

Monitoring agricultural land management practices using Synthetic Aperture RADAR (175)
Aaron Berg, Justin Adams, Steve McKeown, Tracy Rowlandson
Agricultural land management practices can have a significant influence on water quality. Practices such as conventional tillage expose rough soil surfaces, increasing the potential for soil erosion and for nutrients entering waterways. Conservation measures such as reduced- and no-till methods bring benefits that include significant reductions in water erosion, mitigating the movement of nutrients and pesticides into waterways. Government agencies are interested in assessing the adoption rate of these practices and in identifying areas susceptible to erosion and waterway contamination for better targeting of best management practices. Remote sensing offers an ideal platform for observing agricultural land cover and management practices. The objective of this research is to determine the sensitivity and applicability of quad-polarized RADARSAT-2 data for identifying changes in land cover and management practices.
During the fall of 2010, land-use management practices were recorded over more than 100 agricultural fields coincident with several RADARSAT-2 overpasses. Our analysis demonstrates the sensitivity of several polarimetric variables (e.g. pedestal height, circular polarization, phase difference, cross-correlation) appropriate for detecting harvest state and mapping tillage type. The final attribution of post-harvest tillage practices is integrated into an agricultural field polygon and database framework operated by the Ontario Ministry of Agriculture and Rural Affairs.

Simulation of forestry and upstream oil/gas development in the Little Smoky region of Alberta using Cellular Automata (151)
David Birkigt, Christina Semeniuk, Marco Musiani, Greg McDermid, Mark Hebblewhite, Scott Grindal, Danielle Marceau
This project was completed under a GEOIDE Short Strategic Investment Initiative, working with industry partner ConocoPhillips. Over the past decade, the Little Smoky region of west-central Alberta has undergone rapid industrial development through the exploration and recovery of oil and natural gas (upstream industry) as well as through forestry. This industrial development, including its supporting and residual infrastructure of well sites, roads, pipelines and cut blocks, has greatly altered the once-characteristic boreal forest. It has led to concerns for the Little Smoky woodland caribou (Rangifer tarandus caribou) herd (LSM), a threatened species that is highly sensitive to habitat disturbance. To gain insight into the persistence of the LSM herd, an understanding of the future status of the landscape must be achieved. A cellular automata (CA) model was developed to simulate possible development scenarios for forestry and the upstream industry. The objective is to understand how industrial features influence the fitness of the LSM herd during the winter, their most sensitive period, allowing the projected landscapes to be later integrated in an agent-based model of the herd.
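A single synchronous CA transition of the general kind used for such simulations can be sketched as follows. The fixed neighbour-count rule and threshold here are illustrative stand-ins; the actual model derives calibrated transition rules over multi-ring neighbourhoods.

```python
def ca_step(grid, threshold=2):
    """One synchronous transition on a 0/1 grid: an undeveloped cell (0)
    becomes an industrial feature (1) when at least `threshold` of its
    up-to-8 neighbours are already developed. All cells update from the
    same previous state (synchronous update)."""
    rows, cols = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0:
                n = sum(grid[i][j]
                        for i in range(max(r - 1, 0), min(r + 2, rows))
                        for j in range(max(c - 1, 0), min(c + 2, cols))
                        if (i, j) != (r, c))
                if n >= threshold:
                    nxt[r][c] = 1
    return nxt

# Two developed cells seed infill between them on the next step.
state = ca_step([[1, 0, 1], [0, 0, 0], [0, 0, 0]])
```

Iterating such steps over a 20-year horizon, with rules and thresholds varied per scenario, is what produces the five development scenarios mentioned in the abstract.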
The approach taken in designing the CA is novel in several respects: an interactive method is used to create the transition rules; a spatial autocorrelation index identifies neighbourhood sizes by assessing the distance and strength of spatial dependence between calibration data and potential driving factors; multiple neighbourhood sizes are formed by concentric rings; and the evolution of industrial features in the landscape is simulated rather than the landscape as a whole. The CA was calibrated using classified Landsat TM5 imagery and a set of raster driving factors derived from vector layers of oil/gas wells, infrastructure and cut blocks. These datasets were acquired for four years between 1998 and 2007 and for the validation year of 2011. Simulations were performed at the resolution of the smallest industrial feature of interest (well sites: 125 m) over a period of 20 years, at a time step of 4 years. Five scenarios were simulated: development at the status quo level, increased and decreased forestry activity, and increased and decreased upstream activity. Simulating industrial features rather than the landscape in its entirety allows the outputs of the simulations to be combined in any manner for later use in the LSM agent-based model.

Use of RADARSAT-2 polarimetric SAR and optical images for mapping surficial materials in the Canadian Arctic (119)
Brigitte Leblon, Armand LaRocque, Matt Ladd, Mike Sawada, Jeff Harris, Joseph Chamberland, Garrett Parsons
The Canadian Arctic is currently the focus of increased geomatics activity aimed at providing better geoscience knowledge to inform decisions related to resource development. Such knowledge may be used by Canadian private industry to make exploration and development decisions, or by governments and landowners regarding development and non-development options.
Relatively good surface expressions of bedrock structure and lithology are found in only a few areas; most surface materials are sand and gravel, boulders, till, and organic deposits. The study, funded by the Geological Survey of Canada and the GEOIDE-SSII program, presents a method to map surficial materials by optimizing the combined use of RADARSAT-2 multi-beam polarimetric SAR and fused optical images acquired over a study area in Nunavut. SAR has a unique ability to detect surface texture and to provide information on scattering mechanisms related to surface roughness and moisture content. Optical images provide information on surface reflective properties related to the presence or absence of vegetation and to surface moisture content. Optical imagery was collected from SPOT-5, LANDSAT-5 and LANDSAT-7 and fused prior to classification in order to maximize the cloud-free area. We provide an overview of the data fusion and sharpening process used to develop the optical mosaics prior to classification. RADARSAT-2 polarimetric SAR images were acquired at various incidence angles and look directions so that variable viewing geometries could be used in the mapping method. They were used to compute several polarimetric variables. The first type is derived directly from the images: elements of the covariance, coherency and circular-coherency matrices, the total power image, pedestal height, radar vegetation index, fractional polarization, co- and cross-polarized ratios, correlation coefficients (including the RR-LL one) and polarization phase differences. The second type is derived using the following target decomposition techniques: Cloude-Pottier, Freeman-Durden and Krogager.
Representative training areas of distinct surficial deposits (bedrock, boulders, organic deposits, sand and gravel, thick till with dense vegetation, thick till with sparse vegetation, and thin till) were identified from field information and by interpreting panchromatic aerial photographs and LANDSAT-7 ETM+ images. The RADARSAT-2 polarimetric SAR and fused optical images were classified together using a supervised classifier that combines a genetic algorithm with a neural network approach.

Parallel Session 7.3 (Room 2103) Experiences & Case Studies IV

3D Land and Property Information System: A Multi-level Infrastructure for Sustainable Urbanisation and a Spatially Enabled Society (116)
Serene Ho, Abbas Rajabifard [paper: refereed book chapter]
Urbanisation is an inevitable part of the economic development process for any country and is considered a global phenomenon (World Bank, 2009). Currently, 50 percent of the world’s population resides in urban areas; by 2050, this ratio will reach 70 percent. This concentration of growth will place increasing pressure on land resources that are already in high demand. The achievement of sustainable development goals is therefore predicated on achieving sustainable urbanisation. This paper considers the specific challenges of urbanisation on land and property and the development of a three-dimensional (3D) land and property information system as a new tool for managing rights, restrictions and responsibilities. This system aims to provide a multi-level infrastructure to link government, industry and citizens. By facilitating access, discovery, and sharing of land and property information, this system will support the processes and broad governance objectives of modern land administration systems and provide the foundation for realising a spatially enabled society.
Semantic web based system supporting preparation, publication and use of legal acts on spatial planning (72)
Tomasz Kubik

The infrastructure for spatial information aims to support environmental policies, and policies or activities that may have a direct or indirect impact on the environment. Spatial planning is one of the crucial use cases carried out at the various administrative and governmental levels (local, regional, national), but its implementation is still a work in progress. “Spatial planning involves the methods used by the public sector to influence the distribution of people and activities in spaces at various scales as well as the location of the various infrastructures, recreation and nature areas.” Spatial planning and plan maintenance at the local level are the responsibility of local government units. They are based on legal acts and supporting materials that establish the rules for development, construction, etc., accompanied by maps showing their spatial extent. Thus, Polish municipalities are responsible for adopting: a study of the conditions and directions of the spatial management of a commune, a local spatial management plan, a development strategy and other documents. The local spatial management plan determines the use of the land, the distribution of public-purpose investments, and the conditions of development and spatial management. Although all these documents are closely related to space, their content is mainly textual. The article describes a prototype system for managing heterogeneous and distributed Internet resources with the aid of semantic technologies, adapted to the spatial planning domain. The system is built from the following components: a knowledge base with SPARQL (service) and SAIL (API) interfaces, a relational database, and a web application with a GUI (including semantic search forms, semantic input forms, and an editor with semantic tagging capability).
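The core of such a knowledge base is a store of subject-predicate-object triples queried by pattern matching, which SPARQL generalizes. A miniature pure-Python illustration of the idea, using hypothetical ontology terms rather than the paper's actual schema:

```python
# Miniature triple store illustrating the pattern-matching behind a
# SPARQL-queryable knowledge base; all names are hypothetical.
triples = {
    ("act:plan-07", "rdf:type", "plan:LocalSpatialManagementPlan"),
    ("act:plan-07", "plan:regulatesLandUse", "plan:ResidentialUse"),
    ("act:study-01", "rdf:type", "plan:StudyOfConditions"),
    ("act:study-01", "plan:relatedTo", "act:plan-07"),
}

def match(pattern):
    """Return all triples matching one pattern; None acts as a wildcard."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Which acts regulate residential land use?"
hits = match((None, "plan:regulatesLandUse", "plan:ResidentialUse"))
print([t[0] for t in hits])  # → ['act:plan-07']
```

A real SPARQL engine adds joins across multiple patterns, filters, and an open-world vocabulary, but the indexing-and-matching principle is the same.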
The system’s knowledge base stores triples describing spatial planning documents and their content. Thanks to a specially designed ontology, various resources can be associated in order to find related information on the conditions imposed on spatial management. Each legal act adopted and published by a municipality can thus be indexed semantically and linked with other resources available on the Internet. Users would benefit from the system while drawing up a spatial plan and in other processes: with the implemented search facilities they could build consistent regulations, having at their fingertips all the knowledge related to existing acts and documents. Such a system could be incorporated into the infrastructure for spatial information, enriching its functions and extending its use cases beyond current limits.

Use of ICT (GIS and SDI) in promoting coffee quality in Maraba Sector in South Province of Rwanda (33)
Jean Pierre Hitimana

Like other countries in the world, Rwanda faces the challenges of climate change and environmental degradation. In order to preserve its natural resources and its good-quality coffee, decision makers in Rwanda need to promote GIS (Geographic Information Systems) and SDI (Spatial Data Infrastructure). According to Nebert (2004), “… business development, flood mitigation, environmental restoration, community land use assessments and disaster recovery are just a few examples of areas in which decision-makers are benefiting from geographic information, together with the associated infrastructures (i.e. Spatial Data Infrastructure or SDI) that support information discovery, access, and use of this information in the decision-making process.”
This project focuses on working with the local communities of Maraba sector in the South Province of Rwanda, together with researchers, decision makers and students from the National University of Rwanda (NUR), to fight land degradation, promote environmental restoration and preserve the good quality of Rwandan coffee. Coffee from Maraba sector has received many awards for its quality. The research findings on zones of coffee plantation and their relationship to coffee quality will be published on a Geo-Portal, where the maps and metadata created or collected will be available to the public and particularly to the Maraba sector community. The results of this research will also be presented to the Maraba sector community in a workshop, so that they can gain knowledge of the land and of the good quality of Maraba coffee.

Using a Data Model to Integrate Kenya's Orphan and Vulnerable Children Programs (165)
John Spencer, Charles Pill

There is growing recognition in international public health that vertically structured programs may increase inefficiencies and potential gaps in coverage, and may not lead to better social and health outcomes. Data constraints and challenges around Kenya’s orphan and vulnerable children (OVC) programming were identified, and a data model concept was developed. The model is designed to create a strengthened common platform for more effective planning, program decision making, monitoring and evaluation. The Government of Kenya oversees a cash transfer program for households that support orphans. The United States Agency for International Development, UNICEF and other public and private organizations also support OVC programs in the country. Data from each program reside in separate places and do not share common indicators or terms. Programs supported by these three stakeholders reach a substantial portion of the country’s OVC population.
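Integrating separately held program data of this kind typically begins with reconciling identifiers across data sets. A toy sketch of one such step, fuzzy-matching variant spellings of administrative-unit names to a canonical list before joining (names, values and thresholds are invented):

```python
# Sketch of a record-linkage step: reconcile variant district-name
# spellings across data sets before joining. Data are illustrative only.
import difflib

canonical = ["Kisumu", "Nakuru", "Kakamega"]

def normalize_district(name, cutoff=0.75):
    """Map a raw district name to its canonical spelling, if close enough."""
    hits = difflib.get_close_matches(name.strip().title(), canonical,
                                     n=1, cutoff=cutoff)
    return hits[0] if hits else None

records = [("KISUMU ", 120), ("Nakuruu", 85), ("Unknownville", 3)]
linked = {normalize_district(d): v for d, v in records}
print(linked)  # → {'Kisumu': 120, 'Nakuru': 85, None: 3}
```

Unmatched names surface as `None` rather than silently merging, so they can be resolved by hand, which matters when program coverage statistics depend on the join.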
Understanding the overlap and complementarity between these programs is difficult because of technical and non-technical challenges in linking the reporting data. Linking the data requires addressing issues including: variability in how data are recorded in the various data sets (e.g. spelling of district names); file formats; and variable definitions. The authors propose a data schema and steps to address the organizational barriers to integrating these key Kenyan OVC programs, and to improve the effective use of OVC program data to support decision making. A case study was prepared using data from all three programs in the same district. Combining geography with program data can help to define overlaps, and opportunities for complementarity, among the various programs.

Differ from Evaluated Highway Environments by Using a 3D Map Embedded INS/GPS Fusion Algorithm for Seamless Vehicular Navigation (66)
Yi-Hsuan Lee, Cheng-Yueh Liu, Kai-Wei Chiang

GPS plays an important role in land-vehicle navigation, conveniently providing continuous position information. While GPS performs well in open-sky conditions, in urban areas with tall buildings and viaducts signal blockage degrades the GPS positioning solution. This motivates integrated navigation systems that add an Inertial Navigation System (INS), which provides relative positioning information uninterruptedly, regardless of the surroundings, even when the GPS signal is unavailable. The integrated system, however, still struggles with long GPS outages. In addition, the grades of INS, from micro-electro-mechanical (MEMS) inertial sensors to tactical-grade inertial sensors and above, differ significantly in cost.
However, the positioning accuracy of low-cost inertial sensors degrades rapidly with time when GPS signals are interrupted by surroundings such as high-rise buildings, tunnels and overpasses. To cover such gaps, Map Matching (MM) technology is widely used in car navigation systems: by consulting a map database, the navigation solution can easily be fixed onto the road during real-time navigation. However, the 2D horizontal map matching algorithms common today cannot handle the viaducts found in many burgeoning urban areas focused on transport development, such as elevated highways and Mass Rapid Transit (MRT) systems. The height components and attitude information obtained from the INS therefore become essential for a 3D-GIS-based land vehicular system. In this study, a 3D Map Matching (MM) algorithm is embedded in a current INS/GPS fusion algorithm to enhance the sustainability and accuracy of INS/GPS integration. To validate the performance of the proposed 3D-map-embedded INS/GPS integration algorithms, a field test was conducted in the downtown area of Kaohsiung, the second largest city of Taiwan. Four scenarios were considered in this test area: paths under freeways and streets between tall buildings, where the GPS signal is easily obstructed or interfered with. The test platform was mounted on top of a land vehicle. The IMUs applied include the SPAN-CPT (1 deg/hr in-run gyro bias) from NovAtel, used as the reference system, and MEMS-grade IMUs from BEI Systron Donner for comparison. The preliminary results indicate that the proposed 3D map matching algorithms significantly improve the accuracy of the positional components in GPS-denied environments with the two IMU/GPS integrated systems, in DGPS mode and SPP mode respectively.
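At its core, 3D map matching constrains a drifting navigation solution to known road geometry, including its height. A minimal illustration of the idea, snapping an INS/GPS fix onto the nearest point of a 3D road polyline (road coordinates are invented; a real system also uses heading, topology and a filter, not a bare nearest-segment search):

```python
# Minimal 3D map-matching illustration: snap a navigation fix onto the
# nearest point of a 3D road centreline. Geometry is invented.
import math

def closest_point_on_segment(p, a, b):
    """Orthogonal projection of p onto segment ab, clamped to the segment."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab)
    t = 0.0 if denom == 0 else max(
        0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
    return tuple(a[i] + t * ab[i] for i in range(3))

def map_match(p, polyline):
    """Return the nearest point to p over all segments of a 3D polyline."""
    best, best_d = None, float("inf")
    for a, b in zip(polyline, polyline[1:]):
        q = closest_point_on_segment(p, a, b)
        d = math.dist(p, q)
        if d < best_d:
            best, best_d = q, d
    return best

# Elevated-road centreline at 12 m height vs. a drifting INS fix at 9 m.
road = [(0, 0, 12), (100, 0, 12), (200, 50, 12)]
print(map_match((50, 3, 9), road))  # → (50.0, 0.0, 12.0)
```

The height component is what distinguishes this from 2D map matching: a fix under a viaduct can be attributed to the elevated road or the surface street by comparing the INS-derived height with each candidate's elevation.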
Consequently, the modified loosely coupled INS/GPS integration scheme with map-derived positions can provide the most consistent navigation solutions with sufficient sustainability.

Parallel Session 7.4 (Room 2101)
Spatially Enabling Citizens II

Spatial Enablement of Catchment Communities through Spatial Knowledge and Information Network Development (115)
Dev Raj Paudyal, Kevin McDougall, Armando Apan [paper: refereed book chapter]

A spatially enabled society (SES) is an emerging concept for making spatial information accessible and available for the benefit of society: location, place and other spatial information are made available to government, community and citizens. This is an important extension to the generational development and progression of Spatial Data Infrastructure (SDI), as it seeks to contribute to wider societal benefits and sustainable development objectives. This research paper investigates the social dimension of SDI and the theoretical foundation for the spatial enablement of catchment communities. Two social science theories, actor network theory (ANT) and social network theory, are utilised to better understand the relationships in spatial information and knowledge sharing across catchments. A network perspective of SDI was explored through a case study of the Queensland Knowledge and Information Network (KIN) project. Spatial information sharing processes among regional Natural Resource Management (NRM) bodies were analysed using an object-oriented modelling technique to assess the impact on catchment management outcomes. The relationships among the knowledge network stakeholders, and their influence on spatial information and knowledge sharing, were analysed using social network analysis.
The findings from this study suggest that a network perspective of SDI assists in understanding the spatial information management issues of catchment management and the broader goal of a spatially enabled society (SES).

Are ‘Smart Cities’ Smart Enough? (182)
Stéphane Roche, Nashid Nabian, Kristian Kloeckl, Carlo Ratti

In a contemporary societal context reconfigured by the widespread impact of geolocalization and the wikification of the urban population's everyday work and life, two related concepts have emerged from two different but closely related fields: the “spatially enabled society”, driven by the Global Spatial Data Infrastructure community, and the “smart city”, of more concern to practitioners and researchers in urban planning, urban studies and urban design. We believe that technologically enhanced, ICT-driven solutions that spatially enable the members of the urban population contribute to the smart operation of cities, and we therefore suggest that a dialogue be established between the communities that foster these two notions. We provide an ontology of categorically different but related spatial enablement scenarios, along with speculations on how each category can enhance the Smart City agenda by empowering the urban population, using recent projects by the MIT SENSEable City Lab to illustrate our points.

Integrating Sensors on a Smartphone to Generate Texture Images of 3D Photo-realistic Building Models (136)
Sendo Wang

A 3D photo-realistic building model that represents the real appearance of a building should be composed of a precise geometric model and realistic façade images. There are a number of approaches to reconstructing the geometric model from photogrammetric images and LiDAR point clouds. The generation of façade textures, however, still relies on massive manual work and therefore remains the bottleneck in the photo-realistic building modeling process.
The emphasis of this paper is on the integration of the sensors on a smartphone. While the camera collects a realistic photo of the building façade, the built-in GPS receiver and G-sensors record the approximate 3D coordinates and three rotation angles of the exposure station. These data are not accurate enough for texture image generation, however, so a semi-automated approach is proposed to improve the accuracy of the position and orientation data and to generate precise façade texture images. The reconstruction of 3D realistic building models involves three major issues: (1) geometrically modelling the object; (2) determining the image orientation; (3) generating the realistic façade texture from photographs. By introducing the “Floating Model” concept, the object modelling and image orientation problems can be solved efficiently through semi-automated procedures based on Least-squares Model-Image Fitting (LSMIF). A friendly human-machine interface is designed so that an operator can choose a suitable model and then move, rotate, or resize it to approximately fit all of the images. An ad-hoc LSMIF algorithm is developed to solve for the optimal fit between projected model line segments and extracted edge pixels. Once the object model has been extracted and the photo orientation determined, the creation of the realistic texture image, also called inverse mapping, can be automated by coordinate transformation and image resampling. To better understand the characteristics of the smartphone camera, a series of camera calibrations was executed to derive the interior orientation parameters before taking façade pictures. Three representative buildings on the NTNU campus were selected for experimental tests. The geometric models were reconstructed by fitting floating models to aerial photogrammetric images, and the façade photos were taken by smartphones on the ground.
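The coordinate transformation behind inverse mapping is, for a planar façade, a projective transform between the photographed quadrilateral and the rectangular texture. A sketch of estimating it with the standard Direct Linear Transform (corner coordinates are invented; the paper's pipeline additionally corrects lens distortion and uses the calibrated interior orientation):

```python
# Sketch of planar inverse mapping: estimate the homography that rectifies
# a façade quadrilateral in a photo onto a rectangular texture image.
import numpy as np

def homography(src, dst):
    """Direct Linear Transform: H maps each src point to its dst point."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def apply(H, pt):
    """Apply H in homogeneous coordinates and dehomogenize."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Façade corners as photographed (perspective-distorted) -> 200x100 texture.
photo = [(120, 80), (560, 60), (580, 420), (100, 390)]
texture = [(0, 0), (200, 0), (200, 100), (0, 100)]
H = homography(photo, texture)
u, v = apply(H, photo[2])
print(round(u, 3), round(v, 3))  # → 200.0 100.0 (corner maps to corner)
```

Resampling the photo through the inverse of this mapping then fills each texture pixel, which is the image-resampling half of the step described above.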
The results show that the proposed approach is practicable, but lens distortion must be corrected before creating the texture image. Since the iterative LSMIF algorithm requires initial parameters, the position and pose derived from the built-in sensors must fall within its pull-in range.

Increasing Usability of WiFi-based Positioning System: Real-time (Turn-by-Turn) Navigation Support (154)
Wook Rak Jung, Scott Bell

WiFi-based Positioning Systems (WPS) can provide indoor positioning as a complement to the Global Positioning System (GPS); as a result, Location Based Services (LBS), supported by ubiquitous location finding and positioning information, can be made available in many indoor spaces. However, most WPSs only provide 2-dimensional positioning information and support real-time navigation (like that of GPS-based car navigation) in only limited ways. The University of Saskatchewan Enhanced Positioning System (SaskEPS) successfully produces very reliable 2.5-dimensional positioning information at randomly selected fixed locations at the University of Saskatchewan, and achieved GPS-like positioning accuracy (sub-10-metre error) during testing. Such limited (static-location) testing, however, calls for experimentation under different conditions to establish the system's nominal characteristics and determine optimal conditions for both positioning and navigation services. While we have explored several challenges of indoor localization, Access Point (AP) density appears to have the greatest single impact on 2D accuracy; in high-AP-density environments the impact of other sources of error (IEEE protocol, arrangement, etc.) is reduced. SaskEPS can thus extend ubiquitous positioning service with sub-10 m accuracy to indoor environments that have high AP density. The purpose of this study is to validate the usefulness of SaskEPS as a real-time navigation system for indoor environments.
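A common WPS approach, shown here as a generic illustration rather than SaskEPS's actual algorithm, is fingerprinting: a device's observed RSSI vector is compared against a surveyed radio map and the best-matching survey location is returned. All RSSI values and coordinates below are invented:

```python
# Generic WiFi-fingerprint positioning illustration (not SaskEPS's method):
# nearest-neighbour search in signal space over a surveyed radio map.
import math

# Radio map: survey location -> {AP identifier: mean RSSI in dBm}.
radio_map = {
    (0, 0):  {"ap1": -40, "ap2": -70, "ap3": -80},
    (10, 0): {"ap1": -65, "ap2": -45, "ap3": -75},
    (10, 8): {"ap1": -75, "ap2": -55, "ap3": -50},
}

def locate(scan):
    """Return the survey location whose fingerprint best matches the scan."""
    def dist(fp):
        shared = set(scan) & set(fp)  # compare only APs seen in both
        return math.sqrt(sum((scan[ap] - fp[ap]) ** 2 for ap in shared))
    return min(radio_map, key=lambda loc: dist(radio_map[loc]))

print(locate({"ap1": -42, "ap2": -68, "ap3": -82}))  # → (0, 0)
```

The abstract's point about AP density falls out of this picture: more visible APs mean more dimensions in signal space, so fingerprints at different locations are easier to tell apart.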
This research was implemented in several buildings on the University of Saskatchewan campus and explores the system's capacity for positioning during dynamic testing (movement). The results of the indoor navigation tracking experiments show that SaskEPS can extend to a turn-by-turn navigation system in indoor environments, in conjunction with GPS.

Parallel Session 7.5 (Room 2104A)
Applications and Case Studies from Developing Nations

River Inundation and Hazard Mapping - A Case Study of Susan River in Kumasi (168)
Collins Fosu, Eric Forkuo, and Asare Mensah, Kwame Nkrumah University of Science and Technology, Ghana

In recent times there have been extreme climatic conditions due to climate change. As a result, the intensity of rainfall has increased tremendously, causing floods in many areas and countries worldwide. It is therefore prudent that this natural hazard be addressed and managed in a way that reduces its impact on people and the environment. To achieve the aim of river modeling and hazard mapping, Geographic Information System spatial technology and the HEC-RAS hydraulic model were used as tools. In this research a DEM, the basic input for any effective flood modeling, was created from contour data. The geometric data needed for the modeling process were extracted from the DEM, a topographic map and field measurements. A remotely sensed image was classified into various land cover types and used to estimate the roughness coefficient of each cover type during the modeling process. The model results were displayed and analyzed in the ESRI ArcGIS environment. The flooded area was geometrically overlaid on the topographic map to delineate the affected buildings. The hazard map produced clearly shows the spatial distribution of the flooded area, which lies in areas of relatively low relief. The total flooded area covers approximately 2.93 km2.
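The roughness coefficients estimated from the land cover classification enter the hydraulic model through Manning's equation, which HEC-RAS uses for friction losses. A toy computation with illustrative n values and channel geometry (not the study's data):

```python
# Manning's equation, V = (1/n) * R^(2/3) * sqrt(S), in SI units (m/s).
# The n values and geometry below are illustrative only.
import math

def manning_velocity(n, hydraulic_radius, slope):
    """Mean flow velocity from roughness n, hydraulic radius R, slope S."""
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * math.sqrt(slope)

# A clean main channel (n ~ 0.025) vs. a grassy floodplain (n ~ 0.035):
v_channel = manning_velocity(0.025, hydraulic_radius=2.0, slope=0.001)
v_plain = manning_velocity(0.035, hydraulic_radius=0.5, slope=0.001)
print(round(v_channel, 2), round(v_plain, 2))
```

The rougher, shallower floodplain conveys water more slowly than the main channel, which is why per-cover roughness estimates matter for where the flood extent ends up.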
A maximum flood depth of 4.01637 was also obtained as the highest water level. Generally, high water depths occurred along the main channel and spread gradually onto the floodplains.

Risk Management in Ukraine on the Basis of the Functioning of Spatial Data Infrastructure (82)
Victor Putrenko

As a result of the transformation of Ukraine's economy, many outdated production facilities and hazardous wastes have accumulated on its territory, increasing the risk of emergencies. One in four Ukrainians lives in an area of possible chemical contamination from potentially dangerous objects. In Ukraine 1211 industrial facilities are operating in which more than 805 thousand tons of hazardous chemicals are stored or used in production. Accident prevention and the monitoring of natural and man-made disasters are therefore essential elements of national security. Since 2006 Ukraine has had a governmental information-analytical system of the Ministry of Emergency Situations, which carries out automated monitoring of emergency situations. The system maintains a spatial database of emergencies and provides remote data access based on ESRI technology. A promising direction for the development of this system is its expansion through databases of the risks of natural and man-made accidents, and their mapping and analysis. This requires basic and specialized data sets of national and regional spatial data infrastructures, branch databases, and methods of processing and visualization. The article covers the basics of the interaction between SDI data and the emergency warning system. The main groups of SDI data needed for risk analysis include digital elevation models, topographic data sets, registers of industrial enterprises, databases of potentially dangerous facilities and waste storage, and data about negative natural phenomena.
Integrating the SDI and the government system avoids discrepancies in spatial data, reduces maintenance costs and improves the reliability of spatial information. The organization of data is discussed in more detail for a sample database of potentially dangerous objects. For the Vinnytsia region, spatial databases of industrial infrastructure were prepared that assess, quantitatively and qualitatively, the risk of an on-site emergency. Objects with a high risk of emergency include public utility facilities, facilities for the production and storage of hazardous chemicals, and flammable and explosive objects. The main qualitative characteristics of the objects are their substantive classification and an overall assessment of their condition; quantitative characteristics include their power rating, the population and territory they serve, and the quantity of hazardous substances stored. The analysis and mapping of objects used the State classifier of emergencies, which allows complex thematic maps to be built from the conditions of origin of an emergency. Topological, spatial and network analysis is also important, as it allows the reliability of all system elements to be assessed together, solving problems of inventory, modeling and prediction of possible technogenic emergencies. The results obtained constitute a building block for decision support in emergency management and require the maintenance of metadata. An important development of the emergency warning system is its integration with basic SDI services and the provision of separate access to information for professionals and the public. This addresses the problem of constructing a management chain at the national level encompassing the assessment and prediction of risks, the collection and processing of intelligence, the coordination of disaster management efforts, and alerting the population to danger.
The GRDCentre Clearing House: Progress, Challenges and Future Outlook (181)
Anthonia Ijeoma Onyeahialam

Geographic data is an important element for sustainable development and a knowledge-based economy, as evidenced in developed nations where the data required for decision making are available and accessible. This is made possible by infrastructures that coordinate spatial data and provide remote access to information on data owners and providers. With such infrastructure lacking in Nigeria, GRDCentre embarked on the development of a clearinghouse that freely hosts metadata and research data on Nigeria, to support public access to data sources and to supply members of the academic community with data for academic purposes. The organisation sources Nigerian metadata from three channels: online, the private sector and the academic community. In Nigeria the private sector and the academic community represent a large percentage of GIS data providers and users, yet they have limited recognition. The presentation will focus on the progress made in the clearinghouse's development using the GeoNetwork open source software, capacity building efforts, support for educational institutions and the private sector in its use, and its proposed future. It also highlights the opportunities and challenges of operating the clearinghouse initiative on a low budget in Nigeria.

Coastal Geomorphology and Landuse Changes Along Coastal Parts of Goa, India: An RS-GIS Approach (106)
Sudheshna Samantha, Mahender Kotha, Pravin Kunte

Goa, endowed with natural and scenic beauty, is famous for its silvery sand and golden coastline. In recent years many changes (both natural and man-made) have occurred rapidly, with a direct impact on the human environment. For better management, these changes have to be delineated, both for better understanding and for taking the necessary mitigating or remedial measures.
Further, the management of natural resources has become a complex task as more and more socio-economic activities take place, such as urban development, agriculture, waste disposal, nature conservation, shipping, harbor development and fisheries. The present paper discusses the geomorphology of coastal features as observed, and maps the landuse changes that occurred between January 1999 and March 2001 with the rapid pace of urbanization, on the basis of field observation and IRS-1C satellite imagery, using RS and GIS methodologies. In all, about fourteen (14) coastal features were observed from various band combinations and their characteristics delineated. The study shows an overall increase in barren area within the mining belt in March 2001 and an overall decrease in thick vegetation, as indicated by the vegetation index images, as well as highly turbid water in the northern region, indicating active sedimentation along the coast moving in a southeasterly direction.

GIS Application for Local Government Revenue Mobilization (184)
Collins Fosu and George Ashiagbor, Dept. of Geomatic Engineering, KNUST, Ghana

As part of decentralization reforms, many countries have devolved revenue and expenditure responsibilities to Local Government Authorities (LGAs). LGAs therefore face the challenge of mobilizing an appropriate level of revenue to enable effective service and infrastructure provision, and to execute these statutory functions effectively they need to improve their internal revenue mobilization. This paper describes comprehensively the functionality of a GIS application, the Local Government Revenue Mobilization System (LGRMS), developed for local authorities in Ghana for internal revenue mobilization.
It gives detailed information on the developed functionality of the application and its dependence on GIS for effective local government revenue planning and mobilization. The paper clearly shows that an integrated GIS-database tool can provide more efficient collection, tracking and management of local government revenue and other municipal fees. The revenue system was developed following the system development life cycle. The system provides realistic information on the revenue potential of an assembly and automates the revenue mobilization processes. It can integrate and analyze a wide variety of information based on spatial location, and supports a full range of revenue mobilization business processes, from billing, license applications and renewals to permit issuance, with tracking of each. The menu-driven GUI is user friendly and incorporates various spatial utility maps, including education and health facilities and the road network, which will increase its acceptance and utilization among planners and decision-makers and is expected to increase the efficacy of revenue planning and budgeting.

Parallel Session 7.6 (Room 205C)
3DGeoInfo: Open Source Development

Keynote Speaker: Philippe Cantin

Advancing DB4GeO
Martin Breunig, Edgar Butwilowski, Daria Golovko, Paul Vincent Kuper, Mathias Menninghaus and Andreas Thomsen

The analysis of complex 3D data is a central task for many problems in the geo- and engineering sciences. Examples are the analysis of natural events such as mass movements and volcano eruptions, 3D city planning, and the computation of 3D models from point cloud data generated by terrestrial laser scanning for 3D data analysis in various domains. The volume of these data grows from year to year.
However, there is not yet a geo-database management system on the market that efficiently supports complex 3D mass data, although prototypical 3D geo-database management systems are ready to support such challenging 3D applications. In this contribution we describe how we respond to these requirements by advancing DB4GeO, our 3D/4D geo-database architecture. The system architecture and the support for geometric, topological and temporal data are presented in detail. Besides the new spatio-temporal object model, we introduce new ideas and implementations in DB4GeO, such as support for GML data and the new WebGL 3D interface. The latter enables the direct visualization of 3D database query results in a standard web browser without installing additional software. Examples of 3D database queries and their visualization with the new WebGL interface are demonstrated. Finally, we give an outlook on our future work, discussing further extensions of DB4GeO and support for data management in collaborative subway track planning.

Glob3 Mobile: An Open Source Framework for Designing Virtual Globes on iOS and Android Mobile Devices
Jose Pablo Suárez, Agustín Trujillo, Manuel de La Calle, Diego Gómez, Alfonso Pedriza and José Miguel Santana

With the wide adoption of mobile devices, 3D graphics have been in high demand and have become an important requirement of modern applications. Virtual Globes integrating environmental data at any time or place remain a challenge within the technical constraints imposed by mobile devices. We present Glob3 Mobile, an open source framework for Virtual Globe development on the familiar iOS and Android mobile platforms. The paper discusses the design and development choices for each platform. The aim of this work is twofold: first, to provide an efficient Virtual Globe application, testable and freely accessible from the web, offering a truly 3D navigation experience with smooth flying.
Second, to provide the main software components needed to easily design and implement 3D Virtual Globe applications on both the iOS and Android platforms.

Parallel Session 7.7 (Room 2105)
Canada's Arctic SDI Initiative

Moderator: Paula McLeod, GeoConnections, Natural Resources Canada

Parallel Session 8.1 (Room 205A)
Spatially Enabling Government VIII

Spatially Enabled Risk Management: Models, Cases, Validation (32)
Katie Potts, Abbas Rajabifard, Rohan Bennett, Ian Williamson [paper: refereed book chapter]

Risk has a spatial nature: all events that result from risks are linked to a specific location or a factor in space. Understanding where on earth these risks are present allows them to be mitigated, avoided, or managed. Managing these risks, however, first requires accurate and timely spatial information about land and property. Historically, land administration systems have held this information; in recent years, however, these systems have been superseded by other infrastructures capable of capturing and storing information spatially. While these new systems offer the advantages of spatially enabled information, the authoritative information held within land administration systems remains necessary for risk management. Land administration systems need to adapt to remain relevant in the 21st century, and coordination between them and the new infrastructures is required to increase stakeholders' ability to manage this information for risk management purposes. A framework targeting this issue has been developed that proposes a spatially enabled approach to managing risks for citizens, government and wider society, taking into account the current information infrastructures (including land administration systems), the stakeholders, and the relevant risks affecting land and property.
This framework results in the aggregation and dissemination of consistent information about risks to land and property to all stakeholders. The proposed framework has not yet been tested; however, the recent floods in Queensland present an opportunity to apply it in the post-event environment and determine whether it is appropriate in the Australian context.

Towards a profile of the land administration domain model (LADM) for South Africa (133)
Dinao Tjia, Serena Coetzee [paper: refereed proceedings article]

The Land Administration Domain Model (LADM) is a spatial domain model for land administration, developed as an International Standard by ISO/TC 211 Geographic information/Geomatics. The standard provides a conceptual schema focusing on rights, responsibilities and restrictions affecting land (or water), and their geospatial information components. Its aim is to improve communication by introducing standard concepts and vocabulary in the land administration domain. This in turn improves interoperability between cadastral and related information systems, and thus the exchange of land information between local, national and international organisations (both private and public) and the information society at large. The LADM is not intended to be complete for any particular country; rather, it aims to be the basis from which a country-specific model can be developed. Various research efforts have been undertaken to develop LADM profiles for different countries and jurisdictions. For example, the Social Tenure Domain Model (STDM) supports areas falling outside formal tenure and cadastral systems, such as informal settlements and rural areas governed by customary laws and traditional practices. Studies in Japan, Portugal, Indonesia, Tanzania, Trinidad and Tobago, and several European countries have investigated the use of the LADM in their respective land administration systems.
This paper reports on a research project about the development of an LADM profile for South African land administration.

Environmental Modeling for Geospatial Risk Assessment of Wind Channels in an Urban Landscape – Marina Bay, Singapore (212)
Tian Kuay Lim, Haiyan Miao, Wei Ren Quah, Kee Khoon Lee, Durairaju Kumaran Raju

The objective of this work is to develop a coupled atmospheric and urban model, based on geospatial and geographical information, to conduct a case study on the risk assessment of urban wind channels caused by severe atmospheric conditions in highly urbanized Singapore. A very high resolution mesoscale spectral model (MSM), based on the National Centers for Environmental Prediction (NCEP) Regional Spectral Model, has been adapted and calibrated for Singapore. In particular, it has been applied to Marina Bay to downscale hourly weather fields at 1 km spatial resolution. The downscaled weather fields are fed into a Computational Fluid Dynamics (CFD) urban model as the initial and boundary conditions for the urban domain. Leveraging the CFD modeling, a full 3D gridded wind and pressure field can be obtained for the chosen area. Building geometry in Geographic Information System (GIS) form is required as input; the two-dimensional outlines are extruded based on building heights to form three-dimensional buildings. Wind gust information tuned using the MSM can then be identified at localized levels. To demonstrate the robustness of the MSM, we conducted studies on its performance in simulating various tropical weather conditions in Singapore, ranging from monsoons to squalls. With the hourly weather fields at 1 km spatial resolution, the coupled MSM-CFD model is then used to study the effects of wind channels on urban biodiversity, particularly tree failure, in the urbanized Garden City, Singapore.
The MSM-CFD model indicated a correlation between the predicted strong wind gust area and tree failure recorded over the urbanized Marina Bay. Based on the performance of the MSM and the results of the wind channel study using the coupled MSM-CFD model, we conducted a risk assessment of urban wind channels over the urbanized Marina Bay caused by tropical cyclone Vamei, which traversed close to Singapore on 27 December 2001.

Automatic Building Model Generation with Different Level of Details (246)
Eunju Kwak, Ayman Habib, Mohannad Al-Durgham

Automatic Digital Building Model (DBM) generation methods have been intensively researched because of applications such as urban planning, natural disaster management, and simulation for urban terrorism. The required level of detail for a DBM differs depending on the application. To generate more accurate DBMs automatically, various data sources (single or multiple) and different processing approaches (data-driven, model-driven, or hybrid) have been proposed. Airborne LiDAR and imagery are among the most popular sources to combine, automating the process while increasing the accuracy of the final product. In terms of processing strategy, data-driven approaches make no assumptions about building shapes; they can model buildings of any shape, but the lack of model knowledge makes their implementation complex. Model-driven approaches predefine basic building models, and the best-fitting model is adjusted using information derived from the data. While complex building models can be constructed by combining small sets of basic model primitives, the selection of the model and its initial parameters requires human intervention. Therefore, in this research, building models with different levels of detail are generated automatically using rectangular models only, by combining LiDAR data and imagery.
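The rectangle-only modelling just described hinges on fitting minimum bounding rectangles to segmented rooftop regions. As a generic, hypothetical illustration of that idea (a sketch of the standard technique, not the authors' implementation), the minimum-area bounding rectangle of a footprint can be found by testing the orientation of each convex-hull edge, since the optimal rectangle is known to share an orientation with one hull edge:

```python
import math

def convex_hull(points):
    """Monotone-chain convex hull; returns vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def min_bounding_rectangle(points):
    """Minimum-area bounding rectangle of a 2D point set.

    Rotates the hull into the frame of each hull edge, measures the
    axis-aligned extent there, and keeps the smallest-area candidate.
    Returns (area, angle) of the best rectangle.
    """
    hull = convex_hull(points)
    best_area, best_angle = float("inf"), 0.0
    n = len(hull)
    for i in range(n):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % n]
        theta = math.atan2(y2 - y1, x2 - x1)
        # Rotate all hull points by -theta so the edge lies on the x-axis.
        c, s = math.cos(-theta), math.sin(-theta)
        xs = [c * x - s * y for x, y in hull]
        ys = [s * x + c * y for x, y in hull]
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
        if area < best_area:
            best_area, best_angle = area, theta
    return best_area, best_angle
```

For a rooftop region segmented from LiDAR, the resulting rectangle's orientation and extent would serve as initial model parameters, to be refined afterwards against edges extracted from the imagery, as the abstract describes.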
The target of the proposed methodology is buildings whose rooftops are composed of rectangles (e.g., L-shape, T-shape, U-shape, gable roofs and more complex building shapes), under the assumption that the majority of buildings in urban areas belong to this category. The initial complex model is determined from LiDAR data segmentation, and a sequential Minimum Bounding Rectangle (MBR) procedure is applied to decompose buildings into sets of rectangles, depending on the required level of detail. This yields a good approximation of the initial model parameters for the rectangles. Each rectangular model is then adjusted using edge pixels from multiple images, refining the unknown model parameters until the model fits the edges extracted from the corresponding images. The different levels of MBRs are adjusted sequentially. This model-image fitting increases the accuracy of the LiDAR-derived models while preserving the vertical accuracy. The final building model is constructed from the sets of adjusted rectangles. This approach automatically generates the majority of buildings in an urban area with high accuracy and allows control over the level of detail of the final product.

Implementation of a Management and Information Administration Remote Sensing (108)
Ruben Dario Mateus Sanabria, Maria Liseth Rodriguez Montenegro, IDEAM, Colombia

Knowledge of natural resources requires deep study of the diversity, abundance and ecological distribution of the biota. That knowledge is supported by basic research into the real biological richness of the region, which is the subject of several national and international treaties for the preservation, protection and management of biodiversity in balance with human welfare.
Through the REDD (Reducing Emissions from Deforestation and Forest Degradation) project, the Institute of Hydrology, Meteorology and Environmental Studies (IDEAM) has acquired, processed, classified and stored a large volume of satellite images. Additionally, as part of its mission, the Institute manages images from various remote sensors, which likewise must be organized and centralized to ensure access and availability. Several entities whose research raw material is satellite imagery face the same problem: the management of large volumes of spatial information in raster format. This information is as complex and specific as the sensors that produce it; its quality and its integration with vector information must be guaranteed in order to obtain new products and sub-products, both cartographic (thematic and base maps) and informational (databases, statistics, among others), for the research carried out by the Institute. Because this information is a key resource for the Institute's mission processes and research, and because its volume keeps growing, it was necessary to build a raster data model, develop a tool that facilitates Web access to this information, implement Web-based processing options and build a management information system that consolidates a raster "Satellite Image Bank" in order to optimize its management and administration. In this context, the first step was an inventory of images, followed by a raster data model, the construction of a standard methodology and, finally, procedures for the storage and management of this information.
Documenting remote sensing image data is essential because it allows this information to be presented and analyzed in space and time, adding capacity and knowledge for decision making on climate change, REDD (Reducing Emissions from Deforestation and Forest Degradation) projects, conservation, the use of biodiversity and human welfare. This raster data management system and the Satellite Image Bank constitute a useful tool for collecting, organizing, documenting and standardizing geographic data. Its use reduces costs and duplication of effort and enables the exchange of information; access and use are provided through the application developed, so the quality of the data becomes a decisive factor for its use. This initiative constitutes the first experience of its kind at the national level and a foundation for new processes related to the organization and management of this type of raster and spatial information. In addition, it will connect with the Colombian Spatial Data Infrastructure (ICDE).

Parallel Session 8.2 (Room 204AB)
GEOIDE Book session: Added Value of Scientific Networking (II)

A short history of the GEOIDE Network
Keith Thomson and Nicholas Chrisman

Over fourteen years, the GEOIDE Network has set a standard for excellence in delivering results of research to user communities across disciplinary boundaries. This chapter provides a skeleton history of the organization and acknowledges the many contributions that made this possible.

Design and Implementation of Mobile Educational Games: Networks for Innovation
Rob Harrap, Sylvie Daniel, Michael Power, Joshua Pearce, and Nicholas Hedley

Research networks foster creativity and break down institutional barriers, but introduce geographic barriers to communication and collaboration.
In designing mobile educational games, our distributed team took advantage of diverse talent pools and differing perspectives to drive forward a core vision of our design targets. Our strategies included intense design workshops, use of online meeting rooms, group paper and software prototyping, and dissemination of prototypes to other teams for refinement and repurposing. Our group showed strong activity at the university-centered nodes, with periods of highly effective dissemination between these nodes and to outside groups; we used workshop invitations to gather new ideas and perspectives, to refine the core vision, to forge inter-project links, and to stay current on what was happening in other networks. Important aspects of our final deliverables came from loosely associated network members who engaged via collaborative design exercises in workshops, emphasizing the need to bring the network together and the importance of outside influences as ideas evolve. Our final deliverables, a mobile educational game and a series of parallel technology demonstrations, reflect the mix of influences and the focus on iterated development that our network maintained.

Collaborative Processes and Geo-Spatial Tools in Support of Local Climate Change Visioning and Planning
Ellen Pond, Stephen Sheppard, Rob Feick, Danielle Marceau, John Danahy, Sarah Burch, Laura Cornish, Stewart Cohen, Majeed Pooyandeh, Nishad Wijesekara, David Flanders, Kristine Tatebe, Sara Barron

GEOIDE NCE funding has enabled a decade of collaborative development of geospatial decision-support tools on sustainability issues, working with several regional and local governments and multiple academic teams.
Project strengths have been the innovative development and/or application of geospatial tools to climate change within collaborative processes, the ongoing development of relationships between researchers and local communities, and longitudinal project evaluation, made possible through ongoing, multi-year GEOIDE grants. The linked projects have led to increased local government awareness and capacity-building around climate change, the development of localized and downscaled climate change scenarios tied to local issues, local champion support, and early uptake of spatial planning tools and project outputs within communities. The flexibility of the Local Climate Change Visioning process has allowed the adaptation of geospatial tools to a range of contexts and thematic areas. It is one stream of activities that integrates climate change within the operations of municipal and regional governments.

Working at the Intersection of Law and Science: Reflections on a Fruitful Geospatial Data Collaboration
Teresa Scassa, Jennifer Chandler, Yvan Bédard, Marc Gervais

It is relatively rare for largely scientific collaborations to involve researchers from law, and when they do, the legal contributions are often peripheral to the goals of the main project, which are to advance scientific or technological knowledge and to develop applied outcomes. GEOIDE Phase IV broke with this tradition by funding a science-led collaborative research project that put legal and ethical issues squarely at the forefront of the research agenda. In our project, the researchers sought to examine what legal considerations were relevant to the evolution of GIS-related practices, how technological innovations and standards should adapt to normative frameworks, and where law reform might be needed to advance the goals of GIS in a rapidly changing information environment.
In this chapter, the authors reflect on the merits and challenges of such an approach, drawing on their own experience as legal researchers and as scientists within a predominantly science- and technology-oriented research network.

Connecting a Web for ‘In’novation: Lessons from the Participatory GeoWeb case-study laboratory
Pamela Tudge, Renee Sieber, Yolanda Wiersma, Jon Corbett, Steven Chung, Patrick Allen, Pamela Robinson

The GEOIDE Network has brought together a Geomatics research program with a strong focus on multi-disciplinary research. In this chapter, we present the experiences of our GEOIDE research team, ‘The Participatory GeoWeb for Engaging the Public on Global Environmental Change’, and our case-study laboratories. We reflect on the influence of multiple research locations, institutions, and disciplines on the development of new relationships and new knowledge. We discuss the unlikely collaborations that play with the traditional roles of the university and cut across the uniform disciplines of academia. Our collective experiences demonstrate how locations, technology and relationships play significant but different roles in collaboration. In the end, our network has sparked unlikely alliances and met predictable hurdles, but it has also meant that everyone had the opportunity to be a student as we collaborated towards innovation.

Parallel Session 8.3 (Room 2103)
Experiences & Case Studies V

Irish Coastal Heritage Viewer Case Studies (237)
Roger Longhorn, Gearoid Ó Riain, Beatrice Kelly, William Hynes, Maria Rochford [paper: refereed book chapter]

This paper presents a case study of a project, led by Ireland's national Heritage Council, to develop a GIS-based approach enabling a comprehensive audit and assessment of the heritage of the coastal areas of six Irish counties.
The overall purpose of the resulting Coastal Heritage Viewer is to provide a clearer understanding of the heritage and its significance, and to provide the basis for better future management. The project demonstrates how multiple data sources covering disparate themes, from different data owners, and crossing local and regional (county) boundaries can be integrated to help convey information to the public and to decision makers at different levels of government. Based on web services standards, the resulting web viewer can be multi-purposed and readily expanded in the future to accommodate new data sources, providing new functionality for different applications and users. The paper follows the development process for the viewer and presents three case studies highlighting how the viewer aids decision makers in preparing various types of assessment reports, examining wind and renewable energy strategy options, and enabling integrated coastal zone management, among other aspects.

A framework for evaluation of marine spatial data infrastructures to assist the development of the marine spatial data infrastructure in Germany (MDI-DE) - Accompanied by international case-studies (113)
Christian Rüh, Peter Korduan, Ralf Bill

In Germany, a marine data infrastructure is currently being developed with the aim of integrating existing technical developments and merging information from the fields of coastal engineering, hydrography and surveying, protection of the marine environment, maritime conservation, regional planning and coastal research.
The funded parties in this BMBF project are:
- Federal Waterways Engineering and Research Institute (BAW, SP1 – “coastal engineering and coastal water protection”),
- German Federal Maritime and Hydrographic Agency (BSH, SP2 – “protection of the marine environment”),
- German Federal Agency for Nature Conservation (BfN, SP3 – “maritime conservation”), and
- Professorship for Geodesy and Geoinformatics at Rostock University (GG, SP4 – “scientific accompanying research”).
This undertaking is embedded in a series of regulations and developments at many administrative levels, from which specifications and courses of action derive, for example INSPIRE (Infrastructure for Spatial Information in the European Community), the Marine Strategy Framework Directive (MSFD) and the Water Framework Directive (WFD). To keep track of all of these and to give the marine data infrastructure (MDI-DE) a conceptual framework, scientists at the Professorship for Geodesy and Geoinformatics at Rostock University are building a reference model, evaluating meta-information systems, developing models to map processes (for example, for the generation of reports) and evaluating marine spatial data infrastructures around the globe. Modelling is a necessity for the development of such a spatial data infrastructure, particularly when many partners are involved and many requirements must be met. The reference model for the marine spatial data infrastructure of Germany (MDI-DE) is the guideline for all developments inside this infrastructure and is based on the Reference Model for Open Distributed Processing (RM-ODP) and other reference models for the federal states and for Germany as a whole. The reference model is composed of several submodels which focus on different aspects of the marine data infrastructure. Evaluating how other countries built their marine spatial data infrastructures is of central importance, to learn from them where obstacles lie and where errors are likely to occur.
To be able to look at other initiatives from a neutral point of view, it is necessary to construct a framework for the evaluation of marine spatial data infrastructures. Five areas with several indicators each were identified, based on spatial data infrastructure assessment approaches expanded to meet the requirements of the marine domain. Taking area A (Data) from the five areas identified (A – Data, B – Metadata, C – Services and Interfaces, D – Standards, E – Modelling) as an example, the first indicator is “core datasets”, describing what basic reference spatial data a country's MSDI covers. The datasets that could be covered are bathymetry, shoreline and other maritime zones such as the EEZ, marine cadastre, coastal imagery, marine navigation, tidal benchmarks and benthic/nature conservation habitats. As international case studies, this paper looks at Australia's Oceans Portal and Australian Marine Spatial Information System (AMSIS); the United States' Multipurpose Marine Cadastre and Digital Coast; Canada's Marine Geospatial Data Infrastructure (MGDI), COINAtlantic, COINPacific and GeoPortal; and Ireland's Marine Irish Digital Atlas (MIDA).

A Web-GIS for Wetlands of Kerala using Open Source Geospatial Software Tools (103)
Santosh Gaikwad, Narendra Prasad S

Globally, wetlands are considered to be among the ecosystems most vulnerable to degradation. The International Union for Conservation of Nature and the Ramsar Convention have called upon countries to take immediate steps to halt wetland degradation. One such step is the generation, dissemination and sharing of wetland-related information and data amongst all stakeholders, and web media play a vital role in taking it. We demonstrate for the first time how such efforts can be undertaken for the wetlands of southern India.
We use the wetlands of Kerala state, southern India, as an example of how to formulate a wetland directory and build a web GIS (Geographic Information System) using open source geospatial tools (http://www.keralawetlands.in/webgis.html). The directory of wetlands of Kerala aims to provide extensive baseline information and data on the spatial distribution of wetlands in the state. These baseline data have been organized in four hierarchical administrative levels (District, Block, Municipality, and the local self-governance body called the Panchayat) for the entire state. A web-based GIS has been developed in order to share the data with the stakeholders. To enhance the user experience, the web GIS has been dovetailed with the Google Maps application. Google Maps provides a highly responsive, intuitive mapping interface with embedded, detailed street and aerial imagery data and the ability to zoom, pan, and view information pop-ups and overlays. We believe this effort is one of the first of its kind to aid in the conservation and management of the rapidly diminishing wetland ecosystems in India.

Towards a Spatial Data Infrastructure (SDI) responsive to the needs of Integrated Coastal Zone Management: The GéoBretagne experience (France) (68)
Françoise Gourmelon, Jade Georis-Creuseveau, Matthieu Le Tixerant, Mathias Rouan

Everywhere in the world, coastal areas are considered vulnerable socio-ecosystems that should be preserved (Cicin-Sain, Knecht, 1998). Integrated Coastal Zone Management (ICZM) is a multi-stakeholder, multi-scale and inter-organizational approach that, in principle, guarantees sustainable development. While its interest is universally recognized, the operation of the approach depends on methods and tools, especially for producing and sharing data. About 10% of the French metropolitan population lives in the coastal zones, which represent 4% of France's total land surface.
In Brittany, the attractiveness of the coastal zones involves multiple dynamics that can be the starting point of environmental degradation and conflicts of use. These findings led to the creation in 2007 of the “Charter of Brittany coastal zones” (Conseil Régional de Bretagne), which is based on a system of observation, monitoring and predictive analysis of the coastal zone. The GéoBretagne(1) data-sharing platform conforms to the European INSPIRE directive and was jointly implemented in 2007 by the national and regional governments. Based on a partnership between public partners (national, regional, local), the goal of GéoBretagne is to provide support for a collaborative organization, structured around "thematic groups", for the acquisition, sharing and public dissemination of data. Our research develops a methodological approach to the implementation of a coastal and marine thematic group within the GéoBretagne platform. The local communities within the ICZM are at the center of our framework, in order to address their needs in terms of spatial data repositories, reproducible indicators and network services. The research is based on surveys conducted with a sample of 15 local communities that have signed on to the “Charter of Brittany Coastal Zones” and that are involved in an ICZM project. Our findings make it possible: 1. to develop an “ideal” repository of spatial information according to partners' requirements; 2. to improve the ergonomics of the GéoBretagne platform for easy user access; 3. to propose a set of reproducible indicators (people-relevant data) useful for understanding and monitoring changes affecting the coastal areas. The indicators have to meet Brittany's needs and be consistent with the indicators recommended by European (Interreg IIIC South Deduce(2)) and national (Coastal Observatory(3)) initiatives.
Based on these methodological procedures, the GéoBretagne platform is evaluated not only on its ability to provide spatial information, but also on its network services for indicator calculation and output; and 4. to improve the workings of the marine and coastal thematic group through its mission of monitoring, stewarding, sharing and training. From a theoretical point of view, this study contributes to the discussion and research on conceptual modeling of a coastal Spatial Data Infrastructure (SDI) and the related social and institutional issues leading to the use or non-use of SDIs (Nedovic-Budic et al., 2011).
(1) http://geobretagne.fr/accueil/
(2) http://www.deduce.eu/index.html
(3) http://www.littoral.ifen.fr/

Parallel Session 8.4 (Room 2101)
Spatially Enabling Citizens III

Caribbean agrarian culture post emancipation: adding a spatial context in the journey to sustainable rural development (43)
Tricia Melville [paper]

Currently, there is a separation between the scientific epistemology of agricultural experts and the colloquial knowledge of locals; this separation has produced much undesired tension, which has plagued both agricultural and rural development projects in Trinidad and Tobago and is negatively impacting local food production. The use of Participatory Geographic Information Systems (PGIS) focuses on mitigating this divide and provides a platform on which all key stakeholders can be involved as equal partners in development planning. This research seeks to determine the positive influences of an inclusive and participatory approach to rural development in Trinidad. It also proposes to assess the feasibility of rural development strategies conducted within a PGIS framework in a Caribbean context. Incorporating a spatial component into rural development within the PGIS framework will create a common spatial language amongst stakeholders and, in turn, should add to the authority of local agricultural knowledge.
The shift from top-down to bottom-up development promises increased sustainability and longevity of rural development programs.

Caring for the Moraine Web-Service Development, Oak Ridges Moraine Foundation (87)
Fred McGarry, Don Cowan, Paulo Alencar, Dan McCarthy

To help protect and restore the Oak Ridges Moraine, the Caring for the Moraine website project (C4M) will encourage private landowners to undertake ecosystem restoration projects on their properties and establish a learning network for agencies that collaborate in support of such projects. C4M will also enable 30 conservation-minded organizations to offer landowners technical advice and access to various resources for projects on their properties. Using the C4M website, the Caring for the Moraine partners will be able to securely publish and post their news, events, exemplar projects and photos to a temporally and spatially searchable collaborative map of the Oak Ridges Moraine. Landowners will be able to access this information and also determine the land use designation, conservation priorities and available conservation resources for their properties. The research will focus on uptake by the C4M partners of COMAP's Community Media services, which enable groups within communities interested in social change in a particular geography to discover and learn from each other through a mediated spatial social network. The ORMF will be the custodian of the C4M system and the portal administrator, with tools and processes for system and security administration. The ORMF will enable the Caring for the Moraine partners to publish and map news, events and photographs by participating in COMAP's ‘Community Media’. Community Media enables secure mediated social networking (profiles, contacts, forums, forum documents, private messaging, etc.) and access to news, events and media publication tools at the group level, ensuring that content is published with the authority of each of the C4M partner organizations.
Funded by the Oak Ridges Moraine Foundation (ORMF), the C4M project brings together the Computer Systems Group (CSGUW) and the Faculty of Environment of the University of Waterloo and the Centre for Community Mapping (COMAP), a not-for-profit provider of software as a service. C4M will be developed using the CSGUW Web Informatics Development Environment, a collaborative geomatics research toolkit intended to empower domain experts in the social sciences to build low-cost applications that enable citizens to take control of some of their own data, information and knowledge collection, processing and management. The project is expected to generate an alpha system by December 31, 2011, which will be introduced to and tested by the ORMF and the C4M partners in January 2012. The resulting beta system will be further modified for operations. The project will be presented, discussed and demonstrated live at the 2012 GSDI conference.

Harmonised Land Monitoring as an important tool for spatially enabled societies in Europe (153)
Herbert Haubold

As human pressure on the land surface continuously grows, land-management-related policy increasingly relies on a holistic and technically up-to-date support mechanism. Developed countries operate national land administration systems (LAS), including cadastres, with a high degree of sophistication. These systems are rooted in the countries' histories, draw on a long record of experience, and are operated by specialized data custodians. Currently, remote-sensing-based land use and land cover (LULC) mapping and monitoring activities are undergoing a rapid evolution because, on the one hand, very high spatial resolution (VHSR) imagery is becoming more easily available and affordable and, on the other hand, geographic object-based image analysis (GEOBIA) provides a novel technical approach to handle the complexity and volume of these data.
In this way, novel land information systems (LIS) are created which deliver conditioned spatio-temporal time series of land use and land cover (LULC). The potential this development offers is not yet realized because LAS and LIS generally show an overall institutional and technical separation. However, integrated systems are a prerequisite for addressing the future demands on spatial data infrastructures (SDI) that must be met to secure sustainable development. HELM (Harmonized European Land Monitoring) is a project which brings together public authorities responsible for land monitoring (primarily environment and mapping agencies) from seventeen European countries, and a number of SMEs. The aim of HELM is to make European land monitoring more productive by increasing the alignment of national and sub-national LIS and enabling their integration into a coherent European land monitoring system. In a bottom-up fashion, pan-European land monitoring products will be derived from aggregated and generalized national data sets. These pan-European products will best fulfill European users' needs by taking full advantage of the countries' and regions' detailed knowledge. Likewise, inputs common to data production in all or many countries will be supplied from a central source to enable a coherent European land monitoring system characterized by economies of scale (the core service concept). As an outcome, HELM supports spatially enabled societies/governments (SES/SEG) in the participating countries by contributing to the resolution of three major issues: 1) Harmonization of sub-national and national LIS among each other fosters informed decision making at the different jurisdictional levels, that is, sub-national, national, European and global. This includes LULC data models, nomenclatures, synchronization, grid approaches and the like, thereby taking advantage of the potential of GEOBIA approaches.
2) From a socio-technological perspective, the many inconsistencies between land-related data sets are addressed, thereby focussing on social, legal, policy, and institutional issues. In a participatory approach the project creates acceptance for common modes of data handling, increased collaboration and exchange of interoperable data among its participants. 3) HELM works towards integrated SDI systems which merge multi-source data to provide comprehensive representations of the state of the natural and the built land surface and changes thereof, without attempting to erode the autonomy of the various data holders.

A web-based decision support system to aid stakeholders’ evaluation of different land development scenarios in the Elbow River watershed in southern Alberta (174)
Majeed Pooyandeh, Danielle Marceau

Due to the rapid urbanization in the Elbow River watershed in southern Alberta and its effects on the quality and quantity of water, developing a simulation tool to encourage public participation about land development and its impact on water resources is important. This paper describes a web-based decision support system that includes an agent-based model (ABM) to provide stakeholders with the ability to compare different land development scenarios, discover how their plans are being evaluated by other stakeholders, and negotiate in attempts to find the best location of that land development within the watershed. The stakeholders represented as agents include citizens, planners, developers, industries, and different government and non-profit organizations. The model proposed in this study contains three main modules. The “Data management module” employs JavaServer Faces (JSF), GeoServer and OpenLayers to provide stakeholders (users of the system) with a password protected web page through which they can view their organization’s data, perform GIS functions and submit a land development plan.
All data are stored in an open source PostgreSQL database which contains stakeholders’ criteria to evaluate a land development scenario in the form of maps and tables. This database is equipped with PostGIS, an extension that provides spatial functionality for the database. The second module is the “Analysis module”. Upon the request of the user, a land development plan is submitted to this module; using PostGIS functionalities, the agents perform several spatial analyses to evaluate each scenario by measuring its compatibility with their values and preferences. The “Negotiation module” uses the libraries of Repast Simphony, a popular ABM toolkit, to simulate the negotiation process of stakeholders. The agents negotiate over a development plan to find the best location for that plan, considering each other’s values and preferences gathered through interviews. First, they prioritize their preferences using a fuzzy AHP approach; then, based on these preferences, highly ranked locations are selected. The ABM then investigates the highly ranked locations of all agents to find a common area, which is output as the result of the negotiation. The system implemented in this study is innovative in several ways. First, it combines several advanced geospatial data handling, modeling and visualization techniques within a common tool set. Second, while it uses a sophisticated modeling approach to simulate the stakeholders’ negotiation process, the modeling complexity is masked through the interactive and user-friendly web interface, designed to encourage public participation.

Parallel Session 8.5 (Room 2104A)
SDI Trends, Aspirations and Visions

Standards Trends: Unlocking the Power of Spatial Information in Policy, Analysis and Decision Making
Mark Reichardt, President OGC

Mr. Reichardt discusses trends in geospatial/location standards and best practices that offer the promise of expanding the use of place-based information and applications for improved policy, analysis and decision making. He includes discussion on the pace of technology change, trends such as the Internet of Things, and the impact of innovation in rapidly evolving technology and consumer markets on standardization. Mr. Reichardt also addresses advancements in work to establish location interoperability across the broader IT technology landscape through collaborative work across Standards Development Organizations (SDOs) such as OASIS, the W3C, the IETF and others. Emphasis is placed on the roles and contributions of industry, government and academia in identifying and uniting on open standards and market/domain-specific best practices to accelerate the incorporation of spatial data into our everyday activities, addressing a range of opportunities and issues.

Emerging Trends for Spatially Enabling Society
Mark Cygan, Esri

New implementation patterns for spatial data sharing are revolutionizing how we geospatially enable communities and ultimately large parts of society. SDI has expanded from an initial need to publish, discover, exchange and acquire data. It is now rapidly evolving to deliver “on the fly” integration of data from a multitude of sensors to create information products that support planning, analysis and decision-making in near real time. This is realized through a geospatial platform providing information to desktop, web and mobile devices from an enterprise and/or Cloud environment.

Does more collaboration lead to better Spatial Data Infrastructures?
Watse Castelein (Universidad Politécnica de Madrid), Arnold Bregt, Łukasz Grus (Wageningen University Centre for Geo-Information)

Several authors have suggested that stakeholder participation and collaboration play a key role in the development of Spatial Data Infrastructures (SDIs) (see e.g. Warnest 2005 and Budhathoki et al. 2008). A better understanding of the mechanisms behind stakeholder collaboration within SDIs is therefore essential. However, research about the factors that stimulate or inhibit stakeholder collaboration within SDIs, and how these factors interact, is still partial at best. Based on collaboration theory we proposed a model to examine collaboration within SDIs. This model identifies patterns and features of evolving collaboration stages within SDIs. In this paper we apply the proposed collaboration model to evaluate stakeholder collaboration within 27 European national SDIs. Our results indicate substantial differences in the collaboration levels of the SDIs. To further analyze the importance of stakeholder collaboration for the development of SDIs, the state of play approach (Vandenbroucke et al. 2008) was used to define the degree of development of the national SDIs. This made it possible to analyse the relation between stakeholder collaboration levels and SDI development. Our analysis indicates a high correlation between collaboration levels and the degree of development of SDIs. The most developed national SDIs are based on high collaboration levels, with well-structured relations and roles, sharing of human and information resources, integration of work processes and value creation, and well-defined formal and informal policies, whereas collaboration in less developed SDIs is less integrated and structured. Our results indicate that well-developed SDIs require a high level of stakeholder collaboration.
References:
Budhathoki, N.R., Bruce, B., and Nedovic-Budic, Z., 2008. Reconceptualizing the role of the user of spatial data infrastructure. GeoJournal, 72(3-4), 149-160.
Vandenbroucke, D., Janssen, K., Van Orshoven, J., 2008. INSPIRE State of Play: generic approach to assess the status of NSDIs. In: Crompvoets J., Rajabifard A., van Loenen B., Delgado Fernandez T. (Eds.), A multi-view framework to assess spatial data infrastructures, Chapt. 8 (pp. 145-172), The University of Melbourne, Australia.
Warnest, M., 2005. A Collaboration Model for National Spatial Data Infrastructure in Federated Countries. PhD Thesis. The University of Melbourne.

Parallel Session 8.6 (Room 205C)
3DGeoInfo: 3D Model Construction + GIS Data Analysis

Sleeves for Reconstruction of Rectilinear Building Facets
Marc van Kreveld, Thijs Van Lankveld and Maarten de Rie

We introduce the concept of $(\alpha, \delta)$-sleeves as a variation on the well-known $\alpha$-shapes. The concept is used to develop a simple algorithm for constructing a rectilinear polygon inside a plane; such an algorithm can be used to delineate a building facet inside a single plane in 3D from a set of points obtained from LiDAR scanning. We explain the algorithm and analyse different parameter settings on artificial data.

Reconstruction, Storage and Application of 2.7D models
Ben Gorte and Jochem Lesparre

For many applications of 3D GIS, especially in built-up environments, the traditional 2.5D extensions available in 2D GIS software, e.g. to represent terrain elevation, are not sufficiently powerful. The usual shortcoming is that vertical elements (walls) cannot be explicitly represented. A full 3D solution, on the other hand, imposes additional requirements on data acquisition, causes additional complexity, and may still not be optimal for spatial analysis. In the current paper we extend the TIN data structure, which is capable of representing 2.5D information about terrain elevation and building roofs, with the possibility to include walls. In addition, we introduce methods to generate and refine models, and we present tools for further processing of the data, including conversions to VRML, quadtree and raster formats.
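The "2.7D" idea above, a TIN in which walls are triangles whose 2D projection collapses to a line, can be sketched in a few lines of Python. The vertex names and the degeneracy test below are purely illustrative, not the authors' actual data structure:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vertex:
    """A TIN vertex; two vertices may share (x, y) with different z."""
    x: float
    y: float
    z: float

def is_vertical(tri):
    """A triangle is a wall facet when its 2D (x, y) projection
    degenerates to a line, i.e. its projected area is zero."""
    a, b, c = tri
    area2 = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y)
    return abs(area2) < 1e-12

# A wall triangle: the eave and its foot share the same (x, y),
# which a strict 2.5D surface cannot represent.
eave = Vertex(0.0, 0.0, 6.0)
foot = Vertex(0.0, 0.0, 0.0)
corner = Vertex(0.0, 4.0, 6.0)
wall = (eave, foot, corner)

# A roof triangle projects to a proper 2D triangle.
roof = (Vertex(0.0, 0.0, 6.0), Vertex(4.0, 0.0, 6.0), Vertex(0.0, 4.0, 6.0))
```

The key design point is that only the vertex uniqueness rule of the 2.5D TIN is relaxed; the triangle mesh itself is unchanged, which keeps the structure far simpler than a full 3D solid model.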

A 3D-GIS Implementation for Realizing 3D Network Analysis and Routing Simulation for Evacuation Purpose
Umit Atila, Ismail Rakip Karas and Alias Abdul Rahman

The need for 3D visualization and navigation within a 3D-GIS environment is growing and spreading to various fields. Most current navigation systems still operate in a 2D environment, which is insufficient for representing 3D objects and obtaining satisfactory solutions in 3D spaces. One of the most important research areas is safe building evacuation, as building infrastructures become ever more complex. The end-user side of such an evacuation system needs to run in a mobile environment with accurate indoor positioning, while the system guides people to the destination with the support of visual landmarks and voice commands. Realizing such a navigation system requires solving complex 3D network analysis problems. The objective of this paper is to investigate and implement 3D visualization and navigation techniques and solutions for indoor spaces within 3D-GIS. As an initial step and implementation, a GUI provides 3D visualization of the Corporation Complex in Putrajaya based on CityGML data, stores spatial data in a geo-database, and then performs complex network analyses under different kinds of constraints. The GUI also provides a routing simulation on a calculated shortest path, with voice commands and visualized instructions, which is intended to be the infrastructure of a voice-enabled mobile navigation system in our future work.
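As a rough illustration of the kind of 3D network analysis the paper relies on, the sketch below runs Dijkstra's shortest-path algorithm over a toy indoor graph whose nodes carry (x, y, z) coordinates. All node names, coordinates and connectivity are hypothetical, not taken from the Putrajaya model:

```python
import heapq
from math import dist  # Euclidean distance (Python 3.8+)

# Illustrative indoor network: node -> (x, y, z) in metres.
nodes = {
    "room_a": (0.0, 0.0, 0.0),
    "corridor": (5.0, 0.0, 0.0),
    "stairs_1f": (10.0, 0.0, 0.0),
    "stairs_gf": (10.0, 0.0, -3.5),   # one floor down
    "exit": (15.0, 0.0, -3.5),
}
edge_list = [("room_a", "corridor"), ("corridor", "stairs_1f"),
             ("stairs_1f", "stairs_gf"), ("stairs_gf", "exit")]

# Undirected adjacency list weighted by 3D Euclidean length.
graph = {n: [] for n in nodes}
for a, b in edge_list:
    w = dist(nodes[a], nodes[b])
    graph[a].append((b, w))
    graph[b].append((a, w))

def shortest_path(start, goal):
    """Dijkstra's algorithm; returns (total length, [node, ...])."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node]:
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

length, route = shortest_path("room_a", "exit")
```

A real evacuation system would add constraints (blocked edges, one-way stairs, capacity), but they reduce to edge filtering or weight adjustment on the same graph.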

3D Geospatial Modelling and Visualization for Marine Environment: Study of the Marine Pelagic Ecosystem of the South-Eastern Beaufort Sea, Canadian Arctic
Jonas Sahlin, Mir Abolfazl Mostafavi, Alexandre Forest, Marcel Babin and Bruno Lansard

Geospatial modelling of the marine pelagic ecosystem is challenging due to its dynamic and volumetric nature. Consequently, conventional oceanographic spatial analysis of this environment is carried out in 2D, limited to static cutting planes in horizontal and vertical sections to present various phenomena. In this paper, we explore the contribution of recent 3D developments in GIS and in scientific visualization tools for the representation and analysis of oceanographic data sets. The advantages of a 3D solution are illustrated with a 3D geospatial voxel representation of water mass distribution in the southeastern Beaufort Sea (western Canadian Arctic).
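A voxel representation of the kind described can be mimicked with a 3D array. The grid size, the synthetic temperature field and the single-threshold "water mass" labelling below are purely illustrative assumptions; the study's actual classification would rest on real oceanographic variables (temperature, salinity, density):

```python
import numpy as np

# Hypothetical voxel grid: axis 0 = depth layers, axes 1-2 = horizontal cells.
depth_layers, rows, cols = 4, 3, 3
rng = np.random.default_rng(42)

# Synthetic temperature field (deg C) that cools by 2 degrees per layer.
temperature = (8.0 - 2.0 * np.arange(depth_layers)[:, None, None]
               + rng.normal(0.0, 0.1, (depth_layers, rows, cols)))

# Label each voxel by a simple temperature threshold; a real water-mass
# classification would combine several variables per voxel.
labels = np.where(temperature > 4.0, "surface", "deep")
```

The point of the voxel model is that queries and visualizations (isosurfaces, volume rendering, per-layer statistics) operate on the full 3D field rather than on a few fixed cutting planes.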

Parallel Session 9.1 (Room 205A)
Spatially Enabling Government IX

A Spatial Perspective, Victoria's Black Saturday Fires, Subsequent Floods and the Future (92)
David Williams, Abbas Rajabifard, Ged Griffin

This paper/presentation, whilst standing alone, complements the paper of Ged Griffin. It details the spatial initiatives during the emergency response to the 2009 Victorian 'Black Saturday' fires, in which 173 people died, and some of the lessons learnt. The paper goes on to describe the 2010-2011 flood events that affected large areas of Victoria and some high-profile preventative risk mitigation conducted to protect rural communities. These lessons have spatial implications for both national and international communities, particularly in the emergency management (preparation, response and rehabilitation) continuums.

Geomatics in the Yukon (118)
Emily Slofstra

The Yukon has a very large landmass with a population of only 30,000 people, which provides challenges as well as opportunities for land-use planning (LUP), resource and environmental management (REM) and the ever-growing field of geomatics. Digital mapping is particularly useful for LUP and REM, and technology has been improving over the last 30 years to provide more and more possibilities for analyzing how the environment has been impacted by development, or for preventing further damage. In this study, a triangulated method of interviews, document review and a case study was used to examine how geomatics is currently used in a resource-based territory like the Yukon, as well as how data management and analysis have changed since the 1980s. The Yukon was examined as a case study where information management is confined to a region with a large landmass and a small geomatics community. Twenty-seven participants were interviewed on the uses and strengths of geomatics in their work, barriers to successful implementation of spatial data technologies and analyses, and the future of geomatics in the Yukon. Participants were also asked about the development of GIS and LUP over the last thirty years, and the answers complemented materials published by government departments and consultants since 1982 about land and resource information management. The results showed that there has been a continual call for improved coordination of efforts to manage imagery and base data acquisition, as certain types of information are not sought by the current coordinating agency, but are needed by a variety of professionals. Other key barriers included concerns about funding, usually due to high costs of software and data collection, and the fact that Yukon geomatics professionals spend most of their time on data massaging instead of potentially more useful and more advanced analyses.
Key predictions included an increase in the use of web-mapping applications, as well as a greater integration of data types, particularly merging raster and vector formats. The results of this study will be useful to geomatics professionals in the Yukon to improve their processes of acquiring, sharing and utilizing data. It also provides information about the types of projects that have been completed using sophisticated analysis or modeling techniques, for instance, modeling cumulative effects of mining exploration or prime tourism locations across the territory. The study also includes a discussion of Government, First Nations, Environmental organizations, and Industry/Independent consultant sectors, and how each sector impacts and is impacted by the field of geomatics. Geomatics in the Yukon has followed technological trends since the 1980s, but with a unique set of challenges that have sometimes been met and overcome, but are also often still relevant today.

Towards a demand driven Spatial Data Infrastructure (129)
Kees de Zeeuw, Joep Crompvoets

Our modern society is full of spatial problems that become more and more complex. In order to solve these complex problems, there is a high demand for high-quality information, excellent information models, services, software tools, etc. However, the current Spatial Data Infrastructures (SDIs) are not able to fully meet these demands (since their development was more supply driven). In order to set up successful SDIs in the future, it is likely that their development has to be more demand driven. SDIs are built to facilitate access to data and services. So far, little attention is given to the actual and desired use of SDIs for solving spatial problems during the first stages of building SDIs. Most effort is currently spent on defining data, metadata, standards, and technology. Based on a survey focusing on the state of play of SDIs in Europe, it is clear that current SDI developers have very limited knowledge of the actual, desired and potential use of an SDI. In other words, the development of the current SDIs is not based on the needs of the actual, desired and/or potential users. It is strongly recommended to put more emphasis on user requirements while developing SDIs. In the end, this will likely enhance the effectiveness of the developed SDIs. This paper suggests two ways to include user requirements in the further development of SDIs. First, a user requirements framework similar to the people, profit, and planet approach in economic development projects is introduced, likely leading to improved and more effective SDIs. Second, the involvement of problem-solving users in the development is strongly proposed (as has been done in the definition study for a Geospatial Shared Service Organisation in the Netherlands). These suggestions could contribute towards more demand-driven SDIs and so tackle complex spatial problems in a better and more sustainable way.

SDI Situation in the Islamic Republic of Iran (131)
Hadi Vaezi, Peyman Baktash, Ali Javidaneh

Parallel Session 9.2 (Room 204AB)
GEOIDE Contributions to urban planning and sustainable cities

PlanYourPlace – Phase I: Analysis of Requirements for a Participatory Urban Planning Platform (47)
Andrew J.S. Hunter, Coral A.M. Bliss Taylor, Stefan Steiniger

The planning and execution of urban development projects should involve citizen participation. Citizen participation is essential if the needs of the population are to be addressed when undertaking public development projects, and participation is essential if private construction projects are to be accepted by the residents that live adjacent to and within such projects. A traditional form of receiving citizen feedback on planning projects is the organization of community open house meetings and charrettes. As a new, complementary form of citizen engagement, the planning and participatory GIS literature proposes the use of Web 2.0 technologies to facilitate engagement with a broader range of citizens. The PlanYourPlace project was established to develop such a participatory planning platform for communities within the City of Calgary. In particular, the platform should enable citizens to voice their opinions, and facilitate discussion of urban development scenarios between citizenry, city planners, and decision makers. Additionally, the platform should enable users to rank and actively design different development scenarios. Consequently, social network functions and (geo-)design tools will form an essential part of the platform. The first phase of the platform development naturally addresses a requirements analysis to inform platform design. Three different aspects were considered during the requirements analysis: (i) legal aspects, (ii) functional needs, and (iii) technical implementation. A survey of the participatory GIS, planning, and spatial data infrastructure literature was conducted to develop a framework for the platform that embraces a broad range of subject matter: (i) legal aspects: 1 - citizen identity, 2 - privacy, 3 - ownership, 4 - bylaws and policies for planning, 5 - licenses (e.g.
for data); (ii) platform functional objectives for citizens and planners: 1 – informing, 2 – discussing, 3 – ranking, 4 – sketching/modification of plans, 5 – sharing of documents, 6 – evaluation/impact assessment of planning scenarios; (iii) technical implementation requirements: 1 – data, 2 – user interface development for visualisation, social networking and sketching, 3 – web services for data retrieval and assessment model integration. Our presentation at the GSDI conference will discuss these requirements in detail and outline their effects on the design of a participatory planning platform.

A Web-SDSS to evaluate site influences on residential solar energy potential (249)
Rob Feick, Andrew Blakey

Web-based services are being used more widely to provide experts and non-experts alike with access to geospatial information as well as tools for selected forms of spatial analysis and geoprocessing. Within this environment, new opportunities are emerging for spatial decision support systems (SDSS) concepts and methods to be focused upon the needs of both larger and more diverse user communities and a wider array of spatial decision issues. Central to the extension of SDSS to a web-centric context has been an increased emphasis on assisting users with problem exploration, learning, and iterative approaches to problem solving, as opposed to the field's historical focus on the choice dimensions of decision making. This presentation describes a web-SDSS that was developed to provide households with a map-centred approach for exploring the feasibility of rooftop solar panel installations on their homes. Interest in small-scale photovoltaic installations has grown recently in response to government incentive programs (e.g. Ontario's Green Energy Act) that support renewable energy technologies and concerns about the environmental and economic impacts of expanding conventional power generation facilities. Within urban environments, the feasibility of investments in rooftop photovoltaic technology is subject to a number of uncertainties that are primarily site-specific in nature. In particular, while meso-scale information resources and factors (e.g. government grants and policies, average solar radiation at different latitudes, etc.) apply equally within a jurisdiction, micro-scale variables (e.g. topography, shading from nearby buildings and trees, roof characteristics) can vary considerably from one property to another and are typically known only informally by the public.
The web-SDSS described here aims to bridge this gap by permitting non-experts to visualize the impacts of local influences on solar modelling results for individual buildings and across a study area in the northwest of Toronto comprising some 1300 residential buildings. Users are also able to explore the estimated economic and environmental returns associated with different solar panel configurations (i.e. location and system size) that they design. In an effort to minimize the amount of information that is required of users, aerial Lidar data is used to derive key height, slope and aspect data for individual buildings. In addition to demonstrating some of the capabilities of the tool, some of the broader challenges of this type of web-SDSS are outlined.
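The site-specific slope and aspect inputs mentioned above can be derived from a height grid in a few lines. The tiny DSM patch, the assumption that row index increases northward, and the aspect convention (degrees clockwise from north, pointing downslope) are illustrative choices for this sketch, not the system's actual processing chain:

```python
import numpy as np

# Hypothetical 1 m resolution height patch (e.g. part of a roof);
# real inputs would come from the Lidar-derived building data.
dsm = np.array([[10.0, 10.5, 11.0],
                [10.0, 10.5, 11.0],
                [10.0, 10.5, 11.0]])
cell = 1.0  # grid spacing in metres

# Height gradients along rows (y, assumed to increase northward)
# and columns (x, increasing eastward).
dz_dy, dz_dx = np.gradient(dsm, cell)

# Slope in degrees from horizontal.
slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Aspect: direction of the negative gradient (downslope),
# measured clockwise from north.
aspect = np.degrees(np.arctan2(-dz_dx, -dz_dy)) % 360.0
```

For the patch above, which rises to the east at 0.5 m per cell, every cell slopes at about 26.6 degrees and faces due west (aspect 270), the kind of per-roof quantity the solar model consumes.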

A spatial-analytical investigation of urban spaces of graffiti and interpersonal violence (54)

Interpersonal violence (IPV) is the third-leading cause of injury in developed countries, representing a significant burden on society (Krug, et al. 2000). Features of the built urban environment are spatially correlated with injury incidence (Schuurman, et al. 2009). Often associated with high-risk neighbourhoods is graffiti, an activity embedded in artistic, political, and criminal subcultures and the subject of divisive controversy (Ferrell 1995). While no obvious causal relationship exists, linkages between the actors, cultures, and spaces of violence and graffiti are found in the literature (e.g., Docuyanan 2000; Doran & Lees 2005; Halsey & Young 2006; Lindsey & Kearns 1994). This work presents a mixed spatial-analytical and qualitative investigation of the interplay between graffiti and IPV in the City of Vancouver. Spatial queries found that 71.8% of all IPV incidents occurred within 100 metres of a graffiti incident. Kernel Density Estimation was used to interpolate the spatial patterns for both graffiti and interpersonal violence, and to identify hotspots. These hotspots showed significant overlap, with some anomalies in the Downtown Eastside. Linear regression showed a very strong, significant correlation (r² = 0.92, α = 0.01) between graffiti and IPV hotspots. The residuals were normally distributed, but were found to spatially cluster, suggesting neighbourhood-level variations in this relationship and the necessity for multi-scale and temporal analyses. Our findings suggest that urban spaces of graffiti and interpersonal violence overlap on a city-wide scale, with neighbourhood-specific nuances, and may prove to be a step towards the ultimate goal of informing targeted policy interventions to reduce the burden of injury and make our living spaces safer. We suggest that the presence of graffiti can be an indicator for proactive, targeted policy interventions to curb violence and reduce the associated burden of injury.

References:
Docuyanan, F. (2000).
Governing graffiti in contested urban spaces. PoLAR: Political and Legal Anthropology Review 23(1): 103-121.
Doran, B.J., & Lees, B.G. (2005). Investigating the spatiotemporal links between disorder, crime, and the fear of crime. The Professional Geographer 57(1): 1-12.
Ferrell, J. (1995). Urban graffiti: crime, control, and resistance. Youth Society 27: 73-92.
Halsey, M., & Young, A. (2006). 'Our desires are ungovernable': writing graffiti in urban space. Theoretical Criminology 10: 275-306.
Krug, E. G., Sharma, G. K., & Lozano, R. (2000). The global burden of injuries. American Journal of Public Health 90(4): 523-526.
Lindsey, D. G., & Kearns, R. (1994). The writing's on the wall: graffiti, territory and urban space in Auckland. New Zealand Geographer 50(2): 7-13.
Schuurman, N., Cinnamon, J., Crooks, V. A., & Hameed, S. M. (2009). Pedestrian injury and the built environment: an environmental scan of hotspots. BMC Public Health 9: 233-243.
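The proximity figure reported above (71.8% of IPV incidents within 100 m of a graffiti incident) comes from spatial queries on real incident data; the sketch below reproduces only the form of that query, on synthetic projected coordinates that are deliberately constructed so half the violence points lie near graffiti:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic projected coordinates in metres (illustrative only).
graffiti = rng.uniform(0.0, 1000.0, size=(50, 2))

# Half of the violence incidents scatter around graffiti sites,
# half lie far away in a separate area.
near = graffiti[:25] + rng.normal(0.0, 20.0, size=(25, 2))
far = rng.uniform(5000.0, 6000.0, size=(25, 2))
violence = np.vstack([near, far])

# Distance from each violence incident to its nearest graffiti incident,
# via a brute-force pairwise distance matrix.
diff = violence[:, None, :] - graffiti[None, :, :]
nearest = np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

share_within_100m = float(np.mean(nearest <= 100.0))
```

In practice this query is usually run with a spatial index (e.g. a k-d tree, or a buffer/within-distance operation in a spatial database) rather than a full distance matrix, but the statistic computed is the same.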

WIKIGIS for Visualizing Urban Futures of Canadian Urban Regions (219)
Stéphane Roche, Robert D. Feick

The convergence of Web 2.0 and geospatial technologies has dramatically changed how geographic information is perceived and used, particularly as a growing number of users, including the general public, have new opportunities to collaboratively create both maps and their underlying spatial data. There is an emerging need for new methods for tracing the lineage of maps and data, as well as the processes in which they are produced and used in this new environment of mass collaboration and co-production. Within the realm of collaborative and participatory planning, for example, it is important to be able to answer questions such as: "Who has made changes in previous versions of the plan? When and where were these changes made? What justifications did individuals provide to explain their changes?" Developing a WikiGIS application that enables tracking the history of spatial data edits and promotes collaborative working is therefore relevant. We focus here on the design of a new type of online collaborative mapping application based on the operating principles of Wikis, both in terms of creating and enriching location-based map content and in terms of maintaining a documented history of all the stages of its development. In the WikiGIS, location-based content can be modified, enriched, updated and deleted by any user. All of the users' actions are saved, "versioned" and dynamically accessible via the content history. By storing all actions, a map (or data project) becomes "live" media with an associated "story" that illustrates, for example, its progress over time or allows users to "rewind" the process to help users understand its genesis. The core component of wiki approaches is the process itself (data generation, map design), rather than data, which are to some extent a manifestation of the contributors'/actors' points of view and objectives.
Contrary to most existing mapping mashups, in which users have limited abilities to change the underlying database, the architecture proposed for the WikiGIS ensures the traceability of the spatio-temporal evolution of each geographic object created, and provides dynamic access to the corresponding history file. Consequently, data coproduction could be achieved iteratively rather than in a cumulative way as in traditional GIS approaches. This paradigm shift toward geocollaboration will be examined in a Geoide-Neptis Foundation project with the goal of facilitating participatory building and assessment of alternative urban futures. Cartographic representation thus becomes a tool to read the dynamics at work in the production of spatial knowledge, and to keep track of the history of spatial representations produced by different actors (e.g. citizens, land-planners, partners or elected representatives) in a land planning process. A WikiGIS thus makes it possible to support both integration and differentiation dynamics, which are core to geocollaboration. In this way, it offers a new avenue for building consensus that is based on an explicit recognition of stakeholders' unique contributions and perspectives.
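The append-only version history at the heart of the WikiGIS design can be sketched as follows; the class and field names are illustrative, not the actual WikiGIS schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeatureVersion:
    """One saved state of a geographic object (fields illustrative)."""
    author: str
    geometry: tuple   # e.g. a simple (lon, lat) point
    justification: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class VersionedFeature:
    """Append-only history: edits are versioned, never overwritten."""
    def __init__(self, feature_id):
        self.feature_id = feature_id
        self.history = []

    def edit(self, author, geometry, justification):
        self.history.append(FeatureVersion(author, geometry, justification))

    def current(self):
        return self.history[-1]

    def at_version(self, i):
        """'Rewind' to an earlier state of the object."""
        return self.history[i]

park = VersionedFeature("park-42")
park.edit("planner_a", (-71.20, 46.80), "initial proposal")
park.edit("citizen_b", (-71.21, 46.81), "moved away from wetland")
```

Because nothing is overwritten, the "who, when, where and why" questions posed above reduce to reading the stored version log.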

3DTown: The Automatic Urban Awareness Project (235)
James Elder, Gunho Sohn, Claire Samson, Eduardo Corral Soto, Ron Tal, Larry Wang, Tara Jones, Ravi Persad, Chao Luo

The 3DTown project is focused on the development of a distributed system for sensing, interpreting and visualizing the real-time dynamics of urban life within the 3D context of a city. At the heart of this technology lies a core of algorithms that automatically integrate 3D urban models with data from pan/tilt video cameras installed at fixed locations or deployed from unmanned aerial vehicles, environmental sensors and other real-time information sources. A key challenge is the “three-dimensionalization” of pedestrians and vehicles tracked in 2D camera video, which requires automatic real-time computation of camera pose relative to the 3D urban environment. Here we report results from a prototype system we call 3DTown, which is composed of discrete modules connected through pre-determined communication protocols. Currently, these modules consist of: 1) A 3D modeling module that merges image data from different sources and allows for the efficient reconstruction of building models and integration with indoor architectural plans; 2) A GeoWeb server that indexes a 3D urban database to render perspective views of both outdoor and indoor environments from any requested vantage; 3) Sensor modules that receive and cache real-time data; 4) Tracking modules that detect and track pedestrians and vehicles, in urban spaces and access highways; 5) Camera pose modules that automatically estimate camera pose relative to the urban environment; 6) Three-dimensionalization modules that receive information from the GeoWeb server, tracking and camera pose modules in order to back-project image tracks to geolocate pedestrians and vehicles within the 3D model; 7) An animation module that represents geo-located dynamic agents as sprites; and 8) A web-based visualization module that allows a user to explore the resulting dynamic 3D visualization in a number of interesting ways. 
To demonstrate our system we have used a blend of automatic and semi-automatic methods to construct a rich and accurate 3D model of a university campus, including both outdoor and indoor detail. The demonstration allows web-based 3D visualization of recorded patterns of pedestrian and vehicle traffic on streets and highways, estimations of vehicle speed, and real-time (live) visualization of pedestrian traffic and temperature data at a particular test site. Having demonstrated the system to hundreds of people, we report our informal observations on the user reaction, potential application areas and the main challenges that must be addressed to bring the system closer to deployment.
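The "three-dimensionalization" step, back-projecting a 2D image track into the 3D model, can be illustrated with a pinhole camera and a flat ground plane. The intrinsics, the pose and the planar-ground simplification below are assumptions for the sketch, not the project's algorithm (which intersects rays with the full 3D urban model):

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal length (pixels), principal point.
f, cx, cy = 800.0, 320.0, 240.0
K = np.array([[f, 0.0, cx],
              [0.0, f, cy],
              [0.0, 0.0, 1.0]])

# Hypothetical pose: camera centre 10 m above the origin, looking
# straight down. R maps world directions into camera coordinates.
R = np.array([[1.0, 0.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0]])
C = np.array([0.0, 0.0, 10.0])

def backproject_to_ground(u, v, K, R, C):
    """Intersect the viewing ray through pixel (u, v) with the plane z = 0."""
    d_cam = np.linalg.solve(K, np.array([u, v, 1.0]))  # ray in camera frame
    d_world = R.T @ d_cam                              # ray in world frame
    s = -C[2] / d_world[2]                             # scale to reach z = 0
    return C + s * d_world

# A pedestrian detected at pixel (480, 320) is geolocated on the ground.
ground_pos = backproject_to_ground(480.0, 320.0, K, R, C)
```

This is why real-time camera pose estimation matters: K, R and C must be known before any 2D track can be placed into the 3D scene.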

Parallel Session 9.3 (Room 2103)
Experiences & Case Studies VI

Functions and virtues of spatial data observatories, the French case (163)
Gregoire Feyt, Emmanuel Roux

This article is based on a research study conducted in France from 2008 to 2011 for the Agence Interministérielle pour la Planification Spatiale et l'Attractivité Régionale (DATAR), a French government agency. Qualitative and quantitative analyses are carried out on information obtained from a survey of 185 spatial data observatories, scattered throughout France, to gain insights into the different types of observation facilities that have been set up over the last decade, and the reasons for their proliferation. The study also seeks to determine the different functions of spatial observatories, the actual use made of data obtained, and the changes affecting these characteristics. The scientific hypothesis underlying this work is that the study of the way in which spatial data observatories are set up, used and supervised constitutes a particularly rich and effective approach to better understanding the relationship between territorial knowledge and public action (and vice-versa) as well as the changes affecting this relationship. The study is therefore situated in a transitional zone between political science, spatial sciences, and information sciences.

Factors affecting Geographic Information Systems implementation and use in Healthcare Sector: the Case of OpenHealthMapper in Developing Countries (146)
Zeferino Saugene, Márcia Juvane, Inalda Ernesto [paper: refereed book chapter]

Geographic Information Systems are among the most widely used information technologies for managing spatially related problems, such as those faced by healthcare practitioners in developing countries. Following up on the challenges faced while customising OpenHealthMapper in Malawi and Guinea-Bissau, the paper uses the case of Mozambique to highlight significant differences in the ways geospatial stakeholders approach the issue of geodata. Empirical data illustrates that boundary complexity and weak coordination are behind the problems encountered in the geodata. With an emphasis on the geodata needed to perform healthcare analysis, the article analyses the role of boundary objects and how their quality is influenced by the tensions between the communities managing them. The analysis demonstrates how boundary objects are devices that both maintain relationships and create tensions. Based on Carlile’s knowledge integration framework, the development of an integrated geodata management approach is discussed, i.e., the paper suggests a management mechanism focused on the notions of transfer, translation and transformation, used to conceptualize the role of boundary objects as elements that help to reduce boundary complexity and strengthen coordination among community members.

Role of SDI in index overlay Modeling and fuzzy logic in GIS to predict Malaria outbreak (258)

Malaria is one of the contagious diseases with the highest fatality numbers in the world, and Iran is no exception. As Hormozgan province provides suitable conditions for malaria, Bandar Abbas County was selected as the case study for this research. Notably, anopheles mosquito eggs hatch seven to twenty-one days after laying, so the outbreak of malaria can be predicted, month by month, throughout the year in different regions using the parameters involved in contracting malaria. The main aim of this research is to prepare mathematical models to predict malaria outbreaks using the analytical functions of GIS, so that risk areas can be identified and then controlled with environmental measures. In this research, data gathering considered the factors affecting the epidemiology of malaria (such as temperature, humidity, vegetation, elevation and hydrological features) in order to model the incidence of the disease. The factor maps were then produced and reclassified, and the factors were weighted using pairwise comparison, a part of the analytical hierarchy process (AHP). First, a prediction model was prepared using the index overlay model and a map of risk areas was created. Then each factor was fuzzified using fuzzy membership functions, and a second risk-area map was created by applying the induced rules and fuzzy operators. Finally, the results obtained were compared with reported population and malaria statistics. In this context, SDI can be useful for distributed database sharing, and metadata will be helpful for the management of malaria outbreaks.
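The weighting and overlay steps described in the abstract can be sketched as follows. This is an illustrative example, not the authors' model: AHP priority weights are approximated by averaging the normalized columns of a hypothetical pairwise comparison matrix, and the index overlay score for one map cell is the weighted sum of its reclassified factor scores.

```python
def ahp_weights(pairwise):
    """Approximate AHP priority weights by averaging the normalized
    columns of a pairwise comparison matrix (a common shortcut for
    the principal eigenvector)."""
    n = len(pairwise)
    col_sums = [sum(pairwise[r][c] for r in range(n)) for c in range(n)]
    norm = [[pairwise[r][c] / col_sums[c] for c in range(n)] for r in range(n)]
    return [sum(norm[r]) / n for r in range(n)]


def index_overlay(weights, factor_scores):
    """Index overlay score for one cell: weighted sum of the
    reclassified factor scores."""
    return sum(w * s for w, s in zip(weights, factor_scores))


# Hypothetical 3-factor comparison (e.g. temperature vs. humidity
# vs. elevation); the values are illustrative, not from the paper.
pairwise = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
]
w = ahp_weights(pairwise)            # roughly [0.63, 0.26, 0.11]
risk = index_overlay(w, [8, 5, 2])   # reclassified scores for one cell
```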

Administration and Management of Georeferenced Information in the Institute of Hydrology, Meteorology and Environmental Studies of Colombia - IDEAM (50)
Ruben Dario Mateus Sanabria, Maria Liseth Rodriguez Montenegro, IDEAM, Colombia

The organization and structure of information inside a GIS is one of the principal priorities, because it reveals particular trends and patterns in the data. This structuring consists of organizing data sets of the same type, grouped and arranged so that they represent a specific trend. Its main objective is to provide a logical scheme for data handling and publication. These models correspond to an abstraction of the real world, and they must be organized according to standards designed to let analysts work efficiently.
The implementation is carried out in order to have a dynamic system for an efficient and optimal flow of information. Furthermore, this system aims to instil in analysts a culture of data management for the normalization of the available cartography, which will enable timely decision-making processes. The use of this kind of tool lets administrators keep control over spatial data with a minimal investment of time and of financial and human resources.
These structures are part of an administrative system composed of tools that interact logically and in a coordinated way, with the objective of producing and analyzing geographic information to meet various user requirements. Before consolidating these structures, the organization and the GIS, it is necessary to clarify the objectives and requirements that will improve the information system.
All data must be supported by standards and protocols for their correct acquisition, processing, interchange, diffusion, storage and protection. In this sense, since the implementation of the GIS model of the Institute of Hydrology, Meteorology and Environmental Studies of Colombia - IDEAM, several standards and protocols for spatial information handling have been generated in each of the Institute’s projects. Some of those standards are the following:
• Standard for information organization
• Standard for format handling
• Reference and coordinate system standard
• Standard for working scales handling
• Standard for map presentation
• Standard for documentation of geographical data sets
• Standard for project documentation
• Backup and file security protocol
• Information security policies
The results obtained with the implementation of these tools are: consolidated and organized information, better performance, time and space optimization, improvement in information-handling processes, more information in the data model, and implementation of best practices for information exchange, documentation and dissemination. It is important to mention that this is the first step toward the consolidation of an institutional georeferenced information management system. In addition, it will be linked with the Colombian Spatial Data Infrastructure - ICDE.

Evaluating effectiveness in Semantic Spatial Data Infrastructures (36)

Improving the quality and effectiveness of queries and information retrieval in an SDI is an important issue, considering the increasing number of “casual” as well as traditional users of common SDIs. One of the trends of the web is the Semantic Web, which provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. This paper takes advantage of the Semantic Web to model a Semantic SDI based on the Reference Model RM-ODP, detailing each of its viewpoints (business, information, computation, engineering and technology) to obtain a more comprehensive vision of new semantic Spatial Data Infrastructures. Finally, a measurement of usability to demonstrate its effectiveness is assessed and compared with non-semantic SDIs to show the actual contributions of the model.

Parallel Session 9.4 (Room 2101)
Education and Capacity Building

Contributions to the SDI from Latin American Universities - Some Undertaken Initiatives (247)
Mabel Álvarez, Villie Morocho, Andrea Morales, Zulema Beatriz Rosanigo, Gwyn Jones, Lara López Álvarez

The evolution of the SDI, especially in spatially enabled societies, focuses on society and not solely the traditional users of geospatial information. The dramatic growth and availability of geospatial data, products and services through the Web requires that people have basic ICT skills and knowledge of Web 2.0 tools in order to make the most of them. In summary, thinking of society in a broad sense, and considering that people have basic knowledge of ICT and Web 2.0 tools, with the aim that they benefit from the improved data, products and services that SDIs provide, necessitates that issues such as the following are considered: -The integration of society from the point of view of digital literacy. -Contributions that can be made by formal, non-formal and informal education. -The principal features of the places where the intention is to undertake certain strategic actions. Reference is also made to the role of the International Geospatial Society – IGS. This paper focuses on the Latin American context and discusses some initiatives that are under development or have been developed, while referring to the following: -The creation of the Latin SDI Community, which brings together researchers from Argentina, Bolivia, Chile, Colombia, Costa Rica, Ecuador and Peru, collaborating in academic and scientific aspects of Spatial Data Infrastructure. -The principal results of training activities undertaken in the context of Project A/024521/09 "Training and knowledge management with Web 2.0 tools for university education, administrative and educative management, and continuing professional development in Argentina, Chile and Ecuador". -The creation of a Research Group "Information and Communication Technology and Geospatial Information" at the National University of Patagonia San Juan Bosco, Argentina. The paper concludes with the contributions, results and reflections achieved to date through the described initiatives and proposes courses of action for the future.

New ways of teaching geomatics: online delivery (128)
Brigitte Leblon, Armand LaRocque, María Luz Gil

Spatial Data Infrastructure Capacity-building for stakeholders around Ecologically Sensitive Areas of Western Ghats, India (102)

The Western Ghats is one of the global biodiversity hot-spots declared in 1998. It has a number of protected areas, including 2 biosphere reserves, 14 national parks, several wildlife sanctuaries and many regions declared as Reserve Forests. A large number of plants, amphibians, birds, reptiles and mammals are endemic to this region. In recent times, many of the ecosystems in the Western Ghats have come under threat from anthropogenic pressures such as mining, hydro projects and other development projects. Unfortunately, there is a very large gap in the location-specific information and geospatial data available to a number of stakeholders, including governmental and non-governmental agencies. To fill this gap, we worked on building a Spatial Data Infrastructure (SDI) with readily accessible and useful Free and Open Source geospatial tools for the region and for the stakeholders involved in the conservation and well-being of this global biodiversity hot-spot. A key component of this project was to increase SDI awareness. Four capacity building workshops were conducted at the Indian Institute of Science (IISc), Bangalore; the Kerala Institute for Local Administration (KILA), Thrissur; the Foundation for Ecological Research, Advocacy and Learning (FERAL), Pondicherry; and Bharati Vidyapeeth, Pune, to achieve the goal of SDI awareness. This in turn helped to create a common framework for the exchange of primary data and research results between partner organizations, adding value to information for improving policy development, planning, development control and decision-making. The capacity building program was conducted by the OSGeo-India (http://osgeo.in) chapter, which promotes Free and Open Source Software (FOSS) tools in India, with the support of small grants provided by the Global Spatial Data Infrastructure (GSDI) Association.
Feedback from participants showed that they understood the importance of applying FOSS tools in mechanisms for standardizing spatial data, ensuring its quality, sharing data, and accessing information, metadata and web services concepts, along with a regional-language approach to conservation.

Does SDI need a gender dimension? (31)
Nancy Aguirre

During past GSDI conferences, an apparent male predominance in the keynotes, presentations, discussion sessions, and workshops comprising these events was noted. Does SDI need a gender dimension? Is SDI gender-blind? What are the perceived roles of women actively participating in this community? How do women benefit from SDI initiatives? Women comprise roughly 50% of the global population, with geographically dissimilar educational and professional experiences, particularly at the interface of science, engineering and technology. The main motivating objectives of SDI have included concerns and issues linked to the environment, development, territorial ordering, and disaster management, among many others; but it is less common to find the pressing problems affecting women worldwide underpinning these ventures. These relations with SDI developments have not been sufficiently assessed. For instance, closing the gender gap is central to agricultural development and food security; and most, if not all, of the Millennium Development Goals are explicitly gender-oriented. This paper contributes to answering the above questions. It draws from the growing scholarly work on women in science and technology, but focuses on findings obtained from interviews with open-ended questions and a survey on SDI-gender dimensions conducted among leading organizations and individuals worldwide. Mixed qualitative and quantitative methods are used. Results show that a smaller proportion of women than men are engaged in SDI initiatives in diverse roles, and depict a differential emphasis of goals supporting SDI initiatives globally, with less emphasis on gender-related questions. Hence, recommendations for furthering research on SDI-gender dimensions are proposed.

A Review of the Geosciences and Geospatial Educational System in India: Needs and Perspectives in imparting need-based Training and Education for Better Land Use and Natural Resource Management (104)
Mahender Kotha

Recent developments in the world economy have had a major influence on trends in education in general and geosciences education in particular. India’s growing demand for better utilization of land and natural resources requires reliable collection, integration, management and sharing of spatial information, and the associated education, experience sharing and development of best practices. The focus is now heavily on key areas with a direct link to the human environment, including the exploration and exploitation of new mineral resources, sustainable development and better management of natural resources, environmental awareness, disaster management and so on.
Traditionally, geoscientists are continuously challenged by the need to discover and develop natural resources of metallic ores, industrial minerals, fossil fuels, construction materials and groundwater. With the ever-increasing depletion of natural resources, environmental degradation, ecosystem devastation and natural disasters, the responsibility of geoscientists today extends further: to better manage, conserve and use natural resources and land, and to better predict and manage natural disasters. Knowledge of the number of geo-professionals and their distribution, both in terms of specialization and affiliation, is currently very poor globally, hampering efforts to tackle these issues. There is even greater uncertainty in predicting the number of new geoscience graduates and postgraduates. Increasingly, geoscience-related societal problems are recognized as grand challenges - excellent topics for research efforts that will facilitate, in developing solutions, collaborations outside of science, in the fields of public policy, business, and beyond. To deal with these and many other related issues of the present day, geoscientists in the making require a solid, broad-based foundation in geosciences, with a thorough understanding of their specialization and the ability to handle large amounts of geo-information.
The educational systems have long been under stress at all levels, but momentum toward reform – especially of science education in general and geosciences education in particular – is gathering. While earth observation systems from space and aerial platforms offer a variety of data in the spatial domain, geographic information system techniques provide tools for varied analysis. Data from state-of-the-art Indian remote sensing satellites have been used to generate spatial databases on various themes such as land use/cover, soils, wastelands, wetlands, hydrogeomorphology and coastal landforms in selected government organizations and some commercial firms. However, making better use of these data to create specialized spatial databases for a variety of themes requires fundamental knowledge, which should become part of the curriculum at least at the graduate or post-graduate level.
To ensure the propagation of skills in understanding and managing such issues, it is essential to make substantial changes to geosciences educational systems at various levels. A brief review of the current geosciences/geospatial educational curricula clearly points to certain specific lacunae that must be rectified to build strong geoscience professional careers, ensure durable global competency in geosciences for the future, and in turn promote sustainable economic development both locally and globally.
One of the foremost activities in the reform process is the introduction of a balanced curriculum with an open learning approach, suited to the needs of both the student who is learning and the institute offering the course, without forgetting the requirements of society and industry for better management of natural resources. The trend in geospatial education is moving towards a holistic approach, which does not just focus on profit making but keeps in mind the needs of society and utilizes the best technologies in teaching and learning. The curricula currently offered at various levels at most institutes are mainly academic in nature and lack content related to real-world issues; hence students leave university with very little knowledge of the real-world issues they are going to face. It is now essential to introduce geoscience/geospatial awareness programs at the graduate level and to update curricula to include more applied subjects.
As education in geosciences/geospatial technologies is multifaceted and includes a broad spectrum of activities exposing a wide range of students to scientific principles and practices through discovery- and inquiry-based learning, the introduction of multimedia-based teaching and learning is quite appropriate. Adopting a ‘bottom-up approach’, the present paper focuses mainly on a review of general geosciences/geospatial educational systems, with special reference to the curricula of Indian universities in comparison with those of some developed nations. The paper also suggests a possible structure for a new flexible, module-based curriculum for natural resource management that can be offered at the graduate or post-graduate level, particularly suited to developed and developing nations endowed with vast natural resources.

Parallel Session 9.5 (Room 2104A)
Panel on Supporting Global Geospatial Collaboration
Moderator: Nick Chrisman, Director of GEOIDE

Participants/Panelists:
GEOSS Overview and Latest Developments, tba
The Global Spatial Network: Building International Research Collaborations on a Network to Network Level, Peter Woodgate, Chief Executive Officer, CRCSI Australia
UN Global Geographic Information Management Program, tba
Geographic Information Knowledge Network, Harlan Onsrud, Executive Director, GSDI Association

Parallel Session 9.6 (Room 205C)
3DGeoInfo: Data Infrastructure + Interoperability

Keynote Speaker
Richard Mongeau

A Three Step Procedure for Enriching Augmented Reality games with CityGML 3D Semantic Modeling
Alborz Zamyadi, Jacynthe Pouliot and Yvan Bédard

3D representations are recognized as an essential component of Augmented Reality (AR) oriented applications. However, few AR-oriented applications employ structured 3D data models, despite the existence of standard 3D information models like CityGML. One reason for this shortcoming is the lack of a step-by-step approach for enriching AR-oriented data models with 3D features. Therefore, a three-step procedure is proposed to address this limitation: (1) backward engineering of an AR-oriented application to recover its current data model; (2) enriching the current data model with 3D representation features; and (3) mapping the enriched model to a standard 3D information model. A notable contribution of this work is that the data modeling procedure follows the UModelAR meta-model, which brings a complementary standpoint to 3D geospatial modeling in the adoption of AR environments. Furthermore, the enriched data model has been mapped to the CityGML information model using the CityGML Application Domain Extension (ADE) concept. To demonstrate the feasibility of this approach, an operational mobile AR-oriented game was used as the case study.

Implementation of a National 3D Standard: Case of The Netherlands
Jantien Stoter, Jakob Beetz, Hugo Ledoux, Marcel Reuvers, Rick Klooster, Paul Janssen, Friso Penniga and Sisi Zlatanova

This paper presents the follow-up activities of the 3D Pilot NL, a large collaboration aiming to push 3D developments in the Netherlands. The first phase resulted in a national 3D standard. Some insights obtained during this phase are sufficiently mature to be anchored in practice, such as the maintenance and further development of the 3D standard by Geonovum and the provision of a countrywide 3D midscale base dataset, currently under study at the Kadaster. Other results need further attention in a collaborative setting, specifically how the new 3D standard works in practice. This is currently being explored in a second phase of the 3D Pilot, in which 85 organizations (160 persons) are participating. The goal of the follow-up pilot is more focused than that of the first and aims at writing best practice documents through the joint effort of the 3D Pilot community. The best practice documents are based on tools and techniques being developed to support the implementation of the 3D standard. Specific attention is being paid to how CityGML can be aligned with the standard in the BIM (Building Information Model) domain, IFC.

3D GeoInfo Special Sessions

Parallel Session D.6 (Room 205C)
3DGeoInfo: 3D City Models Infrastructure

Semantic 3D Modeling of Multi-Utility Networks in Cities for Analysis and 3D Visualization
Thomas Becker, Claus Nagel and Thomas H. Kolbe

Precise and comprehensive knowledge about 3D urban space, critical infrastructures, and belowground features is required for simulation and analysis in the fields of urban and environmental planning, city administration, and disaster management. To enable such kinds of applications, geoinformation about the functional, semantic, and topographic aspects of urban features, their mutual dependencies and their interrelations is needed. Substantial work has been done on the modeling and representation of aboveground features in the context of 3D city and building models. Standardized models such as CityGML and IFC, however, lack a rich information model for multiple and different underground structures. In contrast, existing utility network models are commonly tailored to a specific type of commodity, dedicated to serving as as-built documentation, and thus are not suitable for the integrated representation of multiple and different utility infrastructures. Moreover, the mutual relations between networks, as well as their embedding into 3D urban space, are not supported. The Utility Network ADE of CityGML, as proposed in 2011, provides the required concepts and classes for the integration of multi-utility networks into the 3D urban environment. While the core model covers only the topological and topographic representation of network entities, the functional and semantic classification of network objects is now introduced in this paper. This paper will show how concepts and classes can be defined to fulfill the requirements of complex analyses and simulation, and how properties of specific networks can be defined with respect to 3D topography as well as network connectivity and functional aspects.

Integrating Scale and Space in 3D City Models
Jantien Stoter, Hugo Ledoux, Martijn Meijers and Ken Arroyo Ohori

The different levels of detail (LODs) of current 3D city models coexist and are not explicitly linked. This makes the storage, maintenance and analysis of these models not optimal. It is particularly difficult to query through different LODs and to keep different LODs consistent after updating. The extended abstract details this problem, it describes our approach to solve it and explains the benefits of integrating scale and space in 3D data modelling.

Visualization of 3D Building Models in CityGML
Siddique Ullah Baig and Alias Abdul-Rahman

Generally, cities are expanding due to rapid population growth and require 3D city models for effective town planning, communication and disaster management. Rendering 3D scenes directly is often inappropriate, as appearance properties, textures and materials drastically increase the loading time for visualization and spatial analysis. Additionally, different applications or users demand different LoDs (Levels of Detail), so one question that arises is how different LoDs can be made available to these applications. Generating the lower LoDs defined by the OGC standard CityGML from higher LoDs in order to reduce data volume is a generalization problem. Relying only on existing geometry-based generalization approaches can result in the elimination or merging of important features, and hence needs to be avoided. CityGML offers both geometric and semantic properties that can be taken into account while performing generalization operations. A review of generalization algorithms proposed by several researchers is presented. A framework for the automatic generalization and visualization of 3D building models is proposed in this paper, covering both geometric and semantic-based generalization methods. Initially, the XML-based CityGML file is parsed and stored in C++ class objects. The resulting objects contain both geometry and semantic information from the input CityGML dataset. Ground plans are generated and simplified as part of the generalization process. An adaptation of the algorithm of [Mao et al, 2010] is applied for the generation of ground plans, whereas adaptations of the methods of [Sester et al, 2004], extended by [Fan et al, 2009], are applied for the simplification of the generated plans. Results of the generalization processes are visualized using OpenGL libraries. The experiments showed that the repetition of coordinates of connected nodes in CityGML increases both rendering time and memory space.
However, the elimination of important smaller features can be avoided by taking semantic information into account while performing generalization operations.
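The parsing step described above (reading geometry together with semantics from CityGML) can be sketched in a few lines. This is an illustrative Python sketch, not the authors' C++ implementation; the embedded document is a hypothetical minimal CityGML 1.0 building fragment with one ground surface.

```python
import xml.etree.ElementTree as ET

# CityGML 1.0 building and GML namespaces.
NS = {
    "bldg": "http://www.opengis.net/citygml/building/1.0",
    "gml": "http://www.opengis.net/gml",
}

# Hypothetical minimal CityGML fragment: one building with a semantic
# attribute (function) and a 4-point ground surface polygon ring.
SAMPLE = """<bldg:Building
    xmlns:bldg="http://www.opengis.net/citygml/building/1.0"
    xmlns:gml="http://www.opengis.net/gml">
  <bldg:function>residential</bldg:function>
  <bldg:boundedBy>
    <bldg:GroundSurface>
      <gml:posList>0 0 0 10 0 0 10 10 0 0 10 0</gml:posList>
    </bldg:GroundSurface>
  </bldg:boundedBy>
</bldg:Building>"""


def parse_building(xml_text):
    """Return (semantic function, list of (x, y, z) vertices) from a
    CityGML building fragment, assuming 3D coordinates in posList."""
    root = ET.fromstring(xml_text)
    function = root.findtext("bldg:function", namespaces=NS)
    vertices = []
    for pos_list in root.iter("{http://www.opengis.net/gml}posList"):
        vals = [float(v) for v in pos_list.text.split()]
        vertices.extend(zip(vals[0::3], vals[1::3], vals[2::3]))
    return function, vertices


function, vertices = parse_building(SAMPLE)  # "residential", 4 vertices
```

Keeping the semantic attribute alongside the geometry is exactly what allows a generalization step to, for instance, protect surfaces of a given function from being merged away.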

Modeling an Application Domain Extension of CityGML in UML
Linda Van Den Brink, Jantien Stoter and Sisi Zlatanova

Recently a national 3D standard has been established in the Netherlands as a CityGML Application Domain Extension (called IMGeo). In line with the Dutch practice of modeling geo-information in UML, the ADE is modeled using UML class diagrams. However the OGC CityGML specifications do not provide rules or guidance on correctly modeling an ADE in UML. Based on the lessons learnt from developing the CityGML-IMGeo ADE, this paper describes how CityGML can be extended for specific applications starting from the UML diagrams. Several alternatives for modeling ADEs in UML are introduced and compared. The optimal alternative is selected and applied to obtain the national 3D standard. Open issues are described in the conclusions.

Parallel Session E.6 (Room 205C)
3DGeoInfo: Collaborative + Crowdsourcing

Exploring Cultural Heritage Resources in a 3D Collaborative Environment
Arantza Respaldiza, Monica Wachowicz and Antonio Vázquez Hoehne

In addition to research focused on building environments to support collaborative work with cultural heritage information, attention is beginning to be directed to the human aspects of asynchronous collaboration at a distance. A starting point for supporting different-place geocollaboration is provided by the development of web technologies, distributed databases and tools, together with a Spatial Data Infrastructure. This work has considered both metadata and interface issues for serving cultural heritage information through the web, mainly those concerned with how to visually represent the metadata to users. There are currently few applications of 3D virtual reconstruction in cultural heritage and in computer graphics. Reconstruction is capable of showing the spatial-temporal, semantic, symbolic and interpretative relations between the model, the final result, and the interpretation process. The aim of this research is to experiment with a multi-user domain on the web aimed at a multidisciplinary scientific community: historians, archaeologists, experts in the human and social sciences, and communication experts. These developments were reviewed by Forte (2000, 2003; Forte et al. 1997, 2006, 2009). Different hypotheses corresponding to “possible realities” can coexist, showing the reconstruction of the past. All cultural heritage information converges in a virtual scenario on the web where the scientific community can meet and interact in real time, exchange and test hypotheses, share data and simulate different scenarios in order to discuss possible interpretations and methods. The envisaged virtual space will be an editable and dynamic environment in continuous evolution, able to be updated with new information. Therefore, the aim of this paper is to demonstrate how the complexity of cultural heritage resources can be dealt with through visual exploration of their metadata within a 3D collaborative environment.
Towards this end, a metadata visualisation approach is proposed for creating a formal structure for an implicit and explicit representation of the connections and voids between different current domain-specific standards. The Risk Map for Tossa de Mar (Girona, Spain) was used as the case study for the implementation. The Risk Map characterizes the presence and territorial diffusion of historic, cultural and environmental heritage and assesses its vulnerability. Furthermore, the Risk Map observes, describes and assesses the danger levels present in the territory and the pertinent static-structural, ambient-air and anthropic dimensions.

OpenBuildingModels – Towards a Platform for Crowdsourcing Virtual 3D Cities
Matthias Uden and Alexander Zipf

In recent years, the idea of volunteered geographic information (VGI) has developed rapidly and changed the world of GIScience. Most prominently, the OpenStreetMap project is on its way to mapping our world in a level of detail never seen before. Particularly within urban areas, the community's interest is shifting from streets alone towards buildings and further objects of the environment such as parks or street furniture. So far, however, the project mainly deals with 2D content. In order to come closer to the Digital Earth, it needs to be discussed how the 3D aspect can be integrated into VGI projects. In this article, the current situation is reviewed and crucial issues for future development are pointed out. Furthermore, a first prototype of OpenBuildingModels is presented. This web-based platform allows users to interactively upload architectural 3D building models and connect them to the OpenStreetMap database. These models will be integrated into an OSM 3D viewer in the future, thereby greatly enhancing user-generated 3D city models.

Crowdsourcing of Building Interior Models
Julian Rosser, Jeremy Morley and Mike Jackson

Indoor spatial data forms an important foundation for many ubiquitous computing applications. It gives context to users operating location-based applications, provides an important source of documentation of buildings, and can be of value to computer systems where an understanding of the environment is required. Unlike external geographic spaces, no centralised body or agency is charged with collecting or maintaining such information. We take the position that models of building interiors can be volunteered by users of these spaces. The widespread deployment of mobile devices provides a potential tool for rapid model capture and update. Here we introduce some of the issues involved in volunteering building interior data, together with a preliminary method for capturing this information. Indoor data is inherently private in nature; however, these issues and the associated legal considerations are not discussed in detail here.

Closing Session
Jacynthe Pouliot / Umit Isikdag