The Risks of Spatial

A couple of things have occurred recently that brought the topic of risk to mind. One was an interesting chat on Twitter about converting an enterprise installation to an open-source basis; the other was a discussion I had about the development of the GIS at the city, and any contingency plans.

Let's start with a definition:

Risk – noun

  1. The possibility of suffering harm or loss; danger.
  2. A factor, thing, element, or course involving uncertain danger; a hazard.

Just looking at both definitions, I can see a few categories of risks you may encounter related to GIS. Let’s look at a few of them, some examples in each, and then some thoughts on how to mitigate them.
Here are the topics we will cover:

  1. Program related – These risks could be related to staff, budget, program capabilities, performance, etc.
  2. Technology or data related – These risks are related to the technology backing the GIS, the data, software, or apps used for analysis and visualization.
  3. Operational – These are process related risks and can occur within the program, between the program and other departments, or with technology integrations that are not wholly controlled by GIS/IT staff.
  4. Tying the Bow – Recap and final thoughts.

Program Risks

These risks are related to the GIS as a whole, in whatever configuration that may be, from a single person, to a whole department, and all that may encompass. Here are a few risks that could arise with a GIS program:

Loss of Key staff and institutional knowledge:

If one or more people leave, like a manager or long-term staff, a lot of knowledge about the system is going to leave with them. If this information is not documented, then whoever is left could be in a precarious position that could take a lot of time to recover from.


For any program, you need to know the basic structure of staff, technology, and data. You also need to know how to get into everything. It is critical that you have a designated location where you place a regularly updated document that breaks down the structure of the department, all key locations of technology, data, and applications, and, most importantly, the access credentials to get in. There is nothing worse than ending up locked out of a system because no one thought about it. This item was brought up to me in a recent discussion about the program that I have put together. I was asked what would happen if I theoretically were hit by a bus, or something catastrophic happened. I'll admit that I was caught pretty flat-footed, and I am currently working to change this. This article is an outgrowth of that, as it occurred to me that others may face a similar situation, and identifying some of these risks and how to mitigate them may be helpful.
Along with a structural document, you need to have an up-to-date data dictionary and metadata. This should list all the data layers in your database, along with the source, the update process, and the update frequency for each layer.
One of the last items is a list of any connections and integrations with other city software. These could be finance, planning, or asset management related. Of particular importance here are the appropriate points of contact for each element on this list. Someone coming in new, or non-GIS staff trying to step in, isn't going to know where anything is or what is available, so having this for continuity is extremely important.
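As a minimal sketch of what such a data dictionary could look like, here is a hypothetical structure in Python; every layer name, source, and contact below is a made-up example, not a prescribed schema:

```python
# Hypothetical data-dictionary entries: one per layer, recording the source,
# update cadence, and a point of contact. All values are illustrative.
REQUIRED_FIELDS = {"source", "update_frequency", "contact"}

data_dictionary = {
    "parcels": {
        "source": "County Assessor export",
        "update_frequency": "weekly",
        "contact": "assessor-gis@example.gov",
    },
    "water_mains": {
        "source": "Public Works asset management system",
        "update_frequency": "nightly sync",
        "contact": "pw-admin@example.gov",
    },
}

def missing_metadata(dictionary):
    """Return layer names whose entries are missing any required field."""
    return sorted(
        layer for layer, meta in dictionary.items()
        if not REQUIRED_FIELDS.issubset(meta)
    )

print(missing_metadata(data_dictionary))  # → []
```

Even something this simple, kept current and stored alongside the structural document, gives a successor a checklist of what exists and who to call about it.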

Budget Reduction:

This is never a fun eventuality to consider, but recent events with COVID-19 should make it clear that some serious financial revisions could be in the offing. Of course, there is a huge range that this could cover, from the loss of an entire department to some percentage cut due to an anticipated revenue decrease. Since we are talking about risks, let's go all the way down to the individual level. If you, as a manager, see the storm clouds brewing, how should you prepare? You need to personally prepare for the worst, as strange things can happen when finances enter the mix. At the very least, you had best have your resume up to date. Beyond that, it is all individual, and depends on your levels of both comfort and satisfaction in your current position. Perhaps you want to put feelers out to see if it is worth leaving sooner, or you just want to ride it out. Yes, I know that some people will say this is anathema to managing risk for a GIS program at a city, but in the end, the city will go on, and the operations will go on, whether you are there or not. This is not to say you should leave the program unprepared, and we will go into that next, but make sure you are looking out for your career.


  • Cross-train everyone: If you have more than a one- or two-person department, you are going to have some degree of specialization; it just happens. What you don't want, though, is to have everyone so into their niche that they know nothing about the system overall, or how the different components work together. This doesn't have to be in-depth knowledge, but enough that they could come in one day and begin to troubleshoot an issue, or restart an app that happened to go down. This is really critical if you are a smaller team, as one person leaving may mean a substantial amount of institutional knowledge walking out the door. So, make sure everyone knows what is on the various servers, if you have more than one. They should know how to get into the main database and make changes if necessary. The same can be said for all the apps and integrations comprising the system. Perhaps not a working knowledge, but at least a strong familiarity will help prevent a lockdown should one or more people have to leave unexpectedly.
  • Keep it lean: I know that overstaffing is not generally an issue with GIS departments. That being said, make sure the staff you do have are fully utilized. At the same time, make sure that you are keeping your overhead to a minimum. Know what your actual budgetary needs are, and don't throw in too much extra. Cover the basics: software costs, hardware upgrades, maintenance, training, etc., but don't let it get out of hand. If you are putting in extra for a consultant to come in every year, you had best be able to justify the need for them: what they are helping accomplish, what value they are providing, and how they make you better. The more vague line items that don't seem tied to a specific task or process, the more likely someone is to take a hard look at your budget if a crisis comes around. If you have items in the budget for things like training or conference attendance, make sure you use them. This will show two things: 1. You are not putting items in the budget just to have them and have extra money in your department. 2. You care about expanding the capabilities of yourself and your staff. This is important when things get tight, to show that your staff are invaluable resources who are actively increasing their skill sets. This makes them even more valuable, as that additional training is far more difficult to replace with a new hire should they have to be let go due to budget. It also helps with the next potential risk to the program.

Loss of confidence by senior management:

This is pretty self-explanatory, but here's a bit more detail. If you sense that the GIS program is not getting the same attention from senior management, or there is a lack of use of apps or data by those staff, you may be running into this issue. The problem is that it may then trigger the other risks discussed above to become realities. How could this happen, you ask? It really boils down to a lack of communication, either of needs or capabilities, or both. If management isn't aware of what the program can do, or what data and applications are available to assist others in performing their job functions, then two things can happen: 1. They start to forget about it. 2. They start to wonder what the point of having a GIS is, especially if it is a budget-heavy item, as can happen.


The first step to fixing this, as mentioned above, boils down to communication. You need to be working with management to not only ensure they know about the program, but to make sure you are helping address their frustrations and their goals. It could be anything from some information that they are trying to find but can't, to a particular question that keeps coming up that may be easily answered with a spatial component. On the other hand, every manager has goals or a vision for the organization. Whatever your ideas about the GIS, ultimately you work for, and need to support, the organization. If there is a mission statement, what are you doing to support that statement? If there are pet projects that senior management is interested in, how can GIS be utilized? Whatever these items happen to be, you need to be actively searching them out and presenting potential solutions. At the same time, it is important to show the general capabilities of the GIS, which support the organization as a whole. If you can show that you are supporting the rest of the organization in a quantifiable way, it will show that the GIS department is a crucial area that needs to be maintained.
I will be the first to admit that this area can be challenging, as you have to get away from your computer, and go interact with others. It probably shouldn’t need to be stated that this is important not only for the health of the GIS program, but also your own personal development and career. I know for me, sticking to my computer and immersing myself in data, is my happy place. Some structural changes at work pushed me to become more involved in some citywide coordination efforts, and it has been extremely positive for both me, and the GIS program. Don’t get caught in my same trap. Get out there. You owe it to yourself, and more importantly, to your staff, if you have them.
If you've read the above, you may be wondering how to do this. Again, you have to reach out. Managers tend to be very busy managing, and aren't going to do it for you. If you want to get their attention, do some research about a problem they are having, get the solution done on your own, and then set up a meeting to present it. That can often lead into a larger discussion of issues and goals, which you could then follow up on for a longer-term strategy.

System breach:

If a trojan or some other breach occurs, it could seriously affect your program through loss of hardware, data, or both. If it is ransomware, you could be in the position of having to rebuild your entire system, hopefully from backup, but that is not guaranteed. This is going to take significant staff resources, time, and possibly cost for specialized assistance, which in turn may cause delays in providing services and/or meeting deadlines.


  1. Have recovery processes: Yes, the first step to recovering from something like this, is to have planned for something like this to happen. I would even suggest spending some time a couple times a year to run a test scenario where you have to recover your data and applications. See how your staff responds and make changes to the processes as necessary. Part of this process should be documentation of the system as well as placement of critical information in a secure yet accessible area.
  2. Plan for prevention: This should take the form of training for all staff using GIS applications, data, and software. A lot of the basic security rules should be coming from IT, but there are plenty of basics that can be taught at any level: password security, recognizing phishing scams, and other email scams. Beyond that, again emphasize cross-training of staff in the different areas of the GIS. This may even go so far as bringing in staff from other departments for this sort of training, and it definitely should include IT staff. There should be knowledge of where critical data and applications are located, along with the backup and restore processes.
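One small, scriptable piece of such a recovery drill is verifying that your backups actually exist and are recent before you ever need them. Here is a sketch; the directory layout, `*.bak` pattern, and 24-hour threshold are all assumptions for illustration:

```python
import time
from pathlib import Path

def stale_backups(backup_dir, max_age_hours=24, pattern="*.bak"):
    """Return names of backup files older than the threshold.

    An empty list means every matching backup is fresh enough;
    anything returned is a red flag for the recovery drill.
    """
    cutoff = time.time() - max_age_hours * 3600
    return sorted(
        p.name for p in Path(backup_dir).glob(pattern)
        if p.stat().st_mtime < cutoff
    )
```

A check like this could run on a schedule and alert someone when it returns anything, so a drill (or a real incident) never starts with the discovery that the last good backup is weeks old.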

Ideally, addressing all of these risks will tie together, and provide you with a comprehensive management plan for the GIS program, its maintenance, strategies, and goals.

Technology and data related

This section will cover all the things that could happen with your technology stack, your data, or your presentation methods. The risks largely fall into the groups below. You will see the budget is repeated from the first section. This is because budget issues can affect your tech stack differently or more directly than the GIS program as a whole.

Data Loss

Any time you are dealing with technology, servers, databases, software, etc, you are at risk of losing data. Things crash, sometimes for no reason, hardware fails, networks go down, and so on and so forth. You have to prepare for that eventuality since, as has been said time and again, it isn’t if it will happen, but when it will happen. Let’s look at the tree of disaster, and how to fix it:

Network failure:

If your network goes down, or your internet connection goes down, the whole system may drag to a standstill. If you can’t access your servers, then you are stuck until the issue is resolved. The network portion may be easier, depending on what happened.


One solution is to have much of your data mirrored into some form of cloud-based storage, potentially at an off-site location. This would allow people to still access data through the internet, even if some portions of the network are down. It also gives you flexibility, since you choose what goes there. You may choose to have particular datasets located in the cloud, either stand-alone or synced from your local data. You could serve all of your web-based applications from this location, or use it as an off-site mirror of your data. These are all options that can provide access to data if your internal network has issues. Alongside this flexibility, it can be more efficient from a maintenance standpoint if you use a service that handles the server and system software level issues, so you only have to deal with data-related software. Cost becomes a factor because, at some level, you only pay for what you are using. Given how cheap storage, and even computing, has become, this can be significantly cheaper than maintaining the same data and applications in-house.
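Whatever cloud service you choose, a mirroring job usually boils down to deciding which local files have changed since the last sync. One way to sketch that decision is with content checksums against a manifest of what the mirror already holds; the manifest shape here is an assumption, not any particular provider's API:

```python
import hashlib
from pathlib import Path

def file_digest(path):
    """SHA-256 of a file's contents, read in chunks to handle large datasets."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def files_to_upload(local_dir, remote_manifest):
    """Compare local files against a {filename: digest} manifest of the mirror.

    Returns names that are new or whose contents have changed, i.e. the
    minimal set the sync job needs to push this run.
    """
    changed = []
    for p in sorted(Path(local_dir).iterdir()):
        if p.is_file() and remote_manifest.get(p.name) != file_digest(p):
            changed.append(p.name)
    return changed
```

In practice the upload itself would go through your provider's SDK or CLI, but keeping the "what changed" logic separate like this makes the sync cheap to run frequently.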

Server failure:

If your server goes down, you may lose some or all of your data. These never go down at an opportune time, and generally the less convenient, the worse the failure is. You may consider that a scientific fact!


The first and easiest solution is possible because of the advances in tech we have seen over the last 3-5 years. There has been a shift in terminology, to where the server refers to the physical box, and the actual system you log into is a virtual machine hosted on the server. You may have one or many of these, with expandable resources as necessary. The beauty of this is that if the crash happens at the level of the virtual machine, it is often easy to simply restart it, and you are back running again. You may also have another instance created as a failover: if your main GIS server goes down, everything switches over to the secondary one. That sounds great, but of course GIS makes things complicated with specific file paths and the like. This is not insurmountable; it just takes some work to configure and set up properly.

Database failure:

This problem is similar to the above in that the database going down may cause data loss, depending on the circumstances. This could specifically happen if processes that modify or update tables were running at the time of the crash.


A crash of this sort may be recovered from by following the procedures to maintain database logs, which record the actions of the database. These logs may allow the database to be rebuilt from its last saved state, rerunning the operations up to what was last completed. This, of course, is just getting the database back up; you still need a solution while it is down. The needs here really depend on how your GIS is structured. If you have many people or applications connecting directly to the database, you will need some sort of failover set up, where another database, possibly a mirror, can simply take over. This functionality is definitely included in PostgreSQL, and is likely in the other major RDBMS as well. It will be critical to minimize downtime of staff and applications. Another time this is critical is when data is being written to the database by an application, like a web form, for instance.

On the other hand, if your GIS is structured in such a way that the database connections are passive, i.e., they run on a set schedule to update data in other applications or other locations, it may be easier to deal with a database failure. In this case, you may simply delay the next update if it is set to take place relatively soon. A case like this may make it easier to restore the database as well, if it won't start back up using the existing data. If you are passively updating other applications, and no data is written directly to the database, you may simply choose to restore from the latest data backup on your server. This is most efficient if you know you haven't made any changes since that time, or are comfortable losing those changes. Now that we've talked about the database, let's move on to other types of software failures.
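The client side of that failover can be sketched independently of any particular RDBMS: try the primary, and fall back to the standby if the connection fails. The connect functions below are stand-ins; in a real deployment each would wrap your driver's connect call (psycopg2 for PostgreSQL, for example), and you would catch driver-specific errors rather than bare `Exception`:

```python
def connect_with_failover(connectors):
    """Try each (name, connect_fn) pair in priority order.

    Returns (name, connection) for the first candidate that succeeds,
    so callers know which host is actually serving them. Raises
    ConnectionError only if every candidate fails.
    """
    errors = {}
    for name, connect_fn in connectors:
        try:
            return name, connect_fn()
        except Exception as exc:  # a real version would catch driver-specific errors
            errors[name] = exc
    raise ConnectionError(f"all candidates failed: {errors}")
```

Applications built this way keep working through a primary outage with nothing more than a slower first connection, which buys you time to restore or promote the mirror properly.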

Software failure:

I am treating this separately from the database software because I feel the database is more integral to the server, while other software is more individual. When these types of software failures occur, it is really all about saving. How often did you save your edits? How often is your data backed up? Were you working off the production copy of your data? You need a plan for handling software failures, because inevitably your software is going to crash, or you are going to find a bug.


It is important to note that crashes and bugs may have different recovery needs. A software crash may be related to a particular project or the hardware you are using. Many times, simply restarting the software, or the computer, is going to be enough to get past whatever condition caused the crash. In this case, you need to ensure you have been saving often, both with whatever project you were working on and with any data that was being created or edited. A bug, by contrast, generally reveals itself through a crash that happens again and again when you restart and get back to the place where it occurred. In this situation, you need to record the circumstances of the crash to either submit a bug report or open a support ticket if possible. Once you do that, you need to decide if you can wait for a response or resolution, or if you need to find an immediate way around the problem and continue on. Nine times out of ten, the latter is going to be the choice. Crashes and bugs never happen at a convenient time; they always happen in the middle of a project, or when you are on a deadline, so of course you need to get back to work as soon as you can. Your options in this case are to either work on a different portion of your project, or employ a different method to perform whatever task uncovered the bug. This may mean using different software, or a different process to accomplish that task, whether in the same software or another.
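Since the practical defense against a crash is frequent saving, the save policy itself can be made explicit rather than left to habit. A tiny sketch: checkpoint when either enough time has passed or enough edits have piled up, whichever comes first. Both thresholds below are arbitrary examples, not recommendations:

```python
def should_checkpoint(seconds_since_save, edits_since_save,
                      max_seconds=300, max_edits=25):
    """Decide whether it is time to save.

    Checkpoints after 5 minutes or 25 unsaved edits, whichever
    comes first; tune both thresholds to how painful a redo would be.
    """
    return seconds_since_save >= max_seconds or edits_since_save >= max_edits
```

Most desktop GIS packages have an autosave or edit-session interval setting that amounts to the same rule; the point is to choose the thresholds deliberately based on how much work you can afford to lose.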

Technology Obsolescence

Any technology that we use is becoming obsolete almost from the first use. Software is continually being developed. Whether that represents improvement or not is an open question, but that is a discussion for another time. Hardware continues to increase in speed and capacity. New hardware technologies are being developed to improve connectivity, workflows, communication, etc. With all of this, there is a risk of falling behind. Perhaps I should clarify that this is a perceived risk: if you have purchased software and hardware, and it is installed and working to fulfill your business needs, your actual risk is fairly small. The amount of risk depends largely on how connected your systems are. If the GIS at your organization is tightly integrated into other systems, you are going to have to ensure that this integration is maintained as each component receives upgrades. This brings with it risks on both the hardware and software sides of the equation. Let's look at each:


When your GIS has strong ties to other software platforms, upgrades will eventually leave some components out of date, or incapable of handling the loads the software places upon them. A few places where this occurs are storage capacity, bandwidth, and load management. Storage is probably the easiest to address, as there are many solutions for large amounts of additional storage. When you are dealing with a server, though, you can't take storage in a vacuum. You have to look at it in the context of the rest of the server hardware, the ability to transfer data across the network, and the speed of access from the server processor to the stored data.


Come up with a short- and long-term plan for software, and then map out the hardware required to support it. This will help you see the gaps in hardware as the software is updated, and where there are going to be processing and/or bandwidth bottlenecks. Once you see these, it will be easier to plan upgrades to these components and, at the same time, plan for additional storage. As mentioned above, the increase in the use of virtual servers gives a lot more flexibility for handling server needs. Instead of purchasing a number of servers to handle different needs, you may instead purchase a much larger single appliance with a large amount of processing power and storage, then spin up as many virtual servers as necessary to support your software. This allows for more efficient resource allocation in both processing and storage.


As with any other software, the database that you are using is hopefully going to be improving and adding features as time goes along. If you have a tightly integrated system, you are going to need to ensure that your database maintains any necessary compatibility with other systems. Over time, it is entirely possible that one or more of the systems you use or integrate with will reach a point where it requires a major upgrade to continue, or is no longer going to be supported. In either of these two scenarios, moving to a new system is likely going to require a change to your database.


The software you are using should be on a regular update cycle. The most common modern GIS packages are under active development which provides regular updates with bug fixes and new features.


Two things that will help mitigate this sort of situation are keeping all of your software and database software current, and using software which supports data standards, and a database which adheres to the SQL standard.

  1. Software and database updates: If you are on a regular testing and update process with your database and software, you will probably not run into a situation where something gets completely obsolete. You should be able to keep abreast of upcoming changes to your database and test to see how they will impact your integrated software. You will then have to make the determination of whether to switch to the new database version or hold off due to an issue with your other software. The goal here will be to remedy any software issues to be able to maintain your database update cycle. These potential software interactions highlight the reason to organize and store your data in standards-based software.
  2. Software and database standards: If your software adheres to a set of standards, and your database software does as well, then there will be higher likelihood that they will work together successfully over a longer period of time. This is because as the software or database are upgraded, they still must maintain their adherence to the standards, ensuring they will work with other packages which use the same standards. At the same time, if you have a database which strongly adheres to the SQL standard, then if something happens which forces you to change your database software, it will be much more likely you can easily transfer your data to a new database, and have the new database integrate with your other software.


Budget

While the budget will impact a department overall, even smaller budget cuts can have a significant impact on the technology you are able to employ. A budget cut in a particular year may make the difference between an upgraded server or keeping an older one, or moving to a newer database platform and thus newer integration options. As with any budgeting, a single year where there is some fluctuation may not cause a problem. Where it becomes more serious is in the case of a potential recession, where you may see your organization's revenue being affected for multiple years. This can cause cumulative effects to occur in all three technology aspects. The other part of the budget to be concerned about is unanticipated cost changes. Here are what I see being the biggest issues, and some mitigation options:

  • Hardware: The biggest budget issue related to hardware is if you have a significant end of life issue coming up. This could be either with user workstations or with servers. There are a couple big problems:
    1. Performance issues: If it has been a while since you upgraded, you may start seeing issues with users trying to do their work, and/or more frequent server issues related to lack of memory or processor capability. This can be pretty serious, as it puts more pressure on other components, especially if you have unexpected crashes. Mitigation: This situation is trickier to handle, as the root issue is that you need new hardware but can't swing that in the budget. The long-term play is to have an overall budgeting solution where you build a replacement fund; this is discussed more below. That may run into the hard wall of reality if you have larger budget issues for a couple of years that cause the hardware upgrades to be pushed out. This is where you may start to see performance issues and need a short-term solution. There are a couple of options.
    • Purchase server space: Perhaps on Amazon AWS or Microsoft Azure Cloud or something similar. These give you the benefit of sizing a server to match your needs. You set up security such that it allows secure connections to your other applications and hardware, load the data, and get started. The benefits are that you have a very scalable system, that is pretty cost competitive, especially in the short term. Of course, there is always a downside, and in this case, it means more configuration. You have to set this up, get data to it, and make sure it is going to work with all the other applications that you may have connected.
      • As I sit here, another option comes to mind. If you are in a situation where you have a single server acting as a primary data source, with multiple applications and/or staff that connect, performance is to a degree going to depend on the number of people connecting to the database. If you have your existing server and maybe another one that is not being fully utilized, you might consider replicating your database between two servers. Have one set up for staff or user connections. Have the other set up for application and processing work. This would split the load and potentially relieve a resource bottleneck. This is definitely possible at the database level with PostgreSQL, and is likely an option with other RDBMS as well. There may be licensing issues depending on what you are running, as some of those are tied to server processor cores, etc. If you are on an Esri ELA, it may not be as much of an issue though I don’t know the constraints. In the end, if it is a matter of licensing for a couple of years, vs a big outlay for server hardware that isn’t available, this may be less of a stretch.
    2. Upgrade restrictions: If your hardware is too far out of spec, you may be precluded from updating some of your software. If you are on an upgrade path that is leading to new versions of a bunch of software, but it is going to require new hardware, then a budget cut at this point is going to be a serious slowdown.
    Mitigation: Build a replacement fund. Instead of simply allocating budget in a particular year to do a replacement, allocate funds in multiple budget years and divert them to a fund. Let the fund build up year over year until it is large enough to pay for the hardware replacement. This will insulate you from a budget cut in the year you were supposed to do a hardware replacement because the money is already set aside and saved from future years.
  • Database: The biggest part of a budget, in many cases, is software licensing. This relates as much to the database as it does to the desktop software that staff are using. Depending on the database and GIS software you are using, there could be two levels of licensing to be aware of. The first is for the database itself; this could be MS SQL Server, Oracle, or a couple of others. The second is for the spatial abstraction layer that runs on top of the database; for Esri, this would be ArcServer, or whatever geodatabase product you are running. In times of budget cuts, the view always focuses on the largest costs, as those will provide the most relief. Let's look at each of these parts to see what could happen:
    1. Database licensing: This tends to cover two different parts: the sizing of the database, and the server infrastructure, i.e., processor cores, it is running on. Budget control is a two-way street: you have to look at what your budget is now and what it may be, as well as what your licensing fees are and what they could be. Whether you are looking at a budget crunch caused by a revenue shortfall or by increases in licensing costs, your risks are essentially the same. If you look to cut down the licensing in some way, you are going to deal with a combination of reduced data capacity and/or reduced performance. This is something that I don't feel is likely to happen, as server software and database licensing tends to be considered critical, and thus must be maintained. On the other hand, licensing can get expensive. If you have taken the time to put together a backup RDBMS in something like PostgreSQL, as discussed below, questions may be asked about why not make that the primary, duplicate it as a backup, and eliminate the licensing altogether. You should be prepared to defend your choice, regardless of which option you are leaning toward. Mitigation:
      • Make sure you are keeping your database as lean as possible. Don’t keep unnecessary data that is not actively updated or frequently accessed. Ensure that any data in your database is constrained to your jurisdiction or project area. Keep reference data in a file-based storage format. See my post on GIS for Small Cities – Data Storage. If you know you have a lean database, it should be easier to justify maintaining your licensing at the current level. Just be aware that you may be constrained for future expansion if you are reaching the limits of the license.
      • The other way is to have some form of backup of your database located in an open-source RDBMS like PostgreSQL. This takes the hassle out of licensing and allows you to use as much space and/or as many server resources as necessary. If you are using this as a backup, you can ensure that any connections you have to your production database, from users or other applications, can be replicated with this backup. If a situation arose where your database license was going to be cut, you simply switch to your PostgreSQL database, make it the production DB, and never look back.
    2. Spatial Data software licensing: This is going to be your ArcServer or Oracle Spatial licensing, or something else. As with the database software itself, licensing tends to be based on the physical aspects of the server where it is installed, and potentially the size of the database. If you are using the Esri platform, this could range from a simple implementation of SDE on a database, all the way up to Portal with multiple servers, users, applications and everything else involved. The licensing may be comprehensive as part of an overall agreement, but the SDE software will likely be called out separately in some form. If you have an established GIS program, then this licensing should be well accounted for and budgeted for. If you are a newer program and are not fully implemented, you may have a licensing contract that includes parts you are not using though you may plan to in the future. It could be a situation where it would cost more to not include them and instead purchase licenses for components separately, than to bring it all together even though you won’t use some portions initially. Mitigation: In the event that these licenses do come into question and need to be modified, there are a couple ways to approach a solution.
      • Review all server level licensing to consolidate as much as possible. If you have many servers with similar products on them but a narrow usage window, does it make sense, based on the capability of the servers, to combine some of these functions? Look for opportunities to do this especially for applications that may not be heavily used, but have been separated for some reason from the primary database. Do this for all levels of GIS server software. Ideally, you have a single primary datasource, with applications grouped to access that data as necessary.
      • Explore areas where some commercial software may be replaced with an open-source equivalent. Perhaps you have a replicated database that is running software to provide an open-data platform. Is this something that needs to be licensed, or is there an opportunity to implement an open-source version that provides the same capabilities? There may be a time cost to set this up that may offset some of the software licensing savings, but it also may end up being cheaper over multiple years of maintaining the application. This is just one area to explore, but there are certainly open-source equivalents for a variety of commercial software that provide equal or, in some cases, greater functionality, and the cost is only for implementation. If you are going to experience a budget cut, and were planning some new application deployment, that is an opportunity to explore open-source as well. If you are going to be spending time to implement an application, in many cases, the implementation time isn’t that much different for commercial vs open-source. Given that, the only cost difference becomes the license cost savings. There are likely many of these where the question should almost be, “Why not utilize the open-source option?”
  • Software: Software licensing is always the kicker here too. When the budget gets cut, what is the biggest cost that will provide the most space in a budget? This is likely to be a huge focus. I really have 2 comments about this:
    1. Match your software licensing to your users: Give the ones who need advanced editing capabilities, the advanced licenses. Give those who need some editing, but mostly viewing, a moderate license. For those who only need to view data, give them a data viewer. This could be a web-app in many situations, or a pared down software version.
    2. If you are just viewing data and doing analysis, you shouldn’t be paying for a license: ArcMap and QGIS are basically at feature parity, with QGIS having many functions available that are limited to additional-cost extensions for Esri. QGIS reads any Esri data format, and is able to perform analysis based on these layers. If you are in a licensing crunch, remove any viewer- and analysis-level licenses, and replace them with QGIS. This is not just fiscally responsible, but also risk management. QGIS will never stop supporting the range of data formats it currently does. If you are forced to make a major change in database software, or versions, your commercial software may not support that, or may require an advanced level of licensing to unlock that ability. Don’t put yourself in the position of potentially getting cut off from your data. For that matter, QGIS should be a fall-back for even power users, as it will at least allow for viewing and map creation until a different solution is figured out if necessary. Finally, even though I advocate for an open-source fallback, there are some applications built on a commercial platform that don’t have an alternative. You need to make sure you protect your ability to license those as required, by reducing other unnecessary licensing that adds budget pressure.
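The fallback idea from the mitigation list above, keeping a parallel open-source copy of the database so you can switch to it if licensing gets cut, can be sketched as a simple connection helper. This is a minimal sketch: SQLite files stand in for the licensed production database and the PostgreSQL backup so the example is runnable, and the file names are hypothetical.

```python
import sqlite3

def connect_with_fallback(primary_path, backup_path):
    """Try the production database first; fall back to the backup copy.

    In a real setup these would be connections to the licensed RDBMS and a
    PostgreSQL replica; SQLite stands in here so the sketch is runnable.
    """
    for path in (primary_path, backup_path):
        try:
            # mode=rw refuses to create a new file, so a missing/unreachable
            # database raises instead of silently creating an empty one.
            conn = sqlite3.connect(f"file:{path}?mode=rw", uri=True)
            return conn, path
        except sqlite3.OperationalError:
            continue
    raise RuntimeError("Neither primary nor backup database is reachable")

# Demo: only the backup exists, so the helper falls back to it.
backup = sqlite3.connect("backup_gis.db")
backup.execute("CREATE TABLE IF NOT EXISTS parcels (id INTEGER, apn TEXT)")
backup.commit()
backup.close()

conn, used = connect_with_fallback("primary_gis.db", "backup_gis.db")
print(used)  # falls back to backup_gis.db when the primary is unavailable
conn.close()
```

The point of the helper is that users and applications never hard-code one database; the switch to the backup is a configuration change, not a rewrite.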

Data Breach:

A data breach is going to have an effect on all parts of your program, so let’s talk about the technological implications. The depth of a breach will determine the amount of impact on your systems. This could be as deeply reaching as having to completely rebuild servers, and the data they contained, from the ground up. There may also be impacts upon the entire IT apparatus at your organization as questions may be raised of how a breach was able to happen, what was compromised, and how to prevent this from happening again. Until these questions are answered to a satisfactory degree, you may have to deal with a much higher level of security, and a locked-down system. This may affect the way that applications work as they try to access data in the GIS. It will then have a corresponding effect on users as they either deal with applications running more slowly than before, or have to work through additional layers of security to even access these applications.

  1. Harden your system: This may seem obvious, yet I don’t think it always is. I feel like there can be a lot of conflict between GIS and IT. I think some of it is because IT doesn’t understand what is in a GIS, and the hardware and software needs that go along with it. Geo-related staff then get frustrated by push-back on requests that, while they may seem crazy, are simply what is needed to run the system. Anyway, this needs to stop, due to exactly this risk right here. If you work with spatial, there are a lot of things you may be good at, but you can’t be good at them all. You need to make sure to be working with IT to ensure that the hardware, software, apps, and data you are utilizing aren’t causing security holes in your organization’s network. Let IT do what they are good at, handling the hardware, server software, network, security, etc., to tie everything together. Keep the GIS locked down until you need to open something up. When you open something, do it in a targeted fashion, allowing the least amount of access necessary. Make sure that your software is all up to date, and ensure that IT does the same on your servers, appliances, etc.
  2. Stick with the standards: This may seem unrelated, but let’s think it through. Standards are developed in a way that they are widely applicable. Part of this is making sure they support security for whatever technology they implement. In some cases, the security is inherent to the standard, in others, the standard allows for a security layer to be wrapped around it. Either way, it is there, and your technology is going to work with it, not requiring some edge case workarounds to function.


Operational Risks

Now that we’ve tackled risks that are directly related to the GIS program, whether the staff or technology involved, we have to look at some other risks outside program control. I call these operational since they affect the operation of the department, both on its own and within the context of the rest of the organization. Here are a few examples of operational risks, and how to mitigate them:

Change to organizational structure:

This could be a change to the org chart of an entity, from moving some or all staff, or relocating the GIS department. This could then change how staff interact in the department and with other departments.
The main risks, if you have a lot of people shifted around or lose staff to different departments, are the loss of institutional knowledge related to the GIS. This could be either knowledge of the data, or of the applications. These risks are much the same as the general risks a GIS program faces that were listed in the first section.


  1. Ensure that you have good documentation of your data and applications that you use. If you are using external applications, make sure you have good information on administration of the product. This is going to include a thorough and well-organized data dictionary. It should also include a comprehensive list of all applications used in the department, as well as information about what data layers they interact with, and complete login information. As a matter of fact, you really need to have a comprehensive list of login information for all databases and applications, including superuser/administrators. There should also be documentation of how to gain access to a system should logins be lost.
  2. Cross-train staff. There is not a lot of specialization among GIS staff. I know people will respond by saying that they are a programmer who does some GIS, or works with spatial data, but for the most part, people in a GIS department are going to understand the fundamentals of working with a GIS. If you consider this to be somewhat of a given, then you should make sure that everyone in the department is familiar with the majority of the applications and processes. This is going to keep you out of a situation where a reorganization occurs, and you lose the one person who knew the big application server to a different department. Hopefully if that were to happen there would either be some transition time, or, since you are all still GIS staff, the ability to call them up for help while someone else is learning the system, but that is not a guarantee. You have to prepare for this ahead of time. Honestly, even the more programmer-centric staff on your team should be cross-trained on other applications and processes. Trust me when I say this will be as much to their benefit as to the department as a whole. Nothing helps you create the design or program for something like having a deeper understanding of what it will be used for, the data being put in, and the final product being returned. Give your staff a wide base even as they then start to specialize.
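One way to keep that documentation from going stale is to generate the human-readable inventory from a single structured source instead of hand-editing it. A minimal sketch; the application records below are made-up examples, and note that the credentials field points at a vault entry, never the password itself:

```python
# Build a plain-text department inventory from one structured source,
# so the documentation is regenerated rather than hand-edited.
apps = [
    {"name": "Parcel Viewer", "layers": ["parcels", "zoning"],
     "admin_doc": "wiki/parcel-viewer", "credentials": "vault entry GIS-01"},
    {"name": "Asset Tracker", "layers": ["water_mains", "hydrants"],
     "admin_doc": "wiki/asset-tracker", "credentials": "vault entry GIS-02"},
]

def build_inventory(records):
    lines = ["DEPARTMENT APPLICATION INVENTORY", ""]
    for app in records:
        lines.append(app["name"])
        lines.append(f"  layers: {', '.join(app['layers'])}")
        lines.append(f"  admin docs: {app['admin_doc']}")
        # Reference a credential store; never write passwords into the doc.
        lines.append(f"  credentials: {app['credentials']}")
        lines.append("")
    return "\n".join(lines)

inventory = build_inventory(apps)
print(inventory)
```

Schedule this against your real records list (or a database table of the same shape) and the "regularly updated document" maintains itself.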

Integration of major new system or technology:

Your organization may want to bring on a new piece of software that touches all departments, and the GIS needs to be integrated. On the other hand, perhaps there is a desire to move all IT related items to the cloud. This will obviously change how the GIS is going to be structured and run. These are obviously hypotheticals, but there are a couple areas to speak to:

Software integrations:

If you have your GIS running smoothly, are creating and serving data, then the last thing you want to do is upset the apple cart. Actually, that may have been the stupidest thing I ever wrote. You always want to be improving your system. Don’t pass up the chance to connect some more data, or pull some threads together. That is how discoveries are made, and new uses are found. Anyway, moving on. If your organization is going to integrate a new financial, planning, or asset management program, there is a strong likelihood that there will be some integration of spatial data. You need to make sure you can integrate it without causing major disruption. The major risk I see in this sort of a situation is needing to integrate data coming from a new software package, and fitting that into what may be an already developed schema. This is going to be more true if you have added a lot of automation to your system that feeds change tracking, or pushes data out to other applications. Once you find out about a future integration, start researching. Here are some things to think about:

Data types:

What does the application require? What formats can it consume? What formats does it output? These will all guide decisions for either modifying your existing system to be able to talk to this new software, or setting up some process to work between the existing and new software. Mitigation:

  • Standards are important: Look to see what standards your system is using, and see if the new software supports any of these. It is going to be a hell of a lot easier to bring them together if you are working from a standard than having to work with an API and figure out how to make it talk to your system.
  • GDAL/OGR: You will need to move data in and out of the system. If there aren’t easy methods built in to the application to schedule data exports, you can use something like GDAL/OGR, which is an open-source data translation tool. It supports approximately 200 formats in both raster and vector, including flat tables. It is also scriptable, so you can set up a script to pull data from the new application back to the GIS. The assumption is that a GIS-linked application already has the functionality to pull data from the GIS without any additional scripting.
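Since I can’t know your exact layers or connection strings, here is a hedged sketch of scripting that transfer with GDAL/OGR’s `ogr2ogr` tool. The function just assembles the command-line arguments (destination datasource comes before source in ogr2ogr’s argument order); the connection strings and layer name are placeholders to swap for your own.

```python
# Assemble an ogr2ogr command to pull a table out of an application database
# and load it into the GIS database. Connection strings and the layer name
# are placeholders for illustration.
def ogr2ogr_transfer(src_conn, dst_conn, layer, dst_format="PostgreSQL"):
    return [
        "ogr2ogr",
        "-f", dst_format,   # output driver
        "-overwrite",       # replace the target table on each scheduled run
        dst_conn,           # destination datasource (comes first in ogr2ogr)
        src_conn,           # source datasource
        layer,              # the layer/table to copy
    ]

cmd = ogr2ogr_transfer(
    src_conn="MSSQL:server=apps;database=permits;trusted_connection=yes",
    dst_conn="PG:host=gisdb dbname=gis user=etl",
    layer="issued_permits",
)
print(" ".join(cmd))
# A scheduler (cron, Windows Task Scheduler) would run this on an interval
# via subprocess.run(cmd).
```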
Integration to existing schema:

Part of the challenge of integrating new software is how you utilize it as part of the GIS. Does it have an existing GIS integration so you can simply reference layers from the software database for analysis? Or is it mostly a data program, and you need to bring data somehow into your existing schema? This could be the case if it has its own database, but that database may be hosted by the provider, so you do not have access.

Options: If the software has a GIS integration, you may be able to supply the data using existing layers. You will probably want to create a new view in the database to set the appropriate field names, data types, and other data options as needed by the new software. On the other hand, if it utilizes data you do not already have in the system, you will need to figure out the structure of these new layers: what attributes, values, etc. Are they spatial tables or simply attribute data? You will need to determine how to create this data. Is it something that can be collected from another source, or will it need to be created as the result of some analysis? With all of these options, you will want to decide whether to integrate the new views or layers into your existing schemas in the database, or to create a new one to hold this data.

The other side is, again, the output data. If you are able to simply reference the software database, perhaps you don’t need to bring data into your existing GIS. This only works as long as you have enough access to do the analysis and visualization that may be required. If that isn’t the case, then you may want to consider how to bring output data from the new software into your GIS in a set of tables that will be updated on a regular basis. You will need to determine how to integrate these into your existing data dictionary, whether in a new schema or distributed through your existing data groupings.
Of course, if the volume of data is large enough, you may need to look at revising your data schema from the top down, and reorganizing as necessary. That could get complex. Needless to say, start planning as soon as you can.
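The "create a new view to set the field names and types the new software expects" step looks roughly like this. SQLite stands in for your production database so the sketch is runnable, and the table and field names are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# An existing GIS table, with field names the new software doesn't expect.
conn.execute("CREATE TABLE parcels (apn TEXT, situs_addr TEXT, acreage REAL)")
conn.execute("INSERT INTO parcels VALUES ('001-010', '123 Main St', 0.25)")

# A view that renames and converts fields to match the new application's
# schema, without touching the underlying table or duplicating the data.
conn.execute("""
    CREATE VIEW app_parcels AS
    SELECT apn        AS parcel_id,
           situs_addr AS address,
           CAST(acreage * 43560 AS INTEGER) AS area_sqft
    FROM parcels
""")

row = conn.execute(
    "SELECT parcel_id, address, area_sqft FROM app_parcels"
).fetchone()
print(row)  # ('001-010', '123 Main St', 10890)
```

Because the view is just a query, edits to the source layer flow through to the new application automatically, and you can drop or adjust the mapping without any risk to the source data.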

Uses of output data:

Do these fit into existing applications? Do they need to be separate layers, or can they be viewed as joined data to an existing layer? You may want to create a new application that will allow for analysis of these new data combined with your existing data. Mitigation: Really the only way to figure out what sorts of uses the output data may have, is by speaking with the main users of the new software. They are going to know what they want/need to use the data for. Once you have an idea of their uses, you may be able to show additional ways to visualize, whether in a new application, or adding layers to an existing application.


Data updates:

How do you handle updating the data for the application and from the application?
Mitigation: If you can do a direct connection, you will eventually need to design processes that will push and pull this data on whatever interval makes sense for the individual layers. You don’t want to get in a position of needing to do this manually, whether it is you personally, or your staff.
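Deciding which layers are due for a push or pull on their own intervals can start as simply as tracking a last-run timestamp per layer. A sketch; the layer names and refresh frequencies here are hypothetical:

```python
from datetime import datetime, timedelta

# Per-layer refresh intervals; hypothetical layers and frequencies.
schedule = {
    "work_orders": timedelta(hours=1),   # changes constantly
    "parcels":     timedelta(days=1),    # changes daily at most
    "zoning":      timedelta(days=30),   # rarely changes
}

def layers_due(last_run, now):
    """Return the layers whose interval has elapsed since their last run."""
    # A layer that has never run (no timestamp) is always due.
    return [name for name, interval in schedule.items()
            if now - last_run.get(name, datetime.min) >= interval]

now = datetime(2020, 1, 2, 12, 0)
last_run = {
    "work_orders": datetime(2020, 1, 2, 10, 0),  # 2 hours ago: due
    "parcels":     datetime(2020, 1, 2, 6, 0),   # 6 hours ago: not due yet
    # zoning has never run, so it is due
}
print(layers_due(now=now, last_run=last_run))  # ['work_orders', 'zoning']
```

A scheduled job that calls this, runs the transfers for whatever it returns, and records the new timestamps is all it takes to keep yourself, or your staff, out of the manual-update business.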

Siloed Data:

A new software package, especially a SaaS product, introduces the possibility of data getting siloed, or stuck in that software.
Mitigation: Part of the planning process should be ensuring that the data goes full circle, from the GIS, to the SaaS application, and then back to the GIS. While usage of output data was mentioned above, this may need to go a step further depending on the type of application. If there are any city functions being maintained in the application, then there will be data created that definitely should be integrated back with the rest of the city GIS layers for display and analysis purposes. You may either simply reference layers from the application, if it is set up in such a way to allow this, or you will need to pull data out of the app and into tables in the GIS. This should then be set up to refresh on a regular basis, depending on the editing frequency of the source data in the application.

Relocation of data or applications:

It has been alluded to above, but there is a huge shift away from standard hardware servers that contain a single application or dataset. There is also some movement away from having data or applications hosted locally at an organization. For financial and operational reasons, it may make sense to have some software hosted by a vendor, so there is no install at the physical premises of the organization. With the move away from standard hardware servers, the question comes up of why the remaining servers need to be hosted on premises at all, especially when Amazon and Microsoft are building out cloud computing platforms with far more flexibility, for potentially lower cost, than having the same infrastructure located on premises. As with anything, this brings up some potential issues:


Secure connections:

This is, I think, the most serious issue. When you move either your data or your applications to a remote/cloud location, you introduce the need for a secure connection to those data or applications. Staff and/or the public need to access these. As well, the applications and data may need to communicate with each other, and possibly back to the office location. There will be a need for secure connections in all of these situations. Mitigation: This is another situation where you and IT need to be super good friends. Make sure that you are doing everything that you can with spatial applications and data to ensure internal security. Then, work with IT to make sure that these applications are secure if and when they need to pass into or out of the organizational network. This will be particularly important if you have any sort of database connections to or from applications in the cloud or on the network. If possible, you may want to keep these isolated in the DMZ, with a single pass-through between the DMZ and the internal network.


Network impacts:

When you are working with applications or data which are not hosted locally, there are going to be some network impacts. This is less and less of an issue with the substantial increases in internet speeds; however, it could still be an issue.
Mitigation: What you want to do is look at the bandwidth that maintaining data or application connections might use per person. You can extrapolate that out to the people using each application, take other uses into account, and see what sort of connection speed you are going to need. As risks go, this is probably on the lower end of the scale, but the point here is to cover all your bases.
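The back-of-the-envelope math for that bandwidth check might look like this. The per-user rates, user counts, and headroom factor are made-up placeholders; measure your own before provisioning anything.

```python
# Rough bandwidth estimate for remotely hosted data/applications.
# All figures below are placeholder assumptions, not measurements.
apps = {
    # app: (concurrent users, average Mbps per user while active)
    "web_map_viewer": (25, 0.5),
    "editing_client": (5, 2.0),
    "dashboard":      (10, 0.25),
}
other_traffic_mbps = 20   # email, web, everything else sharing the pipe
headroom = 1.5            # 50% margin so peak use doesn't saturate the link

gis_mbps = sum(users * rate for users, rate in apps.values())
needed = (gis_mbps + other_traffic_mbps) * headroom
print(f"GIS load: {gis_mbps} Mbps; provision at least {needed} Mbps")
```

The useful part isn’t the exact numbers; it’s having the per-application breakdown written down, so when a new cloud-hosted system is proposed you can add one line and see the impact.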

Siloed data:

I know that I’ve mentioned this as a sub-item for risks above, but I feel it is also an operational issue. This is a problem that rears its ugly head whenever a new application is introduced. Software always seems to be sold as the end-all/be-all to your organization’s problems, and with that comes a blindness to the fact that most organizations use different software packages for different business functions. I don’t know why, after all this time of software development, this still occurs, but it is a definite problem. This generally goes one of two ways: the vendor wants you to use multiple pieces of software they offer as a package to handle multiple business tracks, or they simply don’t consider any other software, or why an organization may want to share data across software platforms. What happens, then, is that these applications are implemented, and data starts being generated inside of them. It then gets trapped. Much of this data tracks a variety of things important to a particular organization. It stands to reason that there may be value in having this data available as part of the larger GIS, for display and analysis in a spatial context. If all this data is collected and then just sits in this application, its utility in a current management context is lost. It may be reviewed for historical trends, etc., but the ability to see current trends could be lost.


The cure for siloed data is to complete the cycle. Generally, these software programs are going to consume some form of data from the GIS, and use that along with other inputs to generate application-specific data. You have to get that specific data out of the application, and linked back to the GIS. Fortunately, I think this is getting easier as time goes on, even accounting for the two categories of software vendors. Most organization-wide software applications end up storing data in a database. Since GIS data is also best stored in a database, there is already something in common. What you need to do, then, is figure out how to either connect them together, or do an ETL (Extract, Transform, Load) procedure to pull from the application back to the GIS. The nature of the application, and its utility to other facets of the organization, will guide the complexity and frequency of this transfer. The ultimate goal, though, is to have data from the application available for use in the GIS. If there isn’t the ability to do a database connection of some form, there are still ways to pull data. Often these applications have reporting modules. You can set up reports to pull data that you need, then instead of utilizing the formatted report, pull the query results that are used to generate the report. This can often be returned as a .csv file, which is then easily referenced into your primary database for querying purposes.
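That report-to-CSV escape hatch looks roughly like this in practice. SQLite stands in for the GIS database so the sketch is runnable, and the report columns are invented for the example.

```python
import csv
import io
import sqlite3

# A CSV export pulled from the siloed application's reporting module
# (column names are invented placeholders).
report_csv = """work_order_id,asset_id,status,completed
1001,WM-204,closed,2020-03-01
1002,WM-117,open,
"""

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE app_work_orders
                (work_order_id TEXT, asset_id TEXT,
                 status TEXT, completed TEXT)""")

# Load the report rows into a GIS-side table so they can be joined
# against the asset layers for display and analysis.
rows = list(csv.DictReader(io.StringIO(report_csv)))
conn.executemany(
    "INSERT INTO app_work_orders "
    "VALUES (:work_order_id, :asset_id, :status, :completed)",
    rows,
)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM app_work_orders").fetchone()[0]
print(count)  # 2 rows pulled out of the application and back into the GIS
```

Run on a schedule, with the table truncated and reloaded each time, this closes the loop: the application keeps doing its job, and the GIS still sees current data rather than just history.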

Data Breach:

This has obviously come up in every section, and it only makes sense that it appears in an operational standpoint as well. You can have the best program, hardened and secure, and still have it grind to a halt because there is a data breach somewhere else in the organization. This could be a simple break in security where something got in through a hole and started to poke around for things, or it could be more serious. More serious to me would be a phishing attack that works and turns into a ransom/encryption attack where the entire system gets encrypted. This is something that won’t be solved in your department, but you will definitely have to react and recover from whatever happens.


The majority of the items here have already been listed under the data breach sections above. They still bear repeating as the likelihood is strong that you will experience a data breach in some form, so best prepare. When you are recovering from a breach outside of your department, you need to have your ducks in a row even more because you may need to be able to work on someone else’s timeline. This means you need to be prepared, and flexible. So, let’s talk through the list:

  • Document Everything: You need to have your system, including your data, applications, and processes, fully documented. This is going to allow you to rebuild them if you have to start from scratch. Of course, the way to not have to start from scratch is…
  • Backup Everything: You need to have regular backups. This should be not just your data, but your applications and processes as well. If you have processes set to run on a regular basis, and you should so you aren’t doing everything yourself, you need to have them backed up, so you don’t have to completely rebuild them through trial and error.
  • Cross-train Staff: Your staff all need to be aware of how your systems are put together. Even if they just do basic mapping and basic analysis, they need to know where things are, and how to get them working if necessary. Designate one or two people as trusted with the passwords to be able to get started rebuilding from backups. Don’t let it be just one, or they will inevitably be out when something goes wrong.
  • Work with IT: If you aren’t part of the IT department, make sure they know who you are, and that you work with them closely. Acknowledge the needs they have for system and network security, and ensure that you work with those in the context of spatial data and applications in your department. They are likely the ones who are running system backups, so if you have specific needs in this regard, make sure they know what they are, and provide them with the information they need to set things up. Also, and this has served me well, volunteer to be a test case. If they are testing something, let them know that you are willing to be a test case to provide data back and forth for what worked, what didn’t, and help them get it sorted out. This will build a level of trust both in your abilities, and your willingness to help them out. I’ve found that the more I do this, the more leeway I am able to get from IT with regard to my needs and requests. They know I will do my best to minimize or mitigate any impacts my testing may have on any other system. I also promise to never take down any other server but the GIS server, and then follow through on that. Test things, but do it in an isolated way. Just build trust and work with them when you don’t need them, so they will be willing to help out when you do need them.
  • Test your system for failure: When you are getting a GIS set up, you will test things until they work, and then stop. I think it is worth testing it to try and break it once it works (and I need to do more of this myself). Have people try different things that may be outside the standard procedures to see what happens. Fix it where it breaks. See where things are slow, and look at them. They may indicate a larger issue with your system that was not apparent previously. Lastly, and this is something I picked up from an article I read recently, it may be worth specifically trying to attack your system from the inside, and see what happens. Most security is set to protect from an exterior attack. It is the ones that reach the inside that generally cause the most damage. These are the phishing/ransomware attacks. The article I read talks about a suite of tests that Netflix runs (their Chaos Monkey and the broader Simian Army tools) which randomly cause failures in their systems, that then have to be mitigated and recovered from. They made these openly available. It may be worth taking a look or building some of your own. The more you test what happens from the inside, the more you may be able to stop an internal attack from propagating through your entire system.
  • Don’t give up: This may be trying to end on a note of sunshine and rainbows, but what the hell. If there is a data breach, and you have to put it all back together, it is going to look pretty damn intimidating. Don’t give up. Just run through everything you’ve already done to prepare for this occurrence, and you’ll get there. Don’t rush. This probably should be its own entry, but it is part of not giving up. Take the time it takes to get it done, but don’t try to get it all done at once. You will lose way more time trying to fix things if you take shortcuts than by taking just a little longer to get it put together right the first time. This also points back to your documentation. If you have documentation of everything, and have tested your processes for recovery, you will be able to be confident and just keep going. Trust yourself and your staff. You can do this.
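The "Backup Everything" point above can start as small as a timestamped archive of your scripts and configuration, using only the standard library. A sketch; the directory names are placeholders, and the demo uses temporary folders standing in for your real script and backup locations:

```python
import pathlib
import shutil
import tempfile
from datetime import datetime

def backup_directory(src_dir, backup_dir):
    """Zip src_dir into backup_dir with a timestamped name; return the path."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive_base = pathlib.Path(backup_dir) / f"gis-processes-{stamp}"
    # make_archive appends the .zip suffix and returns the full archive path.
    return shutil.make_archive(str(archive_base), "zip", root_dir=src_dir)

# Demo with temporary directories standing in for real script/config folders.
with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as dst:
    pathlib.Path(src, "nightly_etl.py").write_text("# scheduled process source")
    archive = backup_directory(src, dst)
    print(pathlib.Path(archive).name)  # e.g. gis-processes-20200301-020000.zip
```

Point it at the folders holding your scheduled scripts and configs, run it nightly, and ship the archives somewhere off the GIS server; that is the difference between restoring a process and rebuilding it by trial and error.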

Tying the Bow

Well, that went on a lot longer than I thought it would. There is a good chance that my explanations are longer than they need to be, or are not as tightly constrained to topic as they could be, and I’m okay with that. I feel like I’ve never run astray by giving more information than might seem necessary to the average user. I think part of my audience are people who don’t necessarily have a lot of experience in all facets of GIS, and are just trying to get started. Hopefully this will be useful to them. As for everyone else, perhaps there is a nugget or two in here that brings something to mind for your organization or department. Anyway, with all that being said, let us review the key points, and give some parting thoughts.

I started by talking about three types of risk: Program, Technology, and Operational, and what they each entailed. Program risks are related directly to the GIS program. Technology risks are related to the hardware, software and data which comprise the GIS. Operational risks include those which are not specific to the GIS program, but at a higher organizational level, and how they may then impact the GIS. They cover a range including staff changes, budget cuts, loss of interest in the program by upper management, and data breaches.

Having briefly discussed these three areas where risks could occur at a high level, it was time to dive in to each one, and get more detailed, along with mitigation strategies. Let’s run through each one briefly, with a potential solution:

  • Program – Loss of Key Staff; Budget Reduction; Loss of Confidence by Senior Management; Data Breach
    • Mitigation for the majority of these issues is the cross training of staff not only within your department, but in other departments. You need to ensure you have documented processes so that it is possible for new staff to come in and get started quickly, and so that existing staff can pick up on new tasks if there is department consolidation. Through this you need to communicate with other departments to find out and address their needs, and most importantly, do this with senior management. In the case of a data breach, all of the above will be critical to allow the department to rebuild data and services, while working with other departments to ensure that data and applications are brought back, all while working with management to keep them informed of progress. Having good communication with senior management will likely allow for a level of trust if a breach happens, that your department will be able to recover. Of course, all of the actions you take should be reducing the likelihood that a breach will occur in the first place.
  • Technology – Data Loss at a network, server, database, or user software level; Technology obsolescence at a hardware, database, or user software level; Budget issues which may include hardware, database, and software licensing; Data Breach and the potential effects on the network, data and databases, and workflows
    • Mitigation – The first step to avoid or recover from all issues technological is to ensure that GIS and IT have a symbiotic relationship. Make sure IT staff know what your needs are in that regard, and take their advice and guidance when it comes to making sure the GIS plays well with the rest of the IT infrastructure. When you talk about obsolescence, budgets, and data breaches, the single most powerful thing you can do is stick with standards. The next thing you need to do is keep things lean. If your system is based on accepted standards, you should easily be able to keep up with developments in the technology, because they are designed to work together, and keep working together. Budgets are easier to control because you don’t have a one-off solution that can’t be replicated. Instead, you have a system that is likely platform independent to a large degree. This means that if you get slapped with budget cuts, or a big licensing fee, you can replicate what you have on something cheaper, with little to no loss of functionality. This also holds true with keeping things lean. Keeping it simple keeps cost down, makes upgrades easier, and in many cases, simply makes it possible to fix things when they break. It is important to note here, that a simple system, does not mean that the data contained within, or the analysis you can do with said data, is going to be simple. I think quite the opposite is true. Instead of having to deal with layers of complexity in the support system, rely on standards for each piece of your system with the end goal of presenting your data and analysis in the best way possible. Standards enable you to then focus your efforts on deepening the links and threads in your data to further analysis and resultant knowledge.
  • Operational – Changes to organizational structure, which may significantly move the department, or staff within the department; integration of a major new system or technology, which may cause impacts on software integration or on the pure technology side; wholesale movement of a software or data platform, whether to new software or to the cloud, etc.
    • Mitigation – The first step to avoid, or move through, any of these issues is to have good documentation of the data and applications in use. This documentation needs to be available to all staff, and multiple staff need to be familiar with the basic procedures for operating each application. That minimizes downtime if staff leave or are moved, and it helps coordination when new systems are being integrated. Solid knowledge of the existing applications and data ensures that a thorough integration strategy can be developed prior to system implementation. Other problems in this area can be solved by building on that documentation and cross-training. If you are integrating new software, or moving applications, you will more easily be able to show how to bring data from the new system into the GIS. Likewise, if a major move has to be made, a clear understanding of the applications and any existing integrations allows you to develop a plan that ensures continuity of access for all users, whether internal or external, public or private, and that the necessary networking infrastructure and security are in place. This also ties back to the other sections, where a clear, solid working relationship with IT is critical: to integrate any new system, or move a major portion of the GIS infrastructure, plan with IT so that critical items on both sides are accounted for and executed in the proper order.

Okay, that was a lot, but in a good way. With any risk, there is reward. You risked your time by reading this, so hopefully your reward is some useful tips to strengthen your GIS, in whatever form it currently takes. As I was writing this, a few things kept coming up, and they are worth stating again.

  1. Using standards is likely going to save you at some point. From the formatting and storage of spatial data, to presenting it on the web, to formatting your sites so they present your data and results effectively across all platforms, standards are what make that happen. Don’t reinvent the wheel, and don’t make it any harder than it has to be.
  2. Keep it simple and straightforward. Don't forget that ultimately the goal is to store data, serve data and maps to your constituents, and have a platform for analysis as necessary. The basics of this don't have to be complicated: database, spatial data handling, mapping software, web server, web site. That's really all there is to it. If you need parts that are more complex, like integrations with other software or data sources, then add just those parts; don't turn the entire system into a tangled web simply to serve data out.
  3. At this point, I'm going to come right out and say that if you have a GIS of any sort, you should be using an open-source spatial database like PostgreSQL with PostGIS in some form or fashion, even as a static data dump that you write changes to regularly as a backup. Why, you ask? Because the system is free, has a large install and support base, and adheres rigorously to standards, from the database down to the spatial data. It is robust enough to set and forget, so even if your GIS is completely Esri based, it can't hurt to have it set up to programmatically push data into. What do you have to lose? Nothing. What do you have to gain? A stable repository that you can always get back to, regardless of budget, licensing, staffing, or any other condition.

One of the drivers for this whole post was a conversation in which the other person, also a GIS manager, said it was too risky to add an open-source component to their system because they were already so invested in the Esri platform. That made me think about that risk and want to dive into it, and I couldn't do that without a full run-down of what you may run into. Hopefully what you take away is that while there are risks inherent in any information system, many of them can be addressed in a fairly straightforward manner, making the real risk far smaller. In the case of my colleague, I think the flexibility gained by adding an open-source component to your system far outweighs the risk of a bit of additional complexity. That, of course, is just my opinion. What do you think? What did I miss here? In the comments, please share the biggest risk you see related to spatial data, and how you are mitigating it.