Data Chat: Christopher Mishaga

In the Information Technology sector, change is constant. For Information Systems Security Officer Christopher Mishaga, so is the need to protect the infrastructure and the integrity of NASA data.
Joseph M. Smith
ESDIS Information Systems Security Officer Christopher Mishaga

NASA's Earth Science Data and Information System (ESDIS) Project manages the science systems of NASA's Earth Observing System Data and Information System (EOSDIS). This means that ESDIS not only develops, engineers, integrates, tests, and operates EOSDIS science systems, but also governs access to NASA's trove of Earth science data for countless users around the globe, be they scientists or members of the public. It also means that the ESDIS technical staff—the analysts, developers, engineers, programmers, information security specialists, and others—is tasked with keeping the EOSDIS technology infrastructure secure and running smoothly. In the more than 30 years that ESDIS has been developing and operating EOSDIS, ensuring the security of these systems has grown from a single part-time job into a task requiring a team of specialists using sophisticated systems. A chief concern among those tasks is the security impact analysis of every change or upgrade to the EOSDIS network.

One of the key members of the ESDIS technical staff is its Information Systems Security Officer, Christopher Mishaga, whose responsibility is to oversee the protection of EOSDIS data and prevent unauthorized use of its systems. In the following interview, he discusses the differences between on-premises and cloud systems, the work required to keep EOSDIS systems up-to-date and operating as expected, and the ongoing effort to balance data security with NASA's promise of open data.

You manage security for two kinds of Earth science systems — on-premises systems and cloud systems. What are the major differences or concerns between the two when it comes to cybersecurity?

They’re very different. With traditional data networks, you’re dealing with software, but you also have to deal with the hardware, and that includes facility management, cabling management, the hardware itself, and then the software running on it. In the cloud, we deal only with the software and with what’s called an Application Programming Interface (API) to make changes, so everything becomes software defined. That’s somewhat of a change when you have to move from an on-premises system to a cloud-based system.

The good thing about that in terms of security is, because everything is software defined, we can better manage configuration and monitor changes that are occurring on the system. Change management becomes a little bit easier because you’re doing everything in software: you know what the state of the system should be, it’s fairly easy to fix, and when things get out of whack you can ask, “Why is this happening?” From a cybersecurity perspective, changes should only occur on a regulated interval or schedule, and when things change without you knowing, it’s time to start looking into an incident.
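As a rough illustration of that kind of software-defined change monitoring, the sketch below compares the state a cloud API reports against a declared baseline. It is a hypothetical example using the AWS SDK for Python (boto3), not ESDIS’s actual tooling, and the security group IDs and approved ports are invented.

```python
# Minimal drift-detection sketch (hypothetical; not the actual ESDIS tooling).
# Compares security group ingress rules reported by the cloud API against an
# approved baseline, so unexpected changes can be investigated as incidents.
import boto3

# Baseline of approved ingress ports per security group (illustrative values).
APPROVED_INGRESS = {
    "sg-0123456789abcdef0": {443},        # HTTPS only
    "sg-0fedcba9876543210": {22, 443},    # SSH and HTTPS
}

def find_drift():
    ec2 = boto3.client("ec2")
    drift = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        approved = APPROVED_INGRESS.get(sg["GroupId"], set())
        for rule in sg.get("IpPermissions", []):
            port = rule.get("FromPort")  # absent for "all traffic" rules
            if port is not None and port not in approved:
                drift.append((sg["GroupId"], port))
    return drift

if __name__ == "__main__":
    for group_id, port in find_drift():
        print(f"Unapproved ingress on {group_id}: port {port}")
```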

When working in a cloud-based system, there are security concerns that differ from those of a traditional data center and can impact your bottom line. In a traditional data center, expenses vary depending on how much you build out, but the buildings are certainly a capital expenditure. This is important because there is a security component here, too. If someone is attacking your on-site system, you deal with it, fix it, and it goes away. In the cloud, the vendor charges you for everything you use, whether you wanted to use it or not. So, for example, if someone is misusing your system to mine bitcoin or causing a denial-of-service (DoS) attack, your systems are still spinning and you’re being charged for it, so you also have to pay attention to the potential financial impacts.
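One common hedge against that kind of financial exposure is an automated spending alert. The sketch below is illustrative only, not the ESDIS configuration: assuming billing alerts are enabled on the account and an SNS topic already exists (the ARN shown is hypothetical), it uses boto3 to raise a CloudWatch alarm when estimated charges cross a threshold.

```python
# Illustrative billing alarm (not the actual ESDIS setup): notify an SNS topic
# when estimated AWS charges exceed a threshold, which can be an early sign of
# resource misuse such as cryptomining or a cost-amplifying DoS attack.
import boto3

SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:billing-alerts"  # hypothetical
THRESHOLD_USD = 5000.0

# Billing metrics are published in the us-east-1 region.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
cloudwatch.put_metric_alarm(
    AlarmName="estimated-charges-exceeded",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,              # evaluate every six hours
    EvaluationPeriods=1,
    Threshold=THRESHOLD_USD,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[SNS_TOPIC_ARN],
)
```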

You mentioned that there are certain activities that take place on a schedule. Can you discuss an EOSDIS cybersecurity activity that takes place on a regular, perhaps annual, basis and why it's important?

There are two that immediately come to mind. There’s our annual Assessment and Authorization, which is a federally mandated process. Every year a third party comes in, looks at our documentation, looks at all the continuous monitoring data we’re collecting, and performs point-in-time inspections on our machines to make sure that everything is as we say it is in our security plans and that it meets NASA’s standards. It’s important because it forces us to prepare for the inspection, and because the third party verifies that NASA executives understand the risk to the system and makes them aware of any unmitigated risks we need to address. Ultimately, the process culminates in the signing of a document called an Authorization to Operate (ATO). So, once we’ve addressed all the risks, the authorizing official or NASA executive will provide the system with the ATO that allows us to continue our operations. We have always received our ATOs. In fact, we just went through our assessment last month and were recommended for another ATO.

The other annual exercise I like to bring up is contingency planning. As engineers, we should plan for the unexpected. We should ask ourselves, “What can go wrong in the system?” Contingency planning includes testing our contingency systems. A lot of the time, our contingency systems may not be active, and there could be other changes going on in the network throughout the year that might affect the way those systems come online. The requirement for testing our backup systems is once a year, but we should be doing it more often than that. There is ransomware out there, and the government is not going to pay a cyberterrorist holding our data hostage. Therefore, we have to be absolutely certain that we’re backing up all our data and doing it in such a way that, if we needed to restore it, we could.
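As a simple illustration of the “could we actually restore it?” question, the sketch below, which is hypothetical and not an ESDIS procedure, checks that each restored file’s checksum matches the checksum recorded when the backup was made.

```python
# Hypothetical restore-verification sketch: a backup is only useful if a restore
# reproduces exactly the data that was backed up, so compare cryptographic hashes.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large files need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(manifest: dict[str, str], restore_dir: Path) -> list[str]:
    """Return the relative paths whose restored contents don't match the recorded hash."""
    failures = []
    for relative_path, expected_hash in manifest.items():
        restored = restore_dir / relative_path
        if not restored.exists() or sha256_of(restored) != expected_hash:
            failures.append(relative_path)
    return failures
```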
 

One often hears the term "patch" (i.e., software and/or operating system updates that address security vulnerabilities within an application or product) used in relation to cybersecurity. What is the importance of patch management?

Patches get deployed to the system regularly, typically monthly or bi-weekly, and a lot of that is automated. Members of our team pull information on patch deployment from a database and put it into an analytics dashboard to calculate vulnerability or risk scores based on the patches required and the state of our systems. There is one person on our team who is really good at that, and he has taken all the information from these systems and implemented a risk assessment approach that incorporates the Agency-Wide Adaptive Risk Enumeration (AWARE) method the U.S. Department of Homeland Security (DHS) came out with a few years ago.

If we’re missing a patch, there are, or at least there should be, many other things in place on the network to prevent the vulnerability associated with that missing patch from being exploited. We have firewalls in place, two-factor authentication, and log systems, so, even if a system is missing a patch, the risk is often very low, because to exploit it someone would need to get around all of the other security controls we have. One of the things we can do in the dashboard I mentioned is measure system risk, and there’s more to risk than saying a patch is or isn’t there. The dashboard allows us to better understand the risk to the system because, certainly on the mission system side, we can’t always patch right away. There may be a maneuver going on, or there may be a problem with the patch, and so on. The managers and the executives always want to know the risk posed by a vulnerability. Using this system, we can tell them, “Yes, you’re missing a patch, but we have all these other mitigation processes in place, so your risk score is more like a three instead of a 10.”
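To make the “three instead of a 10” idea concrete, here is a toy scoring function. It is not the AWARE algorithm or the team’s dashboard logic, and the discount factors are invented; it simply shows how compensating controls such as firewalls, two-factor authentication, and log monitoring can reduce a base vulnerability score.

```python
# Toy risk-scoring sketch (illustrative only; not AWARE or the ESDIS dashboard).
# Starts from a base severity (a CVSS-like 0-10 value) and discounts it for each
# compensating control that makes the missing patch harder to exploit.

# Hypothetical discount factors: each control removes a fraction of remaining risk.
CONTROL_DISCOUNTS = {
    "firewall": 0.40,
    "two_factor_auth": 0.30,
    "log_monitoring": 0.15,
}

def residual_risk(base_severity: float, controls_in_place: list[str]) -> float:
    """Apply each control's discount multiplicatively to the base severity."""
    score = base_severity
    for control in controls_in_place:
        score *= 1.0 - CONTROL_DISCOUNTS.get(control, 0.0)
    return round(score, 1)

# Example: a severity-10 vulnerability behind a firewall, 2FA, and log monitoring.
print(residual_risk(10.0, ["firewall", "two_factor_auth", "log_monitoring"]))  # 3.6
```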

If there are other security measures in place to guard against any vulnerabilities, what is the importance of patch management?

Patches are still the ultimate mitigation. Yes, you’re doing all of these other things, but it’s not truly fixed until it’s patched. Recently, there’s been a lot of attention on the patch management of NASA systems because, historically speaking, we would stage these patches a lot farther out than what DHS and the NASA Chief Information Officer felt was appropriate. They have asked us to compress that timeframe. The only way to really do that is to perform automated updates daily, but you can imagine that, on some systems, daily automated updates may actually cause more problems, so we can’t do that on everything. Instead, we have to schedule it and phase patches in. We have developed patch-management plans where things are done on a 30-day schedule, a two-week schedule, and, although we try not to have it this way, some systems are on a longer-term schedule, such as every three or four months. Of course, not every vulnerability has a patch, so that’s where those other mitigations really come into play.

With all of the news about hacking and ransomware attacks on both public and private information systems, what is the best way for EOSDIS to balance user authentication in the interest of data security with our promise of free and open data?

That’s a great question, and there are a number of ways to balance these concerns. Yes, we have an open data policy, and that’s a good thing. Our cybersecurity work at ESDIS has less to do with securing data than it would at a bank or a healthcare facility; they’re concerned with protecting the data they have. We’re giving it out, so our job has more to do with protecting the infrastructure and the integrity of our data. There are no secrets here — our information is public. In one way, that makes our job easier. In another, because we are so open, we have a large attack surface. We have a lot of interfaces, APIs, and public web servers. The best way to protect these things is to move toward what’s known as a zero-trust architecture. What that means is that we give a specific level of access to our data and systems based on a user’s credentials. If we have a high assurance of a person’s identity, such as a NASA smartcard, they’re going to have more access to NASA data than someone who’s just surfing the web and lands on one of NASA’s public-facing web pages. That level of access would also be different from that of someone who has a log-in for our systems that allows them to access certain datasets or share information with us.

Moving toward that zero-trust architecture is going to be the best path ahead for us and, honestly, the federal government has already mandated it, releasing a directive about it last year. This really applies to ESDIS because, if we do this and do it correctly, we’ll be able to share and take in information even more easily than we do now.
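As a simplified illustration of that idea, the sketch below maps the assurance level of a presented credential to a set of permitted actions. The tier names and rules are hypothetical, not the ESDIS access model; the point is that access is decided per request from the credential, not from being “inside” a trusted network.

```python
# Simplified zero-trust illustration (hypothetical tiers, not the ESDIS model):
# each request is authorized from the assurance level of the presented credential.
from enum import Enum

class Assurance(Enum):
    ANONYMOUS = 0         # public visitor on a web page
    REGISTERED_LOGIN = 1  # user account, e.g., for ordering or sharing data
    SMARTCARD = 2         # high-assurance credential such as a NASA smartcard

# Hypothetical mapping from assurance level to permitted actions.
ACCESS_TIERS = {
    Assurance.ANONYMOUS: {"browse_public_pages", "download_open_data"},
    Assurance.REGISTERED_LOGIN: {"browse_public_pages", "download_open_data",
                                 "submit_orders", "share_data"},
    Assurance.SMARTCARD: {"browse_public_pages", "download_open_data",
                          "submit_orders", "share_data", "administer_datasets"},
}

def is_allowed(assurance: Assurance, action: str) -> bool:
    """Grant nothing by default; allow only what the credential's tier permits."""
    return action in ACCESS_TIERS.get(assurance, set())
```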

What is the one thing you'd like EOSDIS users to know and perhaps appreciate when it comes to cybersecurity?

I have two things, because we generally have two types of users. The first and most important is to protect your credentials and to use two-factor authentication whenever possible. It’s so important nowadays to have a good password, to not share it, and, if possible, to forgo using a password in favor of two-factor authentication, whether that’s an app on your phone, a token, or a smartcard. That’s really the best way and the easiest thing people can do.

Second, we have a lot of developers, and users who are developers, too. Developers need to be extra careful to never put their credentials, passwords, or API keys in their software code. It’s true that developers need credentials for their code to work, but the mistake they often make is hard-coding that information into their scripts and then saving it to a code repository like GitHub where everyone can see it. A better way is to supply that information to your scripts as a run-time variable or to use something like AWS Secrets Manager, which stores developer credentials, especially API keys, in a safe manner.
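As an illustration of that advice, the sketch below reads an API key from an environment variable set at run time and, if it isn’t set, fetches it from AWS Secrets Manager via boto3. The secret name and environment variable are hypothetical; the point is that nothing sensitive is hard-coded or committed to the repository.

```python
# Sketch of keeping credentials out of source code (hypothetical secret name).
import os
import boto3

def get_api_key(secret_id: str = "example/earthdata-api-key") -> str:
    # Prefer a run-time environment variable, e.g., set by the deployment system.
    key = os.environ.get("API_KEY")
    if key:
        return key
    # Otherwise fetch the value from AWS Secrets Manager at run time.
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return response["SecretString"]
```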

Change is a constant in the cybersecurity and information technology worlds. What keeps you motivated to respond to the newest technological developments or ever-evolving threats?

Working in the cybersecurity field has always been an uphill battle and probably always will be. So, I think the challenge of the task at hand, and needing to use more than just technical skill to be successful, is what generally keeps me motivated. I also have a great support team who are highly motivated to find solutions, and that really helps. Everything from management support and peer support to the unending creativity of individual team members makes a positive impact. Training is also important for responding to how the field is evolving. Having (ISC)2 credentials has been a big help in forcing me to keep up to date. Every time I attend a security conference or seminar, I walk away with new insights and the motivation to share that knowledge so we can take the right actions to counteract new threats.
