Published January 12, 2026
In Episode 52 of The Baldy Center Podcast, Raj Sharman (School of Management) discusses how healthcare systems and public institutions prepare for crises. Drawing on his research in disaster response, hospital information systems, and telehealth, Sharman explains why resilience depends on preparedness, trust, communication, and equity, not just technology, and what these lessons mean for healthcare systems in an era of AI and growing institutional risk.
KEYWORDS: Disaster Preparedness, Crisis Response, Hospital Resilience, Healthcare Systems, Institutional Design, Risk Perception
HASHTAGS: #HealthIT #DisasterPreparedness #CrisisResponse #HospitalResilience #Telehealth
You can stream each episode on PodBean, Spotify, Apple Podcasts, and most other audio apps. You can also stream the episode using the audio player on this page.
Note: Audio Player may not function ideally on small-screen devices.
"What we are seeing is that technology alone doesn't guarantee better health outcomes. It's the way equity, culture, and communication are woven into the use that determines whether communities truly benefit. First, equity matters because access is uneven. If patients don't have reliable broadband, devices, or digital literacy, telehealth can actually widen disparities rather than close them. [...]
Second, culture shapes trust and adoption. Language barriers are one of the most powerful determinants of whether telehealth succeeds. If telehealth platforms ignore the cultural context, they risk alienating the very population they aim to serve. [...]
Telehealth is not just a technology innovation, it's really a social good. It depends on whether we design systems that respect equity, honor culture, and prioritize communication. All of it has to function."
— Raj Sharman, PhD
(The Baldy Center Podcast, Spring 2026)
The Baldy Center for Law and Social Policy at the University at Buffalo
Episode #52
Podcast recording date: 12/9/2025
Host-producer: Tarun Gangadhar Vadaparthi
Speaker: Raj Sharman
Contact information: BaldyCenter@buffalo.edu
Transcription begins.
Tarun:
Hello and welcome to The Baldy Center for Law and Social Policy Podcast produced by the University at Buffalo. I'm your podcast host and producer, Tarun Gangadhar. In today's episode, I'm joined by Professor Raj Sharman to discuss how healthcare systems prepare for and respond to crises and how information systems and institutional design shape resilience and access to care. Professor Sharman is a professor of management science and systems at the University at Buffalo. His research focuses on disaster preparedness, healthcare systems, and institutional resilience, examining how technology, governance, and policy shape public safety, equity, and access to care. Here is Professor Sharman. To begin, what first drew you into the research on disaster response, healthcare systems, and institutional preparedness?
Raj:
I would say it was my own experiences, from a personal perspective, of having lived through hurricanes. I lived through Hurricane Andrew and other hurricanes that were a bit smaller, but I also experienced earthquakes and typhoons. That experience made me see the vulnerabilities in organizations and in systems, and how they impacted people. The same is the case with healthcare. I have a very personal experience with it, with family. My thesis is also in healthcare. I had data from the MD Anderson Cancer Center, and I worked on the MRI and CT images, and that also brought me closer to realizing my research should make a difference in the lives of people. Part of that process was also looking at organizations and systems. These are supposed to make life easier, but they do so only if they are successful in terms of how they are ingrained or incorporated in systems, people, and policies. I thought that there were many gaps in this process, and my work could make a difference in this area. But deeply, I was interested in making a difference in people's lives. I also found that there are many opportunities to contribute from an intellectual perspective and provide conceptual clarifications in this space. Those are some of the deeper motivations that drew me into this. There's an urgency to work in these areas because it can help people and societies do better in the face of disasters.
Tarun:
Thank you, Professor. What stood out to me is how personal experiences often reveal vulnerabilities in systems that people usually overlook. Much of your work looks at extreme events such as hurricanes, mass casualty incidents, and active shooter scenarios. What have you learned about how institutions succeed or fail when a crisis strikes?
Raj:
One of the most important lessons I've learned is that institutions don't succeed or fail because of the crisis itself. They fail or succeed based on the system, the culture, the preparedness that they have built beforehand. When an institution succeeds, it's usually because they've invested in resilience and adaptability. The factors that impact resilience and adaptability need to be studied in many organizations in more detail. But at a very high level, communication channels are important, clear communication is important, and empowering decision makers at various levels in the organization is extremely important because inevitably communication systems break down. And only when people are empowered to take decisions will they do so confidently. Making sure that service is available to people, even when technology breaks down, is extremely important, and I contrast that because in many organizations, for efficiency purposes, we centralize things. And if you've built a centralized environment where all decisions are made, then lower-level people are not thinking as freely. I would say this is one of the major challenges. Often some of these issues can be mitigated if there are meetings and preparedness on how authority would be delegated. Many organizations have such structures, and they need to practice a little bit more so that they can recover well. And recovering is not about just bouncing back economically, but also making sure, especially in healthcare situations, that all of the disenfranchised communities are also brought on board. It's very important to help marginalized communities because they suffer a lot when these events occur.
Tarun:
You highlighted that institutions often succeed or fail based on what they build in advance, not just what happens during the crisis. You have written about resilience in hospital information systems. What makes a healthcare system resilient and where do you see vulnerabilities that policymakers often overlook?
Raj:
What makes healthcare systems resilient is not just the technology, but how people, processes, and information come together under stress. I mentioned earlier that empowerment and prior preparedness and training are very important. These still remain important. For a hospital system to be able to absorb shocks, adapt quickly, and continue, resilience has to be more than just a technology issue. It's really a mindset. It is about building systems that anticipate disruption, preparing for those disruptions, empowering people, as I said, and also building other factors that help them bounce back. Now, I'll give you an example. When we had the October [2006] storm, many of the people who were working in the hospital could not reach the hospital because the civil infrastructure was decimated. There was snow on the road, in which case the people in the hospital had to continue working, catering to the people who were already there or who were arriving at the hospital. In this case, some cross-training was needed so that people who were nurses in one department could also function as nurses in another department. Physicians could also work across other areas to the extent that was feasible. These are parts of the preparedness document, because if you create a committee that deals with crisis, that committee may not be able to assemble during a crisis; the civil infrastructure, in this case the roads and transport, may be closed, and then they can't make it to the hospital. So the remaining people who are left in the hospital should be able to work, and that kind of preparedness is needed. It's always preparedness that is needed. Technology is more easily overcome because in hospitals these days we also have a printout of the surgeries that are going to happen the next day. So if we lose the information system, we can still proceed. We have the patient priors also printed out. So there's a lot of planning that has been going on over the years. I think preparedness is very important, along with empowerment of employees at different levels of the organization so they are confident in taking the right decisions when it is needed. And finally, I'd like to say that security gaps cause vulnerabilities in hospital systems. Though we have seen ransomware impact many healthcare institutions, the investment in security and the investment in training and preparedness are not at the level where they are well matched with the disasters. Finally, in all healthcare systems, I want to say that recovery is also about equity and access for marginalized communities, because these communities suffer more. So whenever you do preparedness, you must take care of the fact that we are dealing with diverse populations, some who are more able and some who are not, and consider how to address those issues. For example, as we had in the October 2006 storm, how do people in marginalized communities who experience a severe impact of a heart condition make it to the hospital? What do we do and how do we treat them? There may be many ways to deal with this, including remote health, but those have to come as a planned activity rather than last-minute heroics. Yes, one can do things at the last minute, but a lot of it can be well planned. In the area of service management, there is this issue of service recovery, where we actually plan for service failures. This has to be done at the community level, at hospitals, and in educational institutions, so that it's not a last-minute thing. It's where these are all anticipated. And the more you can anticipate, the better you are able to recover.
Tarun:
I found it interesting how you framed resilience as involving people and processes, not just technology. Your research on telehealth efficiency during COVID found deep geographic and equity gaps. What does this tell us about the limits of policy expansion without structural capacity on the ground?
Raj:
What our research revealed is that policy expansion alone, like rapidly scaling telehealth during COVID, cannot guarantee equitable access or efficiency if the structural capacity on the ground is uneven. Telehealth policies were designed to broaden access, but the reality was that geographic and equity gaps persisted. For example, rural communities often lack broadband infrastructure. So telehealth policies that gave people money for access opened the door, but many people could not walk through the door because they did not have the infrastructure. Low-income households struggled with device access and digital literacy. Often the social determinants of health played a role. For example, some patients were not able to communicate in English with a physician if they spoke a different language. Education levels impacted the use of telehealth. The telehealth policy which provided access was not efficiently utilized across the country. Policy is necessary, but it's not sufficient. Expansion creates opportunity, but investments in infrastructure like broadband, training, technical support, and sometimes support at the patient end to translate what the patient is communicating to the physician are structural issues that need to be addressed. Yes, money is important, policy is important, but it has to be very targeted so that all the people, older people, marginalized communities, can all benefit from that investment. Policymakers often overlook these structural realities in the quest to generalize, to throw money at the problem and say that they have solved it by providing greater investment. But these investments have to be more targeted. This is what we felt. And it has to take into account the community support system and the infrastructure, and also how the receiving population is structured in terms of education level, language abilities, and so on. The social determinants of health have to be taken into account so that even marginalized communities can benefit, and older and senior citizens can benefit as well. I think that would be a clear lesson that we learned from the telehealth policies that were implemented post-COVID. Mind you, they did benefit a large segment of people, but the benefits could have been more even and the efficiency could have been a lot better.
Tarun:
What stood out is your point that expanding access through policy does not always translate into real access on the ground. You emphasize language barriers and information access as major determinants of telehealth services. How do equity, culture, and communication shape whether technology actually benefits communities?
Raj:
It's a wonderful question. Thank you. What we are seeing is that technology alone doesn't guarantee better health outcomes. It's the way equity, culture, and communication are woven into the use that determines whether communities truly benefit. First, equity matters because access is uneven. If patients don't have reliable broadband, devices, or digital literacy, telehealth can actually widen disparities rather than close them. A resilient system ensures that vulnerable populations, rural residents, low-income households, non-English speakers, are not left behind. If they are, then technology has widened the gap rather than narrowed it. Second, culture shapes trust and adoption. If you look at a lot of the e-commerce research, that research was focused on building trust in e-commerce systems when they first came in. Now we take it for granted, and a lot of us trust the system, but trust is a very important component in people adopting systems, even AI systems. Would you take the advice AI gives you with regard to health or any other matter? It depends on how much you trust that system. Communities approach healthcare differently depending on cultural norms, expectations, and prior experiences with infrastructure. If telehealth platforms ignore the cultural context, such as family involvement in care decisions or sensitivity to stigma, they risk alienating the very population they aim to serve. And there are stigmas of various types: in some places, mental health conditions such as schizophrenia or deep depression may carry stigma, and sometimes STDs carry stigma in society. These have to be taken into account. So communication is the bridge. Language barriers, and I cannot emphasize this adequately, are one of the most powerful determinants of whether telehealth succeeds. Even for people who speak English well, can you communicate what your problem is adequately and sufficiently so that the physician can understand exactly what you are communicating? This is sometimes difficult even in a face-to-face situation, where the physician asks two or three different questions. Oftentimes, if the physician is talking in more technical terms, some of what he says may not be fully understood, even by those who are very proficient. Often, we may not be self-advocates in terms of inquiring what the side effects are, what the risks are. I place all of these under the communication issue, and I say that it has to be bridged. Technology can therefore become a barrier rather than a facilitator. Multilingual support, culturally competent providers who understand the communication, and clear communication strategies are essential for telehealth to be effective. Telehealth is not just a technology innovation; it's really a social good. It depends on whether we design systems that respect equity, honor culture, and prioritize communication. All of it has to function. When those elements are built in, technology becomes a tool for inclusion rather than exclusion. And we need to be very careful about how we use technology if we want to bring all of society along so that no one is left behind. Ideally, that's our goal.
Tarun:
I appreciate how you describe telehealth as a social challenge where trust and communication matter as much as the technology. Your research on health information exchanges, platforms designed to improve care coordination, highlights important institutional challenges. What governance or structural barriers prevent these systems from delivering their intended value?
Raj:
I'm very happy that health information exchanges (HIEs) have come along; they were supported and funded by the government, and they are very good institutions. They have gone a long way in making sure that patient priors and health information are available to different physicians. So when a person is admitted to a hospital and is discharged, there is a document called the continuity of care document (CCD), which is made available through the health information exchange. Another physician wanting to review what happened during the previous episode has that information. And at least in most places, the penetration of health information exchanges is about 99% or higher. Most of the hospitals and insurance companies have bought into it. In primary care and specialty care, also, by and large, providers use and connect to the health information exchange. But there are many challenges, and that provides opportunities for further work in this area and for improvements. One of the improvements that has happened is the regional exchange. In New York, all of the HIEs are connected through a more centralized network. So if you are in Western New York and you go to New York City, they may still be able to access your record through a sequence of exchanges between the health information exchanges. That said, HIEs often involve multiple stakeholders, hospitals, clinics, insurance companies, public health agencies, each with different priorities. Without clear governance structures, decision-making becomes slow and trust between participants erodes. This is especially true because most of the hospitals provide that information in the form of CCD documents, but when a person goes to see a primary care physician or a specialist who provides medications and treatment, that information is not easily accessible to another physician. For example, if you're with primary care A or you have gone to neurologist A, and then you go to cardiologist B, cardiologist B may not be able to see all of the records that are resident in neurologist A's information system, the EMR or EHR system, because those records are local. They can be requested and then obtained, but it's not instantaneous, whereas things that happen at the hospital, your record at the hospital, are more easily obtainable. I think there may be a variety of reasons for it. We have not moved to a place where healthcare works the way we can go to grocery store A and buy potatoes and then go to grocery store B and buy potatoes the next time we shop. We need patient priors, we are very comfortable with certain physicians, our insurance directs us to certain physicians, so we go there. But then how fast can we transmit that information to another primary care physician or specialist? That pathway has to be bridged. Health information exchanges are one mechanism to do that. And there are different models of health information exchange architecture: some are federated and some are more centralized. These, again, pose different kinds of challenges in the transport of health information. There are some gaps that research and practice both have to address. We could go through a long list: have health information exchanges met the challenges that they were designed for? My view is that no, it's a work in progress. They have done a lot of good in the sense that in emergency departments, physicians are able to access prior MRI and CAT scan information, so they don't have to duplicate tests. Certainly it has reduced duplicate tests, but we have a few more steps to go, or many more steps to go, to have a totally seamless system where health information is available at the fingertips of any physician who is dealing with you at the point of care.
To address this, we have the health information exchange pathways, but Epic and Cerner and these EMR systems are being increasingly bought by healthcare systems, for example the Kaleida Health system. Now, these Epic systems have a centralized database, and they're very powerful and serve as an alternative to health information exchanges. So if you are going to a Catholic Health primary care physician and a specialist from Catholic Health, they can just go to their own database, which will be in the Cerner system or the Epic system, and access it. So the pathway that goes through the health information exchange may now be short-circuited by Epic. In fact, Epic's newer systems are very powerful and allow sharing of information among physicians of different specialties and the hospitals a health system owns using their Epic system, but anytime people go to the hospital, they still put the patient information, the CCD document, in the HIE so that others can access it. Alternate paths are being developed because the EMR systems are now stepping up, and they're also adopting new technologies like ambient AI systems and pervasive AI systems so that some of these gaps are bridged. So when the patient moves from one system to the other, the information is also available to the attending physician at the point of care. Finally, I want to say that some of the privacy and liability concerns are still to be addressed. We do have HIPAA compliance, which is achieved through patient consent, and patient consent for HIEs is nearly complete; the penetration is pretty good. I'd like to say a lot of goals have been achieved, but these are real opportunities to make intellectual contributions and contributions to practice. These are challenges I look forward to working on in the future.
Tarun:
It's clear that even when systems exist, coordination and governance often determine whether they truly work. You have argued that information assurance and risk perception affect both technology adoption and system effectiveness. How should public institutions design systems when the public may distrust or misunderstand them?
Raj:
Let me first get to my own research, which was looking at automated systems. When public institutions design systems, especially in areas like health, safety, and digital services, they have to recognize that technology adoption is not just a technical problem, it's a social one. Information assurance and risk perception shape whether people trust and use these systems, and distrust and misunderstanding can undermine effectiveness even if the technology is sound. There are some key principles I want to talk about: transparency and clarity. Systems must communicate clearly how data is used, how it is protected, and why it matters. Ambiguity breeds suspicion. It allows people to fill in the gaps and leads to mistrust. When people understand the safeguards in place, they are more likely to trust and adopt the system. And this also goes for AI. What are the safeguards you have built so that the AI system has not learned the wrong things and is not giving you wrong information? With AI systems, safeguards are very important because it's not just the data that you use to train the models; as you keep using it, the AI system is learning from fresh data, and it could go off kilter. So it is very important for organizations and for individuals to build guardrails, and to understand what those guardrails are, operating on a more continuous basis. Checking the guardrails, checking how the data is getting corrupted or improving over time, is an area of investment where people have to understand the data that is being collected, from which the models are being built or the AI responses are being tailored. This also has to take into account whether it is equitable, whether it is fair. We have seen many AI systems go off kilter because of that. Many automated systems we test at the beginning, before the software is released. With AI systems, not only do you have to test when you ingest the system into your organization, but there has to be a continuous testing process while you use the AI system so that the system is not going off kilter. IT departments, and maybe another department that has more business sense, come into play in the continuous testing of AI systems so that they remain very sensitive to customer requirements. It's not a technology problem alone. I emphasize that because technology people tend to capture this territory, but there have to be customer-focused and business-focused people who also monitor that AI system. Community engagement and cultural alignment are also very important. Institutions should involve communities early in the design process, listening to concerns and tailoring the system to the cultural context. The lesson is that a technology's effectiveness depends as much on trust and perception as on the technology itself. Public institutions succeed when they treat system design as a partnership with the public and as an ongoing engagement rather than a one-way deployment.
Tarun:
You've highlighted that trust, transparency, and ongoing engagement are not optional, especially as institutions adopt more advanced technologies. Finally, if listeners take away one key insight about technology, public institutions, and social impact from your work, what would you want it to be?
Raj:
If there is one key insight I want listeners to take away, it is that technology by itself does not create social impact. Institutions and communities do. The effectiveness of any system, whether it's telehealth, emergency alerts, health information exchanges, or AI, depends on how well it is embedded in structures of trust, equity, and communication. Technology can expand the possibilities, but if the public doesn't trust the institution or the technology, or if inequities in access and capacity aren't addressed, those possibilities remain unrealized. What my work shows is that resilience and impact come from designing systems that people can understand, trust, rely on, and see themselves reflected in. So the big lesson is this: technology succeeds when institutions treat it not just as a tool, but as a relationship. That goes for AI; it's very important in the age of AI. When systems are built with transparency, inclusivity, and responsiveness, they don't just deliver services. They strengthen the social fabric that makes communities more resilient in the face of crisis. And I wanted to add one more thing here, which I alluded to. With AI, this is extremely important because the data on which the models are built at the time of installation keeps changing as the AI becomes a self-learning tool. So there is a chance of it going outside the guardrails. Important guardrails have to be set so that through the lifetime of that AI technology, we have monitoring from a business perspective, from a social perspective, and from an inclusivity perspective, so that the system doesn't go off kilter. This helps build that trust. And we need to make sure that these guardrails, which come not just from the technologists but from other parts of the organization and society, are built into an ongoing monitoring system for that technology.
Tarun:
Thank you so much, Professor.
Raj:
Thank you.
Tarun:
That was Professor Sharman, and this has been The Baldy Center for Law and Social Policy Podcast produced by the University at Buffalo. Let us know what you think by visiting our X (formerly Twitter) account, @baldycenter, or emailing us at baldycenter@buffalo.edu. To learn more about the Center, visit our website, buffalo.edu/baldycenter. My name is Tarun, and on behalf of the Baldy Center, thank you for listening.
Transcription ends.
Raj Sharman
RELATED LINKS
>Faculty profile
>Center for Information Integrity
RESEARCH BACKGROUND
Raj Sharman's research is focused on extreme events from a decision-support system perspective and on health information technology-related issues. This includes factors influencing online health information search, meaningful use of ambulatory EMR, resilience in hospital information systems, health information exchanges, and healthcare social networks, as well as a simulation-based study for managing the hospital's emergency room capacity in extreme events, active shooter incidents, and mass casualty event management.
Sharman's papers have been published in a number of national and international journals, and he is the recipient of several grants from the university as well as external agencies, including the National Science Foundation.
Sharman serves as an associate editor for the following journals: Journal of Information Systems Security, Journal of Information Privacy and Security, and Springer Security Informatics Journal.
EXPERTISE
Institutions don't succeed or fail because of a crisis itself. They fail or succeed based on the system, the culture, the preparedness that they have built beforehand. [...]
Resilience must be more than just a technology issue. It's really a mindset issue. [...]
Technology itself does not create a social impact. Institutions and communities do. [...]
Technology succeeds when institutions treat it not just as a tool, but as a relationship. [...]
— Raj Sharman, PhD
(The Baldy Center Podcast, Spring 2026)
Tarun Gangadhar
Tarun Gangadhar Vadaparthi is the current host/producer for The Baldy Center Podcast. As a graduate student in Computer Science and Engineering at UB, Vadaparthi's research work lies in machine learning and software development, with a focus on real-time applications and optimization strategies. He holds a bachelor’s degree in electrical engineering from NIT Nagpur and has also completed a summer program on Artificial Intelligence and Machine Learning at the University of Oxford. Vadaparthi's research and projects are rooted in data-driven decision-making, with a strong commitment to practical innovations in technology.
Matthew Dimick, JD, PhD
Professor, UB School of Law;
Director, The Baldy Center
Amanda M. Benzin
Associate Director
The Baldy Center


