About Open Referral (21)
The Open Referral Initiative is an open community of practice, which means that our documentation and artifacts are freely accessible and adaptable by any participant (assuming they abide by our principles and values). Our network is open to anyone, and anyone can start a ‘pilot project’; in some cases, we develop semi-formal or formal partnerships around these pilot projects. Local pilots are usually where new tools, ideas, and practices are developed, tested, and evaluated before being ‘scaled’ through the Initiative itself.
Our Community Forum (a Google Group) is the official channel for updates and occasional discussion. We also host occasional Assemblies (see archived videochats here) and convene in semi-formal workshops. Our Slack is where we chat. Our document archive is publicly accessible on Google Drive.
Lots of people are trying to build Yelp-type applications for social services. That’s not Open Referral’s role. There’s a range of reasons why:
First, information about human services is a lot more complex, variable, and sensitive than restaurant data – and unlike restaurateurs, many human service providers might not see the benefit of being listed in a Yelp for Social Services. This is a problem that simply doesn’t correspond with conventional supply-and-demand market dynamics that make Yelp possible.
Second, we want this information available in many channels and applications – including Yelp itself! Plus Foursquare, Google Places, Facebook, etc! Yet even these wouldn’t meet the needs of every user in every situation. Resource directory data should be itself a public good, freely ‘remixable’ by anyone, not trapped within any one company’s interface.
There are a range of other reasons why we don’t think this problem has one simple ‘Yelp for Social Services’ answer. Check out this talk for more insights, and email email@example.com to discuss further.
Why yes, we have seen that cartoon, many times, thanks — but no, the cartoon doesn’t show that efforts to develop standards are futile. To the contrary! We encourage skeptics of standards to take a closer read. The XKCD cartoon identifies a collective action problem: each competitive effort to replace all other standards with a single winner just adds one more standard to the pile. Collective action problems can be solved, however, by cooperation! If a new standard is designed to enable interoperability among existing standards – voila!
Open Referral takes this cooperative approach. It may be less sexy, but it’s more effective. The Human Service Data Specification and API protocols are designed to enable data interchange among different ‘data languages.’ We don’t have to solve everyone’s use case! We just make it easier for different actors using systems with different standards to cooperate across these boundaries. Instead of one standard that ‘beats’ them all, there’s now one standard that enables cooperation across them all.
Many of us, at one point or another, were hopeful that technology would provide quick fixes to systemic problems. Those hopes haven’t panned out. It turns out that some problems are so tricky that they can only really be fixed by lots of people working together over time.
To be more specific, the root of the community resource directory problem lies in an array of perverse incentives and disincentives across the health, human, and social service sectors. For instance, many providers of health, human, and social services just don’t have strong incentives to be found by more clients! This means their information may not be out there on the internet at all – at least not in the sufficiently detailed and up-to-date form that technical wizardry like web scrapers and chatbots would need to generate reliable results. Ultimately, if we want reliable information about these services over time, human beings need to talk to each other. Open Referral is working to make sure that technology can facilitate and enhance these human interactions – rather than pretending we can eliminate the need for them, which might ultimately get in the way or even cause harm.
We are not! The idea of a Community Information Exchange sounds great, but as we understand it, a CIE is infrastructure that facilitates the exchange of clients’ personal information among health and human service providers. This is outside of our scope: Open Referral focuses specifically on information about service providers themselves – public information about organizations, not sensitive private information about people.
That said, we do have opinions about how communities might go about designing the process of Community Information Exchange infrastructure development. Feel free to reach out and talk about it!
Will Open Referral work with this or that resource referral platform that some organizations in our community are using?
Potentially yes! As an open standards and infrastructure initiative, Open Referral is platform-agnostic. We want to see a world in which there are many platforms that can all interoperate, so that people and organizations in communities can effectively and responsibly interact with each other across organizational and technological boundaries. If there’s a platform in your community, they might already use Open Referral to publish or consume resource data from other partners. If they don’t, maybe they should 🙂
Reach out to explore more opportunities.
When it comes to making decisions for the Initiative as a whole, our simple rule of thumb is rough consensus and running code. We do things that 1) demonstrably work, and which 2) none of our community members find to be outright objectionable.
That said, because this is a complex problem (involving private and public sectors; spanning local, state, and federal boundaries, with many layers of technology that are rapidly changing) our approach to discovering solutions should also be complex. So the Open Referral Initiative entails various levels at which different kinds of decisions are being made by different kinds of participants. (One way to describe this is ‘polycentricity’ — read more about what we mean by that here.) We approach this challenge in an agile way. Rather than try to figure everything out up front, we instead work with stakeholders to identify specific steps that are worth taking in and of themselves, and which incrementally move us in the direction we want to go. We do some things, learn from them, and do more things.
This is driven by a kind of network-oriented ‘advice process’: before we do things, we ask those who will be impacted by a given decision, and those who are experts on the relevant subject, for advice. We synthesize their input and coordinate participation from a wide variety of participants, giving prerogative to our primary stakeholders (i.e. help seekers, service providers, researchers, and database administrators). These processes happen both at the ‘global’ level of Open Referral overall, and at the local level of pilot projects that implement these data exchange protocols — with local decisions being made autonomously by local stakeholders.
At the ‘global’ level, this process entails three activities: 1) a semi-regular Assembly video call, open to all participants [see an archive of these videos here], 2) convenings of diverse stakeholders in Open Referral workshops [read the reportback here], and 3) ad hoc ‘workgroups’ consisting of leaders with a varied set of perspectives and experiences [see the workgroup archive here]. Of all the feedback received from many different contributors, we assign priority to the perspectives of the lead stakeholders of our pilot projects. This feedback is submitted to Open Referral’s deputized technical leads, who ultimately make decisions with documentation and established methods for future review.
There are a number of different projects woven together through the Open Referral Initiative — each with shared goals, but different funding sources.
Sometimes the Open Referral Initiative itself receives a grant or an award for our core operations. The Open Referral Initiative is fiscally sponsored by Aspiration, a 501(c)(3) organization that supports open source nonprofit technology projects.
Sometimes the leadership of the initiative takes on consulting contracts through Open Referral Consulting Services, an LLC owned by Project Lead Greg Bloom. These contracts involve implementation of Open Referral’s protocols, development of related tools, and strategic facilitation of partnership development and business modeling.
Finally, in each of our pilot localities, we are supporting our lead stakeholders (i.e., health clinics, local I&Rs, etc) in their own fundraising efforts to build their internal capacity to participate in the project.
Do you have questions about our budgeting, or ideas about how to build more capacity for our work? Reach out to firstname.lastname@example.org
Open Referral is both the name of this community of practice, and also the shorthand name of our format for community resource data (which is technically known as the Human Services Data Specification).
Open211 is a name that has been used by various groups and organizations over the years. 2-1-1 Ontario has an ‘Open211’ API (the first of its kind, to our knowledge). In 2011, a Code for America fellowship team built an Open211 app (it did not achieve adoption, but we learned valuable lessons from it). One of the first pilot projects for Open Referral is a group of organizations and technologists in DC who describe themselves as ‘DC Open211.’ But it is more of a concept than a formal affiliation.
Ohana is both the name of the Code for America fellowship team that developed a ‘first draft’ of this model, and also the name of an API (Application Programming Interface) that the Ohana team developed in San Mateo county. The Ohana API was subsequently redeveloped by the Ohana team, based on feedback from the Open Referral community, to serve as the initial ‘reference implementation’ of the Open Referral model. [See the Ohana API on Github here.]
No. We recognize that this is a local problem that should entail local solutions. That’s why we’ve developed an open data standard, which can be used by any community to find locally-appropriate methods of data sharing.
No. Open Referral is not a database or a platform. We help other organizations evolve their resource databases into open platforms.
For what it’s worth, we reject the idea that community resource data can or should be treated like private property. It is public information, and organizations that do business with it should be recouping the costs of maintaining it by helping other organizations add value to it, thereby capturing some of that value. It’s a much more strategic and sustainable business model than trying to sell public information. See a report from Miami Open211 about how this might work.
We do recognize that there are some resource directory projects out there that are scraping 2-1-1s’ data. (Some of these projects are non-profit, or all-volunteer, or even for-profit.) Scraping this data from websites is usually technically easy, and it’s legally okay too. We disapprove of this, mostly because it makes it harder to have constructive conversations about the real problem — which is that this data is not currently “open” for machine-readable re-use. We also believe that if community resource directory data were openly accessible in a machine-readable format, ‘scraping’ would be pointless. Instead, people would use such data from its source, in ways from which the source can and should benefit.
Please note that Open Referral is not developing a product. We don’t manage or hold any resource data. Our primary objective is to help communities answer the question of who should maintain resource data. We’d be glad to help you answer that question for your community.
Of course they are!
We think it’s important for this information to be accessible to a whole ecosystem of services, and for the foreseeable future, call centers will be an essential component of a healthy ecosystem. That’s why it’s important for us to develop open standards and infrastructure that can be shared throughout that ecosystem, so that there are a variety of ways to meet people’s needs.
There are existing standards among certified information-and-referral systems, but these are not designed for the open exchange of data among all kinds of systems.
As a result, various organizations and institutions must all independently invest in data production and technology development. This entails many missed opportunities to share infrastructure, reuse code and data, and achieve substantial cost savings.
That said, the work of developing a ‘universal data exchange schema’ does not entail starting from scratch. Rather, it entails aligning with that which already exists.
For the first phase of Open Referral, we have identified a core set of existing standards with which our format (technically known as the Human Services Data Specification) aims to be interoperable (i.e. data can be coherently translated between these formats).
Specifically, existing standards with which Open Referral is interoperable or can become interoperable include:
- Alliance of Information and Referral Systems’ XSD and the AIRS/2-1-1 Taxonomy
- The ‘civic services’ schema proposed to the W3C through Schema.org
- The human services domain of the National Information Exchange Model (NIEM)
- FHIR’s HealthcareService resource
Open Referral recognizes the existence of a diverse array of taxonomies that are used to describe types of services, organizations, and people for whom services are available. Given that such categories are inherently subjective – whereas Open Referral’s Human Services Data Specification only describes factual data – we do not prescribe a specific taxonomy.
Instead, our data format specifies a way to include any taxonomy in open data, and our API protocols offer methods for supporting multiple taxonomies for use in the same data set.
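To make this concrete, here is a minimal sketch of how one service record might carry terms from more than one taxonomy at once. The field names and taxonomy names below are hypothetical, loosely inspired by HSDS; consult the specification itself for the actual schema.

```python
# A single service tagged with terms from two different taxonomies.
# All names here are illustrative, not the official HSDS fields.
service = {
    "id": "svc-001",
    "name": "Weekend Food Pantry",
    "taxonomy_terms": [
        {"taxonomy": "AIRS/211", "term": "Food Pantries"},
        {"taxonomy": "Local-Custom", "term": "Emergency Food"},
    ],
}

def terms_for(service, taxonomy):
    """Return the terms a service carries under one particular taxonomy."""
    return [
        t["term"]
        for t in service["taxonomy_terms"]
        if t["taxonomy"] == taxonomy
    ]
```

Because each term records which taxonomy it belongs to, consumers that only understand one classification scheme can simply filter for it, while the same dataset remains usable by systems that prefer a different scheme.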
Several prominent taxonomies, such as the 2-1-1 LA Taxonomy, are prohibited from public use by intellectual property claims. This poses barriers to the accessibility of community resource data. We are committed to seeking solutions that can sustainably and responsibly remove barriers to the widespread use of these important classification tools.
We’d welcome opportunities to discuss the prospect of “opening up” any given proprietary service taxonomy, as we believe such a taxonomy could be more widely used, more easily maintained, and more financially sustainable by becoming open access infrastructure. Please reach out to discuss.
Well, we don’t yet know the right solution! We’re just not going to wait around any longer for it to be figured out. So we’re taking action.
Essentially, we are asking: how should this data be open?
This is a wicked problem that requires a lot of different people working together to learn about possible solutions. We believe that the best ways to address wicked problems tend to emerge from the insight and creativity of those who directly experience the problems. So our prerogative is to promote the perspectives and involvement of the true stakeholders — people who have experience seeking help from services, service providers themselves, data administrators at community-based organizations, community health researchers, etc. They’re the ones who will be best able to recognize what viable solutions really look like.
We’ve identified four primary types of use that are relevant to this domain. Read more here for full personas and user stories.
- Seeking help (service users, clients, etc)
- Providing help (service providers, i.e. anyone helping someone find information about services)
- Administering data (anyone engaged in working with community resource data and the technical systems that use it)
- Research (anyone trying to analyze resource data to better understand the allocation of resources in a community).
Through these distinct perspectives, we set the parameters of our research, design, and evaluation. Our format (and the associated tools) should meet all of their needs.
Obviously, ‘help seekers’ are the ultimate stakeholders, and we should consider our work first and foremost from their perspective. Aside from this premise, we do not prioritize one stakeholder’s needs above another.
However, we do have a specific tactical analysis that guides our work:
We believe that the most immediate and urgent objective is to improve the ability for all kinds of service providers to make effective referrals with accurate information. One of our core hypotheses is that if/when an ‘open system’ meets the needs of the service providers in its community, those service providers will play a critical role in maintaining the accuracy of its information.
Yet we also recognize that an increasingly common ‘use case’ is an individual searching the web themselves. Surely we want more of those self-performed searches to be effective. So, an open platform must achieve sustainability such that its information is readily findable through direct web searches. (Of course, even given success on both of those counts, we still assume there would remain a need for trained referral specialists — especially for complex situations, edge cases, etc.)
Finally, when it comes to actually adopting and using open data standards and platforms, we recognize that the most operational type of use is data administration. In other words, our format and tools must be readily usable by anyone who updates this information and manages the technology that stores and delivers it.
As adoption of open data standards makes it easier to solve the problems of maintaining resource directory information, we anticipate that it will become a lot easier for the many different types of service providers to allocate their resources as effectively as possible toward delivering and acting upon this information.
Open Referral is led by local pilot projects in which stakeholders take action towards establishing accessible, interoperable and reliable community resource directory data. Pilots commit to using the Open Referral data model to exchange resource directory data among institutions — and in return, their feedback is prioritized in shaping the iteration of that model. The goals of pilot projects include demonstrating short-term value of standardized/open data exchange, while developing a plan for long-term sustainability. [Learn more.]
The formation of a pilot project probably starts with a champion who has credibility in the community and the drive to convene others around an effort to solve this problem. We expect this champion to emerge from either local government, a community anchor institution or a local referral provider.
A fully-formed pilot project should include some combination of government, community anchor, and referral provider. It should have investment, and ideally active engagement, from local funding institutions that invest in safety net services. And a pilot should establish capacity for coordination that stands among these different institutional stakeholders — enabling each organization to identify and address its own needs, while facilitating a conversation about the collective interests in the community.
Interested in getting started in your community? Reach out to email@example.com
This is an open source initiative, by which we mean that anyone can freely participate in it and even adapt any of our content for their own purposes. There are lots of ways that you might be able to get involved. For example…
“I build software / do data magic / like helping open source technology projects.”
If you live in the area of one of our pilot projects, you can be very helpful indeed. If you don’t live nearby a pilot project, you might be able to help start one yourself. Attend a local Code for America brigade, or some other civic technology network activity, and ask around to see if anyone else is already working on projects involving resource data.
“I am an Information-and-referral provider.”
Read through our data specification, ask any questions that come to mind — and if we don’t know the answers, help us figure them out. Make suggestions for ways to improve the spec.
Even more importantly, identify your own needs: what do you want to see happen? In a world where community resource directory data could flow among systems, where would you want to see it flow? It can be quite valuable to simply scope out an actionable ‘use case’ (some specific action that would benefit some specific set of users).
“I work in health, human, and/or social services.”
You may be one of our most important kinds of participants. Our work only succeeds if it can help you better serve your clients. You can help us identify, scope, and implement a ‘use case,’ in which we facilitate an open data exchange that can improve the deliverability of your services and/or services in your community. Help us get there.
“I don’t code, I’m just a citizen and I want to help!”
There is LOTS of work to be done by people who don’t code! First, read through our documentation, and ask us questions about anything that’s unclear. Then, for example, you might start learning about how information about services gets collected in your community. Talk to the people who are already producing resource directories; see if they’re interested in finding new ways to produce and/or use this information. If so, write a summary of how they do their work and what they say that they need.
NOTE: The most powerful way to help may be to find and build relationships with a group of people who have all of the above experiences. If you can form a team in your community consisting of some combination of civic technologists, service providers, with support from local government and/or funders, we will help you launch a pilot Open Referral project!
Every community faces a similar challenge: there are many different kinds of health, human, and social services that are available to people in need, yet no one way that information about them is produced and shared. Instead, many organizations collect community resource directory data in redundant isolation from each other — yielding fragmented silos.
As a result, it’s hard to ‘see’ all the many different parts of our safety net. Many people never discover services that could help improve their lives. Service providers spend precious time verifying data rather than helping people. And without access to this information, decision-makers struggle to evaluate community health and program effectiveness. This yields underperforming systems that fail people and communities in tragic ways.
Government agencies are increasingly managing public information as open data, as the world wide web is expanding the use of data standards for civic information. These shifts toward standardized, open civic data make it easier to share data across heterogeneous information systems; to develop and redeploy new technologies at lower cost; and generally to increase the value of data, and the capacities of the networks and communities that use it.
For example, Schema.org (a consortium of search engines that develops standards for web markup) has proposed a ‘civic services’ schema to the World Wide Web Consortium (the W3C). This schema can enable information about organizations, the services they provide, and the location of those services, to be indexed and delivered in more effective ways by platforms like Google, Bing, Yelp, etc.
Given this opportunity to make it easier for information about resources to be discovered on the web, Open Referral convened a network of referral providers, governments, funders, civic institutions, and technologists, and focused our efforts on a shared goal: let’s make it as easy as possible to publish, find, trust, and use resource directory information — in any number of ways. That entails the establishment of interoperability between emergent web platforms and conventional information-and-referral systems, and it also entails the development of new methods by which we can sustain the production of resource directory data as open data — a public good.
To make the most of this opportunity, the Open Referral Initiative has developed a data exchange format that can enable many information systems to share the same data. The Alliance of Information and Referral Systems has formally endorsed our protocols as industry standards for interoperable resource data exchange. This means it’s finally possible to break open these silos. Now we have to make it easy.
Open means ‘free,’ as in ‘free speech.’ We are all entitled to it by fundamental right.
Open means accessible. We have “open access” to things like roads and libraries — these are public goods, and anyone is able to use them. Likewise for our computer technology: open data can be accessed and used not just in one system, but any capable system.
Open does NOT necessarily mean ‘anything goes.’ Books have to be returned to the library, and in good condition. Roads have speed limits. You can’t yell ‘fire’ in a crowded theater. Etc.
Open does NOT necessarily mean ‘free’ as in without cost. Some roads have tolls; all roads need to be maintained. For something to exist in an open state, a lot of energy and resources must go into keeping it so. Those resources must come from somewhere (and, in the case of resource directory data, we don’t assume they will automagically crowdsource themselves).
‘Open’ can mean many things, but at its core, ‘open data’ entails:
Accessibility: open data is accessible as a “machine-readable” resource, meaning it can be ingested and displayed by computer programs, and presumably downloadable over the internet. (There can be reasonable reproduction costs associated with certain kinds of access to open data.)
Reuse and Redistribution: open data is provided under terms that permit reuse and redistribution, including the intermixing with other datasets (although open data can be licensed to prohibit changes or to require documentation of changes). There should be no discrimination against fields of endeavor or against persons or groups (although open data can be published with ‘dual licenses’ that specify different conditions for different uses).
Openness entails a state of possibility.
When it comes to public information, data becomes more valuable when more people use it. (Conversely, resource data is less valuable when fewer people use it.) When it’s easy to access and use resource data by any means, it becomes easier for more people to do more things with the data, and as more people do more things with the data, feedback on the quality of the data increases, data about the use of the data can be collected and analyzed — and the maintainers of the data become more critical to the entire ecosystem.
Open Referral’s core question is about how resource data can be sustainably maintained as an openly accessible public good.
Broadly speaking, people seek ‘referrals’ to resources that can help them meet their needs. Community resource data is comprised of information about health, human, and social services available to people in need — which organizations provide these services, and how they can be accessed.
Some services are provided by non-profit organizations, and other civic or cultural groups. Others are provided by local, state, even federal governments. All of these entities share information about their resources in different ways.
‘Information and referral’ refers to the field in which information about services is aggregated in community resource directories, and delivered (via referral) to people seeking help.
The Alliance of Information and Referral Systems (AIRS) is an accrediting agency that certifies ‘information and referral’ providers throughout North America. AIRS promotes official standards for ‘information and referral’ services, ranging from operational standards to data standards. AIRS has collaborated with the Open Referral Initiative from its inception, and in 2018 AIRS formally endorsed our protocols as industry standards for interoperable resource data exchange.
By standards, we refer to common ways of doing things. In the case of data standards, that means an agreed-upon set of terms and relationships that define and structure information, so that it can be readily transferred between systems.
With such common agreements, different technologies can ‘speak’ to each other — making it easier to integrate systems, and develop, redeploy, and scale new tools.
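A toy illustration of what a shared standard buys you: each system only needs one mapping to and from the common vocabulary, instead of a separate mapping to every other system it might meet. All field names below are invented for the example; they are not drawn from any actual vendor schema.

```python
# How a hypothetical "System A" names its fields, mapped onto a
# hypothetical shared standard's field names.
SYSTEM_A_TO_STANDARD = {
    "agency_name": "organization_name",
    "phone_no": "phone",
}

def to_standard(record, mapping):
    """Rename a record's fields into the standard vocabulary.

    Fields with no mapping pass through unchanged.
    """
    return {mapping.get(key, key): value for key, value in record.items()}

record_a = {"agency_name": "Harbor House", "phone_no": "555-0100"}
standard_record = to_standard(record_a, SYSTEM_A_TO_STANDARD)
```

With n systems, pairwise translation would require on the order of n² such mappings; translating through one common standard requires only n.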
For resource directory information providers, the development of standards means that resource data can be published once and accessed simultaneously in many ways. That’s how the internet became the World Wide Web.
Standardizing data across places and institutions also makes it easier to analyze and evaluate data, which makes it easier to understand patterns and trends — including, in the case of community resource data, the health of communities and the effectiveness of our safety net.
Furthermore, the process of developing standards helps to bring stakeholders together. By building a community among users, producers, and service providers, we can accelerate the process of learning and innovation towards our shared vision of helping people and improving the health of communities.
With increasing adoption of open standards for resource directory data, we anticipate:
- Decreased cost of data production (as data produced once can circulate through many systems)
- Improved quality of data (as more use generates more user feedback)
- Improved findability of data through web search and an ecosystem of tools and applications
- Decreased cost and improved quality of new and redeployed tools (websites, applications, etc).
- Improved quality of referral services (as patterns of resource allocation shift from maintaining data to delivering data)
- Meaningful use of resource data for research purposes, such as community health assessment, and policy-making and resource allocation.
- Healthier people and more resilient communities.
A ‘platform’ is an ambiguous term that could mean a lot of different things — here we use it to refer to a system that connects producers and consumers, enabling them to conduct their own activities using external systems (which can be ‘built on’ the platform).
An API is an “application programming interface,” which provides instructions for computer programs to interact with a database. For example, you can get a forecast from the National Weather Service by going to Weather.Gov. But the NWS also offers a web service (i.e. an API) that allows external applications to access the NWS database in real time. This enables developers to build applications that connect to the NWS ‘platform’ in order to seamlessly provide public weather data to skiers, photographers, rainbow chasers, etc. (To learn more about this example, see this segment from John Oliver’s Last Week Tonight about the National Weather Service and the importance of open platforms for public information.)
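The weather example in miniature: an API returns machine-readable data that any external application can reshape for its own audience. The JSON below is a made-up, simplified stand-in for a forecast API payload, not the real NWS response.

```python
import json

# A simplified, invented stand-in for what a forecast API might return.
api_response = json.loads("""
{
  "properties": {
    "periods": [
      {"name": "Tonight", "temperature": 54, "shortForecast": "Partly Cloudy"},
      {"name": "Tuesday", "temperature": 68, "shortForecast": "Sunny"}
    ]
  }
}
""")

def forecast_summary(response):
    """Turn the raw payload into lines any app could display to its users."""
    return [
        f"{p['name']}: {p['shortForecast']}, {p['temperature']}F"
        for p in response["properties"]["periods"]
    ]
```

The point is that the platform operator never has to anticipate the skiing app or the photography app; it only has to publish data in a predictable, documented shape.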
Platforms enable their data to be accessed and used in all kinds of ways, many of which would not or could not be provided by those who operate the platform themselves.
By ‘open platform,’ we specifically mean three things:
– A system that facilitates the management, publication, and access of open data
– A system powered by technology that is freely available through open licenses
– A system in which interoperability and integration are the primary design objectives
There are a number of factors that limit the reliability of organizations as sources of information about their own services:
Organizations might not designate the responsibility for managing all of this information to any single person. A single organization might offer many services through various programs at multiple locations. And these are often stressed environments with limited technical capacity and overburdened staff. It can be hard for organizations to keep track of all of their own services!
Organizations sometimes submit information about services that is vague or not entirely accurate. When updating their own records, organizations’ staff sometimes submit text that is written to promote the organization in general rather than to precisely describe its services. This tends to yield information that is not useful to someone who is looking for a specific service.
Organizations are asked to update their information so many times in so many different community resource directories that they get confused or frustrated.
Keeping this information up to date just isn’t a high priority when organizations already have more clients coming through their doors than they can handle.
As a result, we assume that organizations’ self-reported updates should be considered one input among many in the effort to produce and maintain accurate data about services.
Governments and funders do require various kinds of data about their grantees, but it’s generally non-standardized and not specifically about services themselves.
We anticipate that, as Open Referral becomes adopted by more institutions, governments and funders may begin mandating this information as a condition of funding. (But it doesn’t seem feasible to attempt to make that happen before a format has been demonstrated as viable and gaining adoption!)
About the Human Service Data Specification (8)
AKA ‘the Open Referral format,’ the HSDS is a data interchange format that enables resource directory data to be published in bulk for use by many systems. HSDS provides a common vocabulary for information about services, the organizations that provide them, and the locations where they can be accessed. HSDS is essentially an interlingua — in other words, it’s a common language that can be used by anyone to enable community resource directories to ’talk’ to each other. [See the data specification in Github or on our Documentation Site.]
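As a rough illustration of what such an interchange looks like, an HSDS-style publication might include tables like the following. (This is a hypothetical sketch: the field names approximate the spec, and the organization is fictitious; consult the documentation for the authoritative schema.)

```
organizations.csv:
id,name,description,url
org-1,Example Food Pantry,"A neighborhood food pantry",http://example.org

services.csv:
id,organization_id,name,status
svc-1,org-1,Weekly Grocery Distribution,active
```

Because any system can read and write plain tables like these, two directories that have never exchanged data before can still ‘talk’ to each other through the common vocabulary.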
We believe that development of an open, standardized format is a necessary step in a process of reducing the costs of producing directory data, increasing the quality of such data, and promoting its re-use in valuable ways.
We expect that adoption of HSDS will make it easier to find, use, and evaluate information about services. As a result, more people will access services and service providers will be better able to address complex needs; people will live happier and healthier lives; decision-makers will be better informed about the needs of their communities; and ultimately communities will become more resilient.
The Open Referral initiative used multiple methods of research and development to establish this data specification.
First, leaders of our pilot projects worked with stakeholders in their communities to develop a series of ‘user profiles’ that described the needs and behaviors of specific users.
Then, at our Open Referral workshop in the summer of 2014, we compiled a set of ‘user personas’ that each describe one of four broad categories of use: seeking help, providing help, analyzing data, and administering data. [Read here for the set of user personas developed through activities such as the Open Referral workshop.]
With this set of insights, we drafted an initial version of the specification that was then reviewed through several rounds of community feedback. During this time, members of our diverse network debated, clarified, and expanded the contents.
Finally, we conducted initial tests of HSDS by using it to transform resource directory databases from pilot projects around the country.
The primary users of HSDS are data administrators (who are responsible for managing systems that strive to meet the needs of other users). [Read our user personas here.]
We define ‘data administrator’ broadly: while some data admins will be sophisticated managers of enterprise-grade referral systems, the vast majority of people who produce resource directory data are working with simpler technology such as Access, Excel, or even Word. Our goal is for HSDS to be usable by both the 2-1-1 resource data specialist and the IT volunteer who is helping out the neighborhood food pantry.
First, HSDS identifies a vocabulary of terms that describe what a service is, the institution that provides it, where the service can be accessed, and how to access it. These terms are designated as ‘required,’ ‘recommended,’ and ‘optional.’ The spec provides instructions for formatting these terms, with examples.
On a more technical level, HSDS also includes a logical model that diagrams the relationships between these terms.
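The kind of relationship the logical model describes can be sketched in a few lines of code: each service record references the organization that provides it through a foreign key. (This is an assumption-laden illustration, not the spec itself; the column names approximate HSDS and the data is fictitious.)

```python
# Sketch of HSDS-style relationships: services reference their providing
# organization via an organization_id foreign key. Data here is fictitious
# and column names are approximations of the spec.
import csv
import io

organizations_csv = """id,name
org-1,Example Food Pantry
"""

services_csv = """id,organization_id,name,status
svc-1,org-1,Weekly Grocery Distribution,active
"""

# Index organizations by id, then resolve each service's foreign key.
orgs = {row["id"]: row for row in csv.DictReader(io.StringIO(organizations_csv))}
for svc in csv.DictReader(io.StringIO(services_csv)):
    provider = orgs[svc["organization_id"]]
    print(f'{svc["name"]} is provided by {provider["name"]}')
# → Weekly Grocery Distribution is provided by Example Food Pantry
```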
HSDS does not attempt to describe every type of information that might be relevant to people working with resource directory data. We have attempted to maintain a strict focus on specifying only relevant factual attributes that are shared by most services. That means we excluded many kinds of information that are unique to specific kinds of services (such as the accreditation of child care providers, or the availability of beds in a shelter).
HSDS also does not specify a taxonomy of types of services and types of personal attributes that determine eligibility for various types of services. Many such taxonomies already exist, so HSDS merely provides instructions for how to overlay a taxonomy of the user’s choosing. By default, information systems that use HSDS can use the open source Open Eligibility taxonomy. (Expect future cycles of the Open Referral initiative to take on these issues more directly; however, for now we are merely looking to learn from the different ways in which various users address these common problems.)
Finally, HSDS does not specify any information regarding how referrals actually get made (e.g. setting appointments, following up) or feedback regarding the quality of those services. These kinds of information are critically important, but inherently so variable and context-dependent that we don’t think it’s feasible or appropriate to specify them at this point in time.
That said, this model can and should be extended! Users can expand HSDS to meet their own needs, in their own systems. Groups of stakeholders from particular subdomains can develop extended ‘profiles’ that are tailored to their situation. (A group of civil legal service providers have already begun working on precisely that.) In future iterations of the Open Referral process, these expansions will then be considered for inclusion as part of the primary model.
With the goal of broad accessibility in mind, the initial HSDS developer Sophia Parafina chose Comma-Separated Values (CSV) as the building blocks for HSDS. CSV serves as a ‘lowest common denominator’ that is simplest to use and most accessible to users with a modicum of technical ability, as it can be edited in a simple text editor and ingested by almost any information system. (For more reasoning behind this decision, consider Waldo Jaquith’s recent post, ‘In Praise of CSV.’)
For version 1.0, Parafina chose to accompany the set of CSV files with a JSON datapackage (using the Open Knowledge Foundation’s Frictionless Data specification) that describes the CSVs’ contents. In version 1.2, with support from Open Knowledge Foundation’s Frictionless Data Fund, Shelby Switzer upgraded the handling of JSON datapackages.
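A datapackage descriptor is simply a JSON file that names each CSV and declares the type of each column. The fragment below is a minimal hypothetical example following Frictionless Data conventions; the actual HSDS datapackage enumerates every table and field in the spec:

```json
{
  "name": "example-resource-directory",
  "resources": [
    {
      "path": "services.csv",
      "schema": {
        "fields": [
          {"name": "id", "type": "string", "constraints": {"required": true}},
          {"name": "organization_id", "type": "string"},
          {"name": "name", "type": "string"}
        ]
      }
    }
  ]
}
```

Pairing plain CSVs with a machine-readable descriptor like this lets tools validate a publication automatically while keeping the data itself editable by anyone.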
Members of the Open Referral community have observed that they may need more structured data formats for use cases that involve complex, sensitive, and/or large-scale uses. We recognize the validity of these perspectives, and in fact we expect the HSDS model to evolve over time. Pilot projects and community members are already discussing plans to develop complementary formats (such as XML and JSON-LD) — and as these formats are field-tested and validated, they may become formal components of HSDS in future iterations.
Our governance model is structured around three activities: 1) a semi-regular Assembly video call, open to all participants [see an archive of these videos here], 2) convenings of diverse stakeholders in Open Referral workshops [read the reportback here], and 3) ad hoc ‘workgroups’ consisting of leaders with a varied set of perspectives and experiences [see the workgroup archive here].
Of all the feedback received from many different contributors, we assign priority to the perspectives of the lead stakeholders of our pilot projects. This feedback is submitted to Open Referral’s deputized technical leads, who ultimately make decisions, document them, and establish methods for future review.
[Open Referral’s initial governance model is described in more detail in this memo. You can also read more about the nature of this ‘polycentric’ approach to governance in Derek Coursen’s blog post here.]