Geert-Jan Van Bussel
  • BPH, Rhijnspoorplein 1, 1091 GC Amsterdam
    ORCID: 0000-0002-3750-9682
  • +31 6 2115 6325
Many governments and companies think that the new privacy law (GDPR) will not affect them much. But do they know where they store all their data? And what happens to it? Data lying around everywhere is a major problem, say Hans Henseler and Geert-Jan van Bussel. Companies and governments do not have their information governance and information value chain under control.
Archives are, more than ever, organizational and technological constructs, based on organizational demands, desires, and considerations influencing configuration, management, appraisal, and preservation. For that reason, they are, more than ever, distortions of reality, offering biased (and/or manipulated) images of the past and presenting an extremely simplified mirror of social reality. The information objects within that archive are (again: more than ever) fragile, manipulable, of disputable provenance, doubtful context, and uncertain quality. Their authenticity is in jeopardy.

The “Allure of Digital Archives” will be more about finding knowledge about the archive as a whole than about finding knowledge hidden in the information objects that are its constituents. It will be about determining the value of a digital archive as a “trusted” resource for historical research. To be successful in that endeavour, it will be necessary to assess the possibility of “reconstructing the past” of the digital archive. That assessment would allow historians to understand the quality, provenance, context, content, and accessibility of the digital archive, not only in its design stage but throughout its life cycle.

In this chapter, I present the theoretical framework of the “Archive–as–Is” as an instrument for such an assessment. Historians can use this framework as a declarative model for the way archives have been designed, configured, managed, and maintained. It allows them to understand why archives are as they are, and why records are (or are not) part of them. Using the framework, historians can determine the research value of a digital archive as a historical resource.

This paper has the objective of finding a viable theoretical foundation for Enterprise Information Management (EIM) in World 2.0. The framework of the “Archive-as-Is” is an organization-oriented archival theory. The framework is a declarative model for understanding the archive “as is”: how it has been designed, constructed, processed, manipulated, and managed, and how it has “grown” to be the archive that the organization that generated it wanted it to be. From the moment of their creation, archives are distortions of reality, only presenting biased images of the past due to the way organizations (and people) “behave”. Contextualizing (by archivists) will be crucial to “correct” that distortion as much as possible. The challenge in World 2.0 is to ensure that the organizational archive can be used as a “trusted” resource and be managed in such a way that an organization can survive the challenges of World 2.0. The theoretical framework of the “Archive-as-Is” may be the model that could be used to realize just that.
The abundance of (structured and unstructured) information objects leads to organizational challenges. The need to facilitate fail-proof information management that guarantees accountability, compliance, and security is by no means new. Until a few years ago, organizations captured and controlled information objects in an infrastructure that did not cross the borders of the organizational structure. If accountability, compliance, security, or other business-related issues arose, there was a single ‘point of control’. That ‘point of control’ became diffused with the ongoing integration of business processes between different organizations, stimulated by the sharing of information objects through (for instance) social media and by the breakthrough of supply chain and ERP systems that caused information integration. As it became common practice to share information objects between different parties, it could become difficult to ascertain which of the integrated process owners was responsible for accountability, compliance, security, or information accessibility. Traditional methods and technologies are proving unable to achieve the expected information quality, compliance, and information governance. Guaranteeing an accountable, compliant, transparent, and effectively performing organization in a dynamically changing ICT environment, recognizing both structured and unstructured information objects, is problematic. EIM’s focus is changing to incorporate unstructured information objects, but it lacks the theoretical foundation to do so effectively. The key to such a theoretical foundation for EIM may be ‘the archive’. Smith and Steadman (1981) already acknowledged organizational archives as crucial resources for defining business strategies. They are very important for organizational accountability, business process performance, and reaching business objectives. For EIM to find a theoretical foundation based on records and archives, only archival science seems to offer applicable, encompassing theoretical frameworks.
In this part, I will extensively discuss the theoretical framework of the ‘Archive-as-Is’. I developed the theory as a pragmatic view on archives and records, their genesis, construction, use, and continuous management. The ‘Archive-as-Is’ is a declarative model for understanding the archive of an organization (or organizational chain): how it has been designed, created, processed, manipulated, and managed as a valuable business resource. This framework explains how the archive has ‘grown’ to be the archive that the organization or the person that generated it wants it to be (in short: the ‘Archive-as-Is’). An overview of the conceptual background of the theoretical framework will follow this introduction. After that, I will elaborate on the assumptions on which the theoretical framework is based, followed by a graphical model of the framework. The next part will be an in-depth discussion of all components of the framework. This part of the article concludes with several remarks on the findings and on further research.
More than 80% of all information in an organization is unstructured, created by knowledge workers engaged in peer-to-peer networks of expertise to share knowledge across organizational boundaries. Enterprise Information Systems (EIS) do not integrate unstructured information. At best, they integrate links to unstructured information connected with structured information in their databases. The amount of unstructured information is rising quickly, and ensuring its quality is difficult. It is often inaccessible, unavailable, incomplete, irrelevant, untimely, inaccurate, and/or incomprehensible. It becomes problematic to reconstruct what has happened in organizations. When used for organizational policies, decisions, products, actions, and transactions, structured and unstructured information are called records. They are an entity of information, consisting of an information object (structured or unstructured) and its metadata. They are important for organizational accountability and business process performance, for without them the reconstruction of past happenings and meaningful production become an impossibility. Organization-wide management of records is not a common functionality for EIS, resulting in [1] fragmentation in the management of records, where structured and unstructured information objects are stored in a variety of systems, unconnected with their metadata; [2] fragmentation in metadata management, leading to a loss of contextuality because metadata are separated from their information objects; and [3] a declining quality of records, because their provenance, integrity, and preservation are in peril. Organizational accountability is based on records and their context to reconstruct the past. Because records are not controlled by EIS, they can only marginally be used for accountability. The challenge for organizational accountability is to generate trusted records: fixed and contextual information objects inseparably linked with metadata that capture context, to regain evidential value and to allow for the reconstruction of the past. The research question of this paper is how to capture records and their context within EIS to regain the evidential value of records and to allow for a more robust organizational accountability. To find an answer, we need to pay attention to the concept of context, to how to capture context in metadata, and to how to embed and manage records in EIS.
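As a minimal illustration of the notion of a trusted record described above (an information object inseparably linked with contextual metadata, fixed at creation), a hypothetical Python sketch follows; the class and field names are my own assumptions, not part of the paper or of any EIS product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib
import json


@dataclass(frozen=True)  # frozen: the record is fixed once created
class Record:
    """A record as an entity of an information object plus its contextual metadata."""
    information_object: bytes   # structured or unstructured content
    metadata: dict              # provenance, business process, actors
    captured_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    @property
    def fixity(self) -> str:
        """Hash over content and metadata together, so neither can change unnoticed."""
        payload = self.information_object + json.dumps(self.metadata, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


# Usage: the metadata travel with the information object, never separated from it.
record = Record(
    information_object=b"Decision on permit 2014/031",
    metadata={"process": "permit handling", "actor": "department X", "provenance": "EIS export"},
)
print(record.fixity)
```

The point of the sketch is only that context is captured as metadata bound to the object itself, so evidential value does not depend on the database that happens to store it.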
According to technology pundits, we are at the brink of an information (technology) revolution. Within a decade, the “Internet of Things” (IoT), the interconnection of uniquely identifiable devices within the Internet infrastructure (Holler et al., 2014), will generate huge amounts of digital data. This data may be applied to manage the urban environments in which the majority of the world’s population is living, turning those environments into ‘smart cities’. The subject of the smart city is discussed extensively within scientific and political communities. Most attention is paid to the new and exciting possibilities that integrated information technology systems (ICTs) have to offer to citizens of these smart cities (Townsend, 2013). What is less discussed is the process of information management (IM) that is instrumental to the application of ICTs within a smart city. The huge amounts of data necessary to manage a smart city require IM models that match the unprecedented scale of data processing involved. This matters, because it is acknowledged in the literature that the societal impact of this scale of data processing cannot be predicted (Mayer-Schönberger & Cukier, 2013). Proper attention to the IM issues that will emerge as smart cities are implemented is therefore highly relevant. In this chapter we explore smart cities: those cities that succeed in applying ICTs at a practical level and harvest the benefits of the IoT. We discuss the application of ICTs and look at the aspects of digital data that are relevant in the ‘information value chain’ (IVC) that is being executed. We follow the flow of data from the initial cue that starts the process, as picked up by sensors, through the interfaces that provide interaction between the city and the individual citizen, to the intelligence behind the screens that is responsible for the delivery of applicable information to the citizen. We start with a short sketch of the ideas and ideals that underlie the smart city, followed by a discussion of the building blocks - sensors, screens and actuators - that enable the city environment to interact with the citizen on the street. After that, we delve into the IVC, following the path data takes along that chain in the course of its value creation for the city.
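As a rough illustration of the flow sketched above (sensor cue, intelligence behind the screens, interface back to the citizen), here is a toy Python pipeline; the stage names and threshold are illustrative assumptions, not taken from the chapter.

```python
from typing import Callable

# Each stage of the information value chain is modelled as a function
# that takes data and returns enriched data.
Stage = Callable[[dict], dict]


def sense(cue: dict) -> dict:
    """A sensor picks up a cue from the urban environment."""
    return {**cue, "captured": True}


def analyse(data: dict) -> dict:
    """The 'intelligence behind the screens' turns raw data into applicable information."""
    data["advice"] = "reroute traffic" if data.get("congestion", 0) > 0.8 else "no action"
    return data


def present(data: dict) -> dict:
    """An interface (screen or actuator) delivers the information back to the citizen."""
    data["displayed"] = f"Advice: {data['advice']}"
    return data


def run_chain(cue: dict, stages: list[Stage]) -> dict:
    for stage in stages:
        cue = stage(cue)
    return cue


print(run_chain({"sensor": "loop-42", "congestion": 0.9}, [sense, analyse, present]))
```

The sketch only makes the chain's direction visible: value is added stage by stage as data moves from cue to citizen-facing information.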
In this paper we explore the extent to which privacy enhancing technologies (PETs) could be effective in providing privacy to citizens. The rapid development of ubiquitous computing and ‘the internet of things’ is leading to Big Data and the application of Predictive Analytics, effectively merging the real world with cyberspace. The power of information technology is increasingly used to provide personalised services to citizens, leading to the availability of huge amounts of sensitive data about individuals, with potential and actual privacy-eroding effects. To protect the private sphere, deemed essential in a state of law, information and communication systems (ICTs) should meet the requirements laid down in numerous privacy regulations. Sensitive personal information may be captured by organizations, provided that the person providing the information consents to the information being gathered, and it may only be used for the express purpose the information was gathered for. Any other use of information about persons without their consent is prohibited by law, notwithstanding legal exceptions. If regulations are properly translated into written code, they will be part of the outcomes of an ICT, and that ICT will therefore be privacy compliant. We conclude that privacy compliance in the ‘technological’ sense cannot meet citizens’ concerns completely, and should therefore be augmented by a conceptual model that makes privacy impact assessments at the level of citizens’ lives possible.
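As a minimal illustration of the purpose-limitation rule mentioned above (personal data may only be used for the express purpose it was gathered for), a hypothetical Python check follows; the function and field names are assumptions for illustration, not an existing PET or API.

```python
class PurposeLimitationError(Exception):
    """Raised when personal data is requested for a purpose not consented to."""


def use_personal_data(record: dict, requested_purpose: str) -> dict:
    """Release personal data only if the data subject consented to this purpose."""
    consented = record.get("consented_purposes", [])
    if requested_purpose not in consented:
        raise PurposeLimitationError(
            f"Purpose '{requested_purpose}' not covered by consent {consented}"
        )
    return record["data"]


# Usage: consent was given for service delivery, not for marketing.
citizen_record = {
    "data": {"name": "J. Jansen", "city": "Amsterdam"},
    "consented_purposes": ["service_delivery"],
}
use_personal_data(citizen_record, "service_delivery")   # allowed
# use_personal_data(citizen_record, "marketing")        # would raise PurposeLimitationError
```

This is regulation "translated into written code" in the most literal sense; the paper's conclusion is precisely that such technical compliance alone does not exhaust citizens' privacy concerns.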
According to Johnson & Grandison (2007), failure to safeguard the privacy of users of services provided by private and governmental organisations leaves individuals with the risk of exposure to a number of undesirable effects of information processing. Loss of control over information about a person may lead to fraud, identity theft, and reputation damage, and may cause psychosocial consequences ranging from mild irritation and unease to social exclusion, physical harm or even, in extreme cases, death. Although pooh-poohed by some opinion leaders from the search engine and ICT industries for over a decade, the debate in the wake of events like the tragic case of Amanda Todd could be interpreted as supporting a case for proper attention to citizens’ privacy. Truth be told, for a balanced discussion on privacy in the age of Facebook one should not turn towards the social media environment that seems to hail any new development in big data analysis and profiling-based marketing as a breathtaking innovation. If the myopic view of technology pundits is put aside, a remarkably lively debate on privacy and related issues may be discerned in media and scientific communities alike. A quick keyword search on ‘privacy’, limited to the years 2000-2015, yields huge numbers of publications: WorldCat lists 19,240; ScienceDirect 52,566; IEEE Xplore 71,684; and Google Scholar a staggering 1,880,000. This makes clear that privacy is still a concept considered relevant by the general public as well as academic and professional audiences. Quite impressive for a subject area that has been declared ‘dead’.
This paper results from a research project at Tilburg University, the Netherlands, in which the fields of organization, information and archival studies have been integrated. We argue that the archivistic concept of the record keeping system, in information-intensive organizations, can be used as an instrument for improving the performance of the document flow in a business process and, as a result, of the process itself. Archival documents must not only contain the information related to the result of an activity, but also information on the circumstances of their creation and on organization and business processes. We think this can be realized by using a record keeping system, with as essential elements: context, quality, appraisal, warehousing and logistics. The instrument we have developed as a translation of the conceptual model is the process-specific archival document-file, a meta-file that, after translation, operates as an engine driving document management. The approach was tested in a case study in the city of Veldhoven. From the case study, it became clear that our approach leads to considerable improvements in the flow of documents and thus in the primary processes supported by these documents.
An organization’s information management has two functions: the information function and the accountability function. Using several ‘real-life’ cases, we demonstrate the importance of these functions for achieving business process performance. These cases make clear that an ‘organizational memory’ is needed to realize these functions. To safeguard the quality of this ‘memory’, as we will show, content auditing is essential.
The paper results from a research project at Tilburg University in which organization, information and archival studies have been integrated. We argue that the archivistic concept of the record keeping system (RKS) can be used as an instrument for improving the performance of the document flow in a business process and, as a result, of the process itself. Archival documents contain information related to the result of an activity, to the circumstances of their creation, and to organization and business processes. The elements of an RKS are context, quality, appraisal, warehousing and logistics. The translation of our conceptual model is the process-specific archival document-file, a meta-file that operates as an engine driving document management. The approach was tested in a case study. It became clear that our approach leads to considerable improvements in the flow of documents and thus in the primary processes supported by these documents.
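As a rough sketch of how such a process-specific archival document-file (meta-file) could be represented, assuming a simple Python structure built around the five RKS elements named above; the field names and routing logic are illustrative assumptions, not the instrument described in the paper.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class MetaFile:
    """Process-specific archival document-file driving document management."""
    process: str                                        # the business process the file belongs to
    context: dict = field(default_factory=dict)         # creation circumstances, actors
    quality: dict = field(default_factory=dict)         # completeness and accuracy checks
    appraisal: str = "retain"                           # retention/disposal decision
    warehousing: str = "repository-A"                   # where the documents are stored
    logistics: list = field(default_factory=list)       # routing steps through the process

    def next_step(self) -> Optional[str]:
        """The meta-file acts as an 'engine': it says where a document should go next."""
        return self.logistics.pop(0) if self.logistics else None


permit_file = MetaFile(
    process="building permit",
    context={"department": "Public Works", "case": "2003-117"},
    logistics=["intake", "assessment", "decision", "archive"],
)
print(permit_file.next_step())  # -> "intake"
```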
The world is changing rapidly. It is becoming an increasingly information-rich and information-dependent platform. Information is easily and (mostly) automatically recorded and stored to be accessed and retrieved at a later date. ICTs contribute to a (seemingly) inescapable loss of privacy, because this information is processed without knowledge or consent from individual people. Companies are building new ecosystems online, with online shops, communities, user groups, and other ways to promote their products. The economy is developing into a digitized economy. All boundaries between the virtual and the real worlds are blurring. The digital universe is expanding in unprecedented ways. But so much information is generated, stored, and used that its accessibility is in jeopardy, because identifying the right information is becoming ever more difficult. To protect privacy and to enhance accessibility, the global legal frameworks are expanding, creating problems in implementing compliance frameworks for public and private organizations alike. Organizational accountability is dependent on accessible information. The public expects objectives such as transparency, privacy, due process, compliance, and security of organizational information to be implemented within legal frameworks. Not meeting those objectives is extremely 'bad for business'. For realizing information access, archiving is extremely important. Archiving is managing information over time using the 'information value chain' to guarantee the four dimensions of information (quality, context, relevance, and survival). It is quite surprising that almost no research has been done on the relationship between information accessibility, archiving, and the public demand for organizational accountability. For eGovernment to succeed, those three subjects are of vital importance.
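As a small illustration of the four dimensions of information named above (quality, context, relevance, survival), a hypothetical Python helper that gives a pass/fail verdict per dimension for one archived object; the checks and field names are assumptions made for the sketch, not criteria defined in the paper.

```python
from datetime import date


def assess_archived_object(obj: dict) -> dict:
    """Return a pass/fail verdict per information dimension for one archived object."""
    return {
        "quality":   bool(obj.get("checksum_verified")),                  # integrity confirmed
        "context":   bool(obj.get("metadata")),                           # provenance and process context present
        "relevance": obj.get("retention_until", 0) >= date.today().year,  # still within its retention period
        "survival":  obj.get("format") in {"PDF/A", "XML"},               # stored in a preservable format
    }


verdict = assess_archived_object({
    "checksum_verified": True,
    "metadata": {"process": "tax assessment"},
    "retention_until": 2030,
    "format": "PDF/A",
})
print(all(verdict.values()), verdict)
```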
The external expectations of organizational accountability force organizational leaders to find solutions and answers in organizational (and information) governance to assuage the feelings of doubt and unease about the behaviour of the organization and its employees that are continuously expressed in the organizational environment.

Organizational leaders have to align the interests of their share– and stakeholders in finding a balance between performance and accountability, individual and collective ethical approaches, and business ethics based on compliance, based on integrity, or both. They have to integrate accountability in organizational governance based on a strategy that defines boundaries for rules and routines. They need to define authority structures and find ways to control the behaviour of their employees, without being very restrictive and coercive. They have to implement accountability structures in organizational interactions that are extremely complex, nonlinear, and dynamic, in which (mostly informal) relational networks of employees traverse formal structures.

Formal processes, rules, and regulations, used for control and compliance, cannot handle such environments, which are continuously in ‘social flux’, unpredictable, unstable, and (largely) unmanageable. It is a challenging task that demands exceptional management skills from organizational leaders. The external expectations of accountability cannot be neglected, even if it is not always clear what exactly is meant by that concept.

Why is this (very old) concept still of importance for modern organizations?

In this book, organizational governance, information governance, and accountability are the core subjects, as is the relationship between them. A framework is presented of twelve manifestations of organizational accountability that every organization has to deal with. An approach is introduced for strategically governing organizational accountability, with three components: behaviour, accountability, and external assessments.

The core proposition of this book is that without paying strategic attention to the behaviour of employees and managers and to information governance and management, it will be extremely difficult for organizational leaders to find a balance between the two objectives of organizational governance: performance and accountability.
In 2017, I introduced a new theoretical framework in Archival Science, that of the ‘Archive–as–Is’. This framework proposes a theoretical foundation for Enterprise Information Management (EIM) in World 2.0, the virtual, interactive, and hyperconnected platform that is developing around us. This framework should allow EIM to end the existing ‘information chaos’, to computerize information management, to improve the organizational ability to reach business objectives, and to define business strategies. The concepts of records and archives are crucial for those endeavours. The framework of the ‘Archive–as–Is’ is an organization-oriented archival theory, consisting of five components, namely: [1] four dimensions of information, [2] two archival principles, [3] five requirements of information accessibility, [4] the information value chain; and [5] organizational behaviour. In this paper, the subject of research is component 5 of the framework: organizational behaviour. The behaviour of employees (including archivists) is one of the most complicated aspects within organizations when creating, processing, managing, and preserving information, records, and archives. There is an almost universal ‘sound of silence’ on this subject in scholarly literature from archival and information studies, although it and its effects on information management are studied extensively in many other disciplines, such as psychology, sociology, anthropology, and organization science. In this paper, I study how and why employees behave as they do when they are working with records and archives, and how EIM is influenced by this behaviour.
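For readers who want the declarative structure at a glance, the five components enumerated above can be written down as a small lookup; this is a trivial Python sketch of my own, with the component names taken verbatim from the abstract.

```python
# The five components of the 'Archive-as-Is' framework, as enumerated in the abstract.
ARCHIVE_AS_IS_COMPONENTS = {
    1: "four dimensions of information",
    2: "two archival principles",
    3: "five requirements of information accessibility",
    4: "the information value chain",
    5: "organizational behaviour",  # the component studied in this paper
}


def describe(component: int) -> str:
    """Look up a component of the framework by its number."""
    return f"Component {component}: {ARCHIVE_AS_IS_COMPONENTS[component]}"


print(describe(5))
```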