Behavioral Sciences of Terrorism and Political Aggression


ISSN: 1943-4472 (Print) 1943-4480 (Online)

Countering terrorism, protecting critical national infrastructure and infrastructure assets through the use of novel behavioral biometrics

Priyanka Chaurasia, Pratheepan Yogarajah, Joan Condell, Girijesh Prasad, David McIlhatton & Rachel Monaghan

To cite this article: Priyanka Chaurasia, Pratheepan Yogarajah, Joan Condell, Girijesh Prasad, David McIlhatton & Rachel Monaghan (2016) Countering terrorism, protecting critical national infrastructure and infrastructure assets through the use of novel behavioral biometrics, Behavioral Sciences of Terrorism and Political Aggression, 8:3, 197-211, DOI: 10.1080/19434472.2016.1146788





Countering terrorism, protecting critical national infrastructure and infrastructure assets through the use of novel behavioral biometrics

Priyanka Chaurasiaa, Pratheepan Yogarajaha, Joan Condella, Girijesh Prasada, David McIlhattonb and Rachel Monaghanc

aSchool of Computing and Intelligent Systems, Ulster University, Londonderry, UK; bSchool of Built Environment, Ulster University, Newtownabbey, UK; cSchool of Criminology, Politics and Social Policy, Ulster University, Newtownabbey, UK


Since 9/11, there has been increased interest in the use of biometric technologies as a way to counter the threat of terrorism and to protect a nation’s critical national infrastructure and infrastructure assets. Biometric features can be used to verify an individual’s identity or to identify an individual. Such features can be divided into two types, namely physical and behavioral characteristics. Physical biometrics are concerned with direct measurements of an individual’s body and include fingerprints, facial geometry and iris patterns. In contrast, behavioral biometrics are concerned with indirect measurements of an individual, namely their patterns of behavior such as gait, voice and keystroke dynamics. Each individual thus has a set of unique behavioral biometric features, which they usually adhere to under normal conditions. Based on this concept, it is possible to create a distinct behavioral biometric profile for each individual that can distinguish one individual from another. In this paper, we propose a video-based security system using style of hand waving as a novel behavioral biometric feature for individual identification.

Received 3 December 2015

Keywords: behavioral biometrics, biometrics, counter-terrorism, critical national infrastructure, hand waving, infrastructure assets


The national infrastructure of any given country involves ‘those facilities, systems, sites and networks necessary for the functioning of the country and the delivery of the essential services upon which daily life ... depends’ (Cabinet Office, 2010, p. 7). Within the UK, the national infrastructure is categorized into 13 sectors, namely communications, emergency services, energy, financial services, food, government, health, transport, water, defense, civil nuclear, space and chemicals (Centre for the Protection of National Infrastructure [CPNI], 2016). Accordingly, within each of these 13 sectors there exist critical elements whose loss or compromise would detrimentally affect either the provision or reliability of essential services thereby leading to severe social or economic consequences including loss of life (CPNI, 2016; The Scottish Government, 2011). Such elements are termed critical national infrastructure (CNI) and individually infrastructure assets. Infrastructure assets may be physical such as sites and installations or electronic including networks or systems (Cabinet Office, 2010). The Department of Homeland Security (2015) identifies 16 critical infrastructure sectors, 12 of which correspond to the UK’s national infrastructure categorization plus commercial, critical manufacturing, dams and information technology.

CNI sectors and the infrastructure assets located within them have increasingly become an attractive target for terrorists. For example, power substations have been attacked near San Jose, California and in Ballyshannon, Ireland (CAIN, 2015; Ferris, 2014), water supplies have been adulterated in Turkey by the Kurdish Workers’ Party (PKK) and the Chingaza Dam bombed by the Revolutionary Armed Forces of Colombia (FARC) (for more details see Gleick, 2006). Consequently, infrastructure assets require sophisticated security mechanisms in order to mitigate the potential threat from terrorism. Such threats can manifest themselves in many different ways and, as a consequence, there is a significant need for robust solutions that can holistically work toward protecting the CNI. In addition to the attacks outlined above, potential threats may include vehicle- and person-borne improvised explosive devices (Young, 2015), cyber-attacks (Blakemore, 2012), chemical, biological, radioactive and nuclear incidents (Ellis, 2014) as well as insider threats (Zegart, 2015). The response to these threats has resulted in advancements in the sophistication of security procedures and systems, driven by rapid change in technology. Much of this technological change has focused on the protection of CNI from cyber-attacks (Cornish, Livingstone, Clemente, & Yorke, 2011), enhancing the physical protection of infrastructure assets through intelligent surveillance methods (Young, 2015), as well as the rapid evolution and utilization of biometric measures for personal identification and verification (Di Nardo, 2009). The focus of this research is on providing an innovative approach to the latter.

The utilization of biometrics as a method for countering the threat of terrorism has gained momentum in recent times, as biometrics can be used both for confirming the identity of a person (biometric authentication/verification) and for the identification of an individual (biometric identification) (Chaurasia et al., 2015). Biometric features can be classified as either physical biometrics or behavioral biometrics (Jain, Bolle, & Pankanti, 1999). Examples of physical biometrics include fingerprints, iris, hand geometry and facial patterns (Jain et al., 1999). In contrast, behavioral biometrics are concerned with indirect measurements of human features, analyzed by studying the behavior of an individual while performing certain tasks (Wang & Geng, 2010). Indeed, behavioral biometrics capture the body information of an individual and are often referred to as their body signature or body language (Yogarajah, Condell, & Prasad, 2010). Traditionally, most security systems have used physical biometrics (such as facial and fingerprint recognition) for the identification of individuals, despite numerous limitations that ultimately impact upon their effectiveness (Jain, Nandakumar, & Nagar, 2008). In the case of facial recognition, high-resolution images are required and, in many cases, these can only be obtained when the person is close to the camera and homogeneous lighting conditions are available (Pankanti & Jain, 2008). In addition, both facial and fingerprint recognition require the subject’s consent and co-operation for identification. These limitations are compounded by evidence in the current literature that physical biometrics can be manipulated and therefore rendered potentially redundant.
Examples of such manipulation are presented by Maltoni, Maio, Jain, and Prabhakar (2009) and Marasco and Ross (2014), where spoofing techniques such as latent fingerprints and liquid silicone rubber fingerprint molds are used to deceive the physical biometric systems in use.

In contrast, behavioral biometrics attempt to quantify behavioral traits exhibited by users and utilize the resulting feature profiles extracted by the biometric method to verify identity (Brömme, 2003). Traditionally, verification of individuals at many CNI sites was achieved through the use of passwords or tokens (e.g. swipe cards); however, these can be lost as well as potentially hacked and therefore have inherent limitations (Bonneau, Herley, van Oorschot, & Stajano, 2012). The main benefit of using a biometric authentication/verification feature instead of a physical token or password is that such biometrics cannot easily be lost, stolen, hacked, duplicated or shared. They are also resistant to social engineering attacks and, since users are required to be present to use a biometric feature, they can also prevent unethical employees from repudiating responsibility for their actions by claiming an imposter had logged on using their authentication credentials when they were not present. To use a biometric system, each user must first enroll by providing one or more samples of the biometric in question, such as a fingerprint, which are used to make a ‘template’ of that biometric (Jain et al., 1999). When a user attempts to authenticate, the biometric they provide is compared with their stored template. The system then assesses whether the sample is similar enough to the template to be judged a match.

Behavioral biometrics measure the distinct actions that humans take, which are generally difficult to imitate and thus resilient to falsification (Vacca, 2007). Therefore, recently deployed security systems operate using multi-biometric techniques and thus combine physical biometrics, for example, fingerprint and iris, with behavioral biometrics such as lip movement and voice for person identification (Pankanti & Jain, 2008). Currently, multi-biometric security systems perform well under the assumption that a person co-operates with the system by touching the device or appearing very close to the system if required (Yogarajah et al., 2010). Research has found that, in general, people are wary of these types of security systems and are often unwilling to touch the electronic equipment concerned directly; older people, for example, often find such technologies difficult to use (Augusto, 2005; Augusto & McCullagh, 2007; Hernández-Encuentra, Pousada, & Gómez-Zúñiga, 2009). This makes current multi-biometric security systems less practical (Pankanti & Jain, 2008). Additionally, when cameras alone are deployed for security, the only possibility left is to use the body language information that can be captured through video.

Human body language covers a range of sub-languages (Yogarajah, Prasad, & Condell, 2009). These sub-languages can be categorized as either small-scale or large-scale body language (Yogarajah et al., 2009). Small-scale body language includes hand gestures and facial expressions, whereas large-scale body language comprises complete human actions such as hand waving, walking and running. A video-based intelligent security system should be able to identify a person from their style of action or actions, which define their body language information. Examples of security systems based on large-scale body language are systems that can detect suspicious or unknown people entering restricted areas, or an interface robot that takes a known user’s commands and provides results.

In this paper, we propose a methodology for a security- and surveillance-based system intended to be installed in security critical areas, where access is granted to the user based on specific actions performed by that user. The work presented here can be summarized as increasing the robustness of a surveillance system for the scenario outlined in Figure 1 (although it is not limited to it), whereby an employee in a security critical area requires identification using a behavioral biometric in order to gain access. If the system requires more information to establish identity, it might ask the person to perform a single action, such as waving a hand, or to perform a sequence of actions for further authentication (Boulgouris, Hatzinakos, & Plataniotis, 2005).

For such scenarios, we propose a video-based security system that captures the style of a hand waving action as the body language information to be used in person identification. It is to be noted that, in comparison to our previous work in this area, where the person was identified for surveillance systems (Chaurasia et al., 2015), in this work we specifically address the scenario of security systems, where the user is required to perform specific actions to gain access to a secured area.

Action-based individual identification

Two different individuals may perform the same action in a broadly similar manner; however, the way in which each individual performs that action cannot be identical, as each individual has unique behavioral characteristics. A model built using the behavioral characteristic features of different individuals can be used to identify individuals. Based on this approach, known individuals can be distinguished from unknown individuals. The problem then becomes one of recognizing instances of a given behavioral pattern from the learned model.

The major phases required for carrying out identification are feature extraction, feature matching and decision-making. Ideally, the extracted feature set must be unique to each individual (i.e. extremely small inter-class similarity) and must also be invariant to small changes in the behavioral characteristic collected for each individual (i.e. extremely small intra-class variability). The extracted feature set is stored as a set of templates in a database. In the matching phase, the feature set extracted from the query sample is matched against the set of stored templates to find a similarity measure. In the decision phase, an individual is classified based on the degree of similarity between the query and the templates.
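As a sketch of these three phases, the following fragment enrolls two hypothetical feature templates and classifies a query. The Euclidean-distance matcher, the toy two-dimensional features and the threshold value are illustrative assumptions, not the specific choices made later in this paper:

```python
import numpy as np

def identify(query, templates, threshold):
    """Match a query feature vector against stored templates and decide.

    templates: dict mapping individual ID -> enrolled template vector.
    Returns the best-matching ID, or None if no template is close enough.
    """
    best_id, best_dist = None, np.inf
    for person_id, template in templates.items():
        dist = np.linalg.norm(query - template)  # matching phase
        if dist < best_dist:
            best_id, best_dist = person_id, dist
    # decision phase: accept only if the best match passes the threshold
    return best_id if best_dist <= threshold else None

# Enrollment: one template per known individual (hypothetical features).
templates = {"I1": np.array([1.0, 0.0]), "I2": np.array([0.0, 1.0])}
print(identify(np.array([0.9, 0.1]), templates, threshold=0.5))  # I1
```

A query far from every template (e.g. `[5.0, 5.0]`) is rejected as unknown, which is how known individuals are distinguished from unknown ones in the text above.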

Figure 1. Smart surveillance door.

Hand waving action

A measurable technique for discriminating between individuals based on their style of action is the analysis of how the individual ‘acts’ in specific situations. Based on this assumption, individual identification in this paper considers an individual’s style of hand waving through a set of appropriate action sequences. For the evaluation of this proposed approach, hand waving sequences from the publicly available KTH action data set (Schuldt, Laptev, & Caputo, 2004) are used. The data set consists of six human actions (boxing, running, jogging, walking, hand clapping and hand waving) captured for 25 participants in outdoor and indoor environments. The sequences were captured against homogeneous backgrounds at a frame rate of 25 frames per second (fps) using a static camera. Figure 2 shows a sample image frame from the KTH action data set for two individuals, I1 and I2, showing their hand waving action.

Figure 2. Sample of image sequences for the style of hand waving action for individuals, I1 and I2 from the KTH action data set.

The video sequences in the data set are of varying length, and therefore each video contains multiple periodic hand waving actions. In such scenarios it would be difficult to compare the hand waving action across different videos and individuals. Additionally, there are two situations that need to be considered: (1) the speed of the hand waving action performed by the same individual will vary between two different videos, and (2) the speed of the hand waving action will vary from one individual to another. As a consequence, in both situations the number of frames for each video sequence will differ, resulting in ambiguity. Therefore, as a benchmark for the similarity measure, it is necessary to determine one hand waving cycle in a given video and consider only those frames that capture one complete motion of the hand waving action. For the proposed system to work, we require a minimum of one complete hand waving cycle. Hence, an individual’s full hand waving sequence is determined by periodicity detection of the sequence. One complete periodic motion, once determined, gives a hand waving cycle.

To obtain a hand waving cycle, it is necessary to determine the number of frames in one complete hand waving action. There are two approaches for detecting this: (1) manual and (2) automated. In the manual approach, the periodicity is determined by an observer who cuts the given video down to one cycle of the hand waving sequence and counts the number of frames. However, this approach is time consuming and prone to errors, as it is observer dependent, and it will fail where the given data set is large. As a solution to this problem, in this paper we propose an automated approach to periodicity detection, described next.

Automated periodicity detection approach

One complete hand waving action starts with the individual’s hand in the down position; the hand then goes up vertically and finally returns to a downward position. In a given video this action is repeated several times. In the proposed periodicity detection algorithm, the image frame of the individual’s initial down position is used to calculate a reference binary image, R. Following the calculation of R, periodicity detection is based on a similarity measure function between the motion-detected binary consecutive image frames in the given sequence and R:

Si = R · Bi. (1)

To apply Equation (1) to the images, it is first necessary to convert the 2D image matrices to 1D vectors and compute the dot product. In Equation (1), Si represents the similarity measure values and Bi is the motion-detected binary image. The value of Bi is calculated as:

Bi(x, y) = 1 if |fi+1(x, y) − fi(x, y)| > α, and 0 otherwise, (2)

where Bi(x, y) is the binary image representing regions of motion, fi(x, y) and fi+1(x, y) are the pixel intensity values at location (x, y) in frames i and i + 1 respectively, α is the threshold value and i ∈ [fs, fe − 1], where fs and fe are the start (s) and end (e) frames, respectively, of the given hand waving action sequence. The reference binary image R is calculated using Equation (2) with i equal to s. Figures 3 and 4 show the similarity measure graphs for individuals I1 and I2. The x-axis and y-axis in Figures 3 and 4 represent the frame numbers and the corresponding similarity measure values with respect to the reference binary image R. Figures 3(a) and 4(a) show the corresponding gray scale image of R. The peak values in these graphs represent a higher similarity measure of the motion-detected binary consecutive image frames with respect to R.
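Equations (1) and (2) can be sketched as follows; the 4 × 4 frames and the value of the threshold α are synthetic illustrative choices:

```python
import numpy as np

def motion_binary(frame_a, frame_b, alpha):
    """Eq. (2): binary image marking pixels whose intensity changed by more than alpha."""
    return (np.abs(frame_b.astype(int) - frame_a.astype(int)) > alpha).astype(int)

def similarity(R, Bi):
    """Eq. (1): S_i = R . B_i, computed on the flattened (1D) binary images."""
    return int(R.ravel() @ Bi.ravel())

# Synthetic 4x4 frames: motion occurs in the top-left 2x2 block only.
f0 = np.zeros((4, 4), dtype=np.uint8)
f1 = f0.copy()
f1[:2, :2] = 200

R = motion_binary(f0, f1, alpha=50)  # reference binary image from the start frame pair
B = motion_binary(f0, f1, alpha=50)  # a later frame pair exhibiting the same motion
print(similarity(R, B))  # 4
```

The dot product counts overlapping motion pixels, so frames whose motion region matches the reference yield high Si values; these are the peaks visible in Figures 3 and 4.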

The periodicity is detected as the number of frames between the two highest similarity measure values (shown as the horizontal lines at the bottom of Figures 3 and 4, respectively).

The periodicity so calculated corresponds to half of one complete hand waving action, i.e. the hand moving from the bottom to the top. For a given hand waving video sequence, the average of the calculated periodicities is taken. The automated periodicity detection approach was compared with a manual approach of detecting the periodicity. Table 1 shows sample results as the number of frames obtained for automated periodicity detection in outdoor and indoor environments, compared with the manual approach of periodicity detection. Results obtained for the proposed periodicity detection algorithm are comparable with the manual approach (Yogarajah et al., 2009). In most cases the proposed algorithm gave accurate and reliable results. Therefore, the automated periodicity detection algorithm can be applied in cases where the given data set is large and the manual approach is not suitable. Even though the automated approach is particularly aimed at hand waving action sequences, it could also be extended to other types of action sequences where cycle detection of an action is required.
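The peak-based reading of the periodicity can be illustrated on a synthetic similarity curve (real Si curves are noisier and would typically need smoothing or proper peak detection, which is omitted here):

```python
import numpy as np

def periodicity(similarities):
    """Frame count between the two highest similarity-measure values."""
    order = np.argsort(similarities)[::-1]  # frame indices sorted by descending S_i
    top_two = sorted(order[:2])             # positions of the two highest peaks
    return top_two[1] - top_two[0]

# Synthetic S_i curve with clear peaks at frames 3 and 9.
s = np.array([0, 2, 5, 9, 5, 2, 0, 2, 5, 9.5, 5, 2])
print(periodicity(s))  # 6
```

Here the two highest values sit six frames apart, so six frames make up one half-cycle (bottom to top); per the text, the average over all detected half-cycles in a video would be used in practice.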

Figure 3. For Individual I1 (a) represents the corresponding gray scale image of R and (b) represents the periodicity measure graph.

Figure 4. For Individual I2 (a) represents the corresponding gray scale image of R and (b) represents the periodicity measure graph.

Motion feature extraction

The number of frames calculated in the previous section defines the start and end positions of the hand waving action sequence. The complete motion information can now be represented as a single image template using the motion history image (MHI) approach. The MHI is a temporal template that represents movement with respect to time (Bobick & Davis, 2001). Figure 5(b) illustrates the MHI technique used for extracting the style of hand waving action sequence for individuals I1 and I2.
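A minimal sketch of the MHI update in the spirit of Bobick and Davis (2001): moving pixels are set to a maximum timestamp value tau, while all other pixels decay toward zero, so recent motion appears brightest. The 3 × 3 motion masks and the value of tau are synthetic simplifications for illustration:

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau):
    """One MHI step: set moving pixels to tau, decay the rest by one."""
    mhi = np.maximum(mhi - 1, 0)   # older motion fades with time
    mhi[motion_mask > 0] = tau     # current motion gets full intensity
    return mhi

tau = 3
mhi = np.zeros((3, 3), dtype=int)
# Synthetic binary motion masks: a diagonal band sweeping to the right.
masks = [np.eye(3, k=k, dtype=int) for k in (0, 1, 2)]
for mask in masks:
    mhi = update_mhi(mhi, mask, tau)
print(mhi)  # most recent motion (top-right pixel) is brightest
```

The resulting single template encodes both where and in what order motion occurred, which is what makes the MHI usable as a style feature for the hand waving sequence.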

In the work presented here, we design a system for the scenario represented in Figure 1, where a single camera at a fixed location captures an individual’s actions. A more robust design could be achieved by considering multiple cameras, where the user’s actions are captured from different views. Nevertheless, in the current work we assume that a single camera is used to capture individual actions; the case of multiple cameras is a possible direction for future work in this area. Additionally, in an ideal scenario, the individual is positioned correctly in relation to the camera and the captured view is invariant to geometrical variations such as translation, scale and rotation. In practice, however, the same individual may pose differently in relation to the camera position, leading to corresponding variations in the captured view. Hence, for the same individual, the captured sequences are sensitive to geometrical variations.

Figure 5. MHI for individuals I1 and I2.

The extracted MHI is manually cropped to a rectangular area that contains only the hand waving action. Cropping only the area of interest eliminates the translational effect in a given template. Due to variation in the size of the individual and the zoom of the camera, the cropped rectangular area is further resized to a block of height H = 60 and width W = 80 pixels using bilinear interpolation (Leibe, Seemann, & Schiele, 2005). The aim of resizing the extracted image is to eliminate any scaling effect. The cropped and resized MHI is the final image from the feature extraction phase.
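The crop-and-resize normalization can be sketched as follows. This is a plain-NumPy bilinear interpolation written for self-containment, not the specific implementation referenced from Leibe, Seemann, and Schiele (2005), and the cropped input size is an arbitrary example:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2D image to (out_h, out_w) with bilinear interpolation."""
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)          # target row coordinates
    xs = np.linspace(0, in_w - 1, out_w)          # target column coordinates
    y0 = np.floor(ys).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]                       # fractional row weights
    wx = (xs - x0)[None, :]                       # fractional column weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# Normalize a cropped MHI of arbitrary size to the paper's fixed 60x80 block.
cropped = np.random.rand(45, 70)
normalized = bilinear_resize(cropped, 60, 80)
print(normalized.shape)  # (60, 80)
```

Whatever the cropped rectangle’s original size, every template ends up on the same 60 × 80 grid, which removes the scaling effect before matching.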

Handling variations

The extracted MHI images may have slight to significant variations for the same action carried out by the same individual at different points of time due to an individual’s mood changes. In the work described here, it is assumed that the individual is facing the front camera and there may be only slight rotational variations. To handle scenarios where there are slight rotational variations, a further level of feature extraction and edge detection is applied to both the training and test templates; followed by dilation of the test templates.

Edge detection is applied to filter out irrelevant information and significantly reduce the amount of data, while preserving the essential structural properties of a given image (Bertenthal & Pinto, 1993). Specifically, Canny edge detection (Canny, 1986) is applied to the given cropped and resized MHI images. In the training phase, after detecting the edges, an ideal shape, the training-template, is calculated as the mean of the set of cropped and resized MHI images of an individual. The training-template serves as a reference or model for a particular individual.

In the testing phase, the given cropped and resized MHI image is dilated and referred to as a test-template. Dilation is applied to accommodate slight rotational variations occurring in the given test-templates, thereby allowing for small intra-class variations (Yogarajah et al., 2009). Figure 6 illustrates the training-template and dilated test-template samples.
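The template preparation can be sketched as below. For self-containment, a simple gradient-magnitude threshold stands in for the Canny detector, and dilation is implemented by OR-ing shifted copies of the binary image (with wrap-around at the borders, tolerable for images padded with background); both are illustrative simplifications, not the paper’s exact pipeline:

```python
import numpy as np

def simple_edges(img, thresh):
    """Gradient-magnitude edge map (a stand-in for Canny in this sketch)."""
    gy, gx = np.gradient(img.astype(float))
    return (np.hypot(gx, gy) > thresh).astype(int)

def dilate(binary, r=1):
    """Binary dilation with a (2r+1)x(2r+1) square structuring element."""
    out = np.zeros_like(binary)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(np.roll(binary, dy, axis=0), dx, axis=1)
    return out

# Synthetic MHIs for one individual: a bright 2x2 block on a padded background.
samples = [np.pad(np.ones((2, 2)), 2) for _ in range(3)]
# Training-template: mean of the edge maps of the individual's samples.
training_template = np.mean([simple_edges(s, 0.2) for s in samples], axis=0)
# Test-template: edge map of a new sample, dilated to absorb slight rotations.
test_template = dilate(simple_edges(samples[0], 0.2))
```

Because dilation only grows the edge set, a test-template still covers its original edges after small pose perturbations, which is exactly the tolerance the text describes.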

Following the extraction of the training-template and the dilated test-template in the feature extraction, the next stage is the matching phase. In the matching phase the given dilated test-template is matched against the set of stored training-templates.

Matching and decision phase

The Hausdorff distance (HD) method is deployed as the matching method to calculate the similarity measure between the test-template and the training-template. The HD measures how far two given sets of data points are from each other (Felzenszwalb, 2001). In computing the template matching between the training-template and the test-template, the data points in the dilated version of the test-template should be at most a distance d away from the training-template. The training-template and dilated test-template can be represented as n-dimensional vectors, where n is equal to the size of the cropped and resized MHI, i.e. H × W pixels. Let vectraining-template be the vector version of the training-template and vecdilated_test-template the vector version of the dilated test-template. The dot product vectraining-template · vecdilated_test-template then gives the number of points from the dilated test-template contained in the training-template. The decision function, based on the HD as a linear threshold function, is given as:

hK(training-template, test-template) ≤ d ⇔ vectraining-template · vecdilated_test-template ≥ K. (3)

The greater the value of K, the better the similarity between the test-template and the training-template must be for a match. The value of K is determined empirically through experimental testing.
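Equation (3) thus reduces to a dot product of the flattened binary templates compared against K, which can be sketched as follows (the template contents and the value of K here are illustrative, not the K = 280 chosen experimentally below):

```python
import numpy as np

def hd_decision(training_template, dilated_test_template, K):
    """Eq. (3) decision: count the points of the dilated test-template
    contained in the training-template and accept when the count reaches K."""
    score = int(training_template.ravel() @ dilated_test_template.ravel())
    return score, score >= K

# Binary 60x80 templates: a 10x10 edge region, and a slightly shifted copy.
train = np.zeros((60, 80), dtype=int)
train[10:20, 10:20] = 1
test = np.zeros((60, 80), dtype=int)
test[12:22, 10:20] = 1

score, match = hd_decision(train, test, K=70)
print(score, match)  # 80 True
```

The two-pixel vertical shift leaves 80 of the 100 edge points overlapping, which still clears the illustrative threshold; a larger shift, or a different individual’s template, would drop the score below K and be rejected.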

Results and discussion

The evaluation phase consists of two sets of experiments. In Experiment 1, the indoor sequences are divided into two equal halves; the first half is used for the training phase and the second half for the testing phase. In Experiment 2, the indoor sequences are used in the training phase and the outdoor sequences in the testing phase. Figure 7 shows a sample training-template.

Figure 6. (a) The training-template and (b) the dilated test-template.

Figure 7. Sample training-template.

Table 2 shows the similarity measure values, hK, between each individual’s test-template matched against all the training-templates in Experiment 1. Table 3 shows the corresponding similarity measure values for Experiment 2. Tables 2 and 3 both have a maximum value in each row, indicating a higher similarity between the given test-template and the appropriate training-template. It can be noted from Tables 2 and 3 that the diagonal values represent the similarity measure value for a correct person identification. By considering all the diagonal values, the suitable value for K is selected as 280 based on Experiments 1 and 2. Based on the selected value of K, a recognition rate is calculated for each individual’s identification.

A total of 1250 samples were used to calculate the recognition rate, that is, 625 samples from each table. In a few cases, more than one similarity measure value was greater than 280, resulting in a false classification of individuals. As can be seen in the case of columns I4 and I5 in Tables 2 and 3, five values are greater than or equal to 280 in addition to the diagonal values. These values therefore produce ‘false recognitions’. In the conducted experiments a total of 122 samples were identified as false recognitions. The recognition rate is calculated as follows:

Rate = (PN / TS) × 100, (4)

where PN is the number of positive or negative (false) samples and TS is the total number of samples. The conducted experiments gave positive and false recognition rates of 90.24% and 9.76%, respectively (Yogarajah et al., 2010).
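The reported figures can be checked with a few lines; the 122 false recognitions out of 1250 samples are the counts given above:

```python
def recognition_rates(false_count, total):
    """Positive and false recognition rates, as percentages of total samples."""
    false_rate = 100.0 * false_count / total
    return 100.0 - false_rate, false_rate

positive, false_ = recognition_rates(122, 1250)
print(positive, false_)  # 90.24 9.76
```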

The terrorist threat landscape has evolved considerably in recent times and has seen a transformative shift away from traditional attack methodologies toward low-sophistication, high-impact and high-reputational-damage attacks. It has also seen the emergence of the insider threat as a complex security challenge that must be understood, countered and mitigated. As a consequence, there is a fundamental need to ensure that the security mechanisms put in place to protect CNI and infrastructure assets are adequate, interoperable with other security systems and enhance the security and safety of those using and managing them. The results obtained through the novel methodological approach presented in this research demonstrate its potential as a new behavioral biometric verification and identification technique for augmenting security and safety. While the results suggest that the particular behavioral biometric type utilized in this research enabled identification with high levels of success, in practice the hand waving action should be adopted as part of a multi-layered strategic and operational security policy.

Countering and mitigating such threats depends on procedures and protocols that function as part of an inter-connected security approach to minimize the threat and risk to CNI and infrastructure assets. In many instances, though, these procedures and protocols are approached as individual functions (e.g. searching and identification) and rely on separate technology that makes the security process more intrusive and overt, as well as time consuming. There is significant potential to incorporate this type of behavioral biometric as a second layer of authentication/verification at infrastructure asset sites in order to alleviate any potential exploitation of physical biometrics by insiders and/or attackers. One mechanism that could both achieve this and enhance the security procedures of the site would be alignment or integration with full body scanners. In this case, when the persons being searched within the full body scanners adopt the stance required by raising their arms in a predefined motion, they could also be verified or identified.

Conclusion and future work

In this paper, we proposed a video-based security system using style of hand waving as a novel behavioral biometric feature for individual identification. The style of the hand waving action sequence is the information extracted from the video sequence for identification. An automated periodicity detection algorithm was proposed to detect the number of frames in one complete hand waving action. The proposed periodicity detection is comparable with the manual approach of detecting the periodicity and gave accurate results. The proposed periodicity detection algorithm can also be applied to larger data sets, and the results obtained indicate that the style of hand waving action can be used to identify individuals. The developed methodologies could be integrated as new behavioral biometric features in multi-biometric person recognition systems for identification, screening, verification and detection. Such systems can be used to counter the threat of terrorism and to protect a nation’s CNI and infrastructure assets. In future work, we would like to incorporate the style of multiple actions for person identification in a video sequence for security systems.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes on contributors

Priyanka Chaurasia is Research Associate in the Ambient Intelligence Team based at the Intelligence Systems Research Centre at Ulster University. Her research is concerned with the evolution of systems that work by learning individuals’ behaviors and exploring patterns in them. Her research interests include artificial intelligence, smart home environments, assistive living, biometrics, signature identification and pattern recognition.

Pratheepan Yogarajah is a Lecturer in Computing Science at Ulster University. His research interests include computer vision, machine learning, image processing, steganography and digital watermarking and biometrics.

Joan Condell is a Senior Lecturer in Computer Science at Ulster University. Her research interests include steganography, image processing, biometrics and vision for robotics and multimedia.

Girijesh Prasad currently holds the post of Professor of Intelligent Systems in the School of Computing and Intelligent Systems at Ulster University. He is an executive member of the Intelligence Systems Research Centre and leads the Brain-Computer Interface and Assistive Technology (BCIAT) research team.

David McIlhatton works on security research and development within the Built Environment Research Institute at Ulster University with research interests in counter-terrorism, security and protection.

Rachel Monaghan is a Senior Lecturer in Criminology at Ulster University. Her research interests focus on the area of political violence, informal justice, single-issue terrorism, counter-terrorism and crime and insecurity.

