Search results for: generating sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2169

1959 Resilience-Vulnerability Interaction in the Context of Disasters and Complexity: Study Case in the Coastal Plain of Gulf of Mexico

Authors: Cesar Vazquez-Gonzalez, Sophie Avila-Foucat, Leonardo Ortiz-Lozano, Patricia Moreno-Casasola, Alejandro Granados-Barba

Abstract:

In the last twenty years, academic and scientific literature has focused on understanding the processes and factors behind the vulnerability and resilience of coastal social-ecological systems. Some scholars argue that resilience and vulnerability are isolated concepts due to their epistemological origin, while others note the existence of a strong resilience-vulnerability relationship. Here we present an ordinal logistic regression model based on an analytical framework of the dynamic resilience-vulnerability interaction along the adaptive cycle of complex systems and the phases of the disaster process (during, recovery, and learning). In this way, we demonstrate that 1) during the disturbance, absorptive capacity (resilience as a core of attributes) and external response capacity explain the probability that household capitals diminish the damage, and exposure sets the thresholds on the amount of disturbance that households can absorb, 2) at recovery, absorptive capacity and external response capacity explain the probability that household capitals recover faster (resilience as an outcome) from damage, and 3) at learning, adaptive capacity (resilience as a core of attributes) explains the probability of household adaptation measures based on the enhancement of physical capital. As a result, during the disturbance phase, exposure has the greatest weight in the probability of capital damage, and households with absorptive and external response capacity elements absorbed the impact of floods in comparison with households without these elements. At the recovery phase, households with absorptive and external response capacity showed a faster recovery of their capital; however, the damage sets the thresholds of recovery time. More importantly, diversity in financial capital increases the probability of recovering other capitals, but it becomes a liability, increasing the probability that household finances take longer to recover.
At the learning-reorganizing phase, adaptation (modifications to the house) increases the probability of having less damage to physical capital; however, it is not very relevant. In conclusion, resilience is an outcome but also a core of attributes that interacts with vulnerability along the adaptive cycle and disaster process phases. Absorptive capacity can diminish the damage caused by floods; however, when exposure overcomes thresholds, both absorptive and external response capacity are not enough. In the same way, absorptive and external response capacity diminish the recovery time of capital, but the damage sets the thresholds beyond which households are not capable of recovering their capital.
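As a rough illustration of how an ordinal logistic (proportional-odds) model of the kind used above turns household predictors into probabilities of ordered damage categories, consider the sketch below. The coefficients, cutpoints, and predictor values are hypothetical, not the study's estimates:

```python
import math

def cumulative_probs(x, coefs, cutpoints):
    """Proportional-odds model: P(Y <= k | x) = logistic(c_k - x.b).

    Returns the probability of each ordered damage category given
    household predictors (e.g. exposure, absorptive capacity scores).
    """
    eta = sum(b * xi for b, xi in zip(coefs, x))
    logistic = lambda t: 1.0 / (1.0 + math.exp(-t))
    cum = [logistic(c - eta) for c in cutpoints] + [1.0]
    # category probabilities are differences of adjacent cumulative probs
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# hypothetical: exposure raises damage odds, absorptive capacity lowers them
probs = cumulative_probs(x=[1.2, 0.5], coefs=[0.9, -0.7], cutpoints=[-0.5, 1.0])
```

The three returned values are the probabilities of low, medium, and high damage for that household profile; they always sum to one by construction.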

Keywords: absorptive capacity, adaptive capacity, capital, floods, recovery-learning, social-ecological systems

Procedia PDF Downloads 106
1958 Clustering-Based Computational Workload Minimization in Ontology Matching

Authors: Mansir Abubakar, Hazlina Hamdan, Norwati Mustapha, Teh Noranis Mohd Aris

Abstract:

In order to build a matching pattern for each class correspondence of an ontology, it is required to specify a set of attribute correspondences across two corresponding classes by clustering. Clustering reduces the size of the potential attribute correspondences considered in the matching activity, which significantly reduces the computational workload; otherwise, all attributes of a class would have to be compared with all attributes of the corresponding class. Most existing ontology matching approaches lack scalable attribute discovery methods, such as cluster-based attribute searching. This problem makes the ontology matching activity computationally expensive. It is therefore vital in ontology matching to design a scalable element or attribute correspondence discovery method that reduces the size of the potential element correspondences during mapping, thereby reducing the computational workload of the matching process as a whole. The objectives of this work are 1) to design a clustering method for discovering similar attribute correspondences and relationships between ontologies, and 2) to discover element correspondences by classifying the elements of each class based on the elements' value features using the K-medoids clustering technique. Discovering attribute correspondences is essential for comparing instances when matching two ontologies. During the matching process, any two instances across two different data sets should be compared on their attribute values, so that they can be regarded as the same or not. Intuitively, any two instances that come from classes across which there is a class correspondence are likely to be identical to each other. Besides, any two instances that hold more similar attribute values are more likely to be matched than ones with less similar attribute values. Most of the time, similar attribute values exist in the two instances across which there is an attribute correspondence.
This work presents how to classify the attributes of each class with K-medoids clustering and then map the clustered groups by their statistical value features. We also show how to map the attributes of a clustered group to the attributes of the mapped clustered group, generating a set of potential attribute correspondences that is applied to generate a matching pattern. The K-medoids clustering phase largely reduces the number of non-corresponding attribute pairs used for comparing instances, as only attribute pairs whose coverage probability reaches 100% and attributes above the specified threshold are considered potential attributes for a matching. Using clustering reduces the size of the potential element correspondences considered during the mapping activity, which in turn reduces the computational workload significantly. Otherwise, all elements of a class in the source ontology would have to be compared with all elements of the corresponding classes in the target ontology. K-medoids can effectively cluster the attributes of each class, so that a proportion of non-corresponding attribute pairs is not considered when constructing the matching pattern.
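A minimal sketch of the K-medoids step described above, assuming attributes have already been encoded as numeric feature vectors; the points, distance function, and deterministic initialization are illustrative simplifications, not the paper's method:

```python
import math

def euclid(p, q):
    return math.dist(p, q)

def assign(points, medoids, dist):
    """Attach every point to its nearest medoid."""
    clusters = {m: [] for m in medoids}
    for p in points:
        clusters[min(medoids, key=lambda m: dist(p, m))].append(p)
    return clusters

def k_medoids(points, k, dist, iters=20):
    """Plain alternating K-medoids: assign points to the nearest medoid,
    then re-pick each cluster's medoid as the member with the smallest
    total in-cluster distance. First-k initialization for reproducibility."""
    medoids = list(points[:k])
    for _ in range(iters):
        clusters = assign(points, medoids, dist)
        new_medoids = [min(members or [m],
                           key=lambda c: sum(dist(c, q) for q in members))
                       for m, members in clusters.items()]
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return medoids, assign(points, medoids, dist)

# two well-separated groups of attribute feature vectors (invented)
attrs = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
medoids, clusters = k_medoids(attrs, k=2, dist=euclid)
```

Because medoids are actual data points (unlike K-means centroids), the method works with any pairwise distance, which is convenient when attribute features are not purely numeric.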

Keywords: attribute correspondence, clustering, computational workload, k-medoids clustering, ontology matching

Procedia PDF Downloads 221
1957 C2N2 Adsorption on the Surface of a BN Nanosheet: A DFT Study

Authors: Maziar Noei

Abstract:

Calculations showed that when the nanosheet is doped with Si, the adsorption energy is about -85.62 to -87.43 kcal/mol and the HOMO/LUMO energy gap (Eg) is reduced significantly. The boron nitride nanosheet is a suitable adsorbent for cyanogen and can be used in cyanogen separation processes. It seems that the boron nitride nanosheet (BNNS) is a suitable semiconductor after doping. Since the doped BNNS directly generates an electrical signal in the presence of cyanogen (C2N2), it can potentially be used in cyanogen sensors.
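The two quantities reported above follow standard definitions; a minimal sketch with hypothetical energies (not the study's DFT outputs):

```python
def adsorption_energy(e_complex, e_sheet, e_molecule):
    """E_ads = E(sheet + molecule) - E(sheet) - E(molecule);
    a negative value indicates favourable adsorption."""
    return e_complex - e_sheet - e_molecule

def homo_lumo_gap(e_homo, e_lumo):
    """Eg = E_LUMO - E_HOMO."""
    return e_lumo - e_homo

# hypothetical total energies (kcal/mol) and orbital energies (eV)
e_ads = adsorption_energy(-1085.62, -600.00, -400.00)  # -85.62 kcal/mol
gap = homo_lumo_gap(-6.2, -1.6)                        # 4.6 eV
```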

Keywords: nanosheet, DFT, cyanogen, sensors

Procedia PDF Downloads 262
1956 Variational Explanation Generator: Generating Explanation for Natural Language Inference Using Variational Auto-Encoder

Authors: Zhen Cheng, Xinyu Dai, Shujian Huang, Jiajun Chen

Abstract:

Recently, explanatory natural language inference has attracted much attention for the interpretability of logic relationship prediction; the task is also known as explanation generation for Natural Language Inference (NLI). Existing explanation generators based on the discriminative Encoder-Decoder architecture have achieved noticeable results. However, we find that these discriminative generators usually produce explanations with correct evidence but incorrect logic semantics. This is because logic information is implicitly encoded in the premise-hypothesis pairs and is difficult to model. In fact, the same logic information exists in both the premise-hypothesis pair and the explanation, and logic information that is explicitly contained in the target explanation is easy to extract. Hence we assume that there exists a latent space of logic information while generating explanations. Specifically, we propose a generative model called Variational Explanation Generator (VariationalEG) with a latent variable to model this space. Trained with the guide of explicit logic information in target explanations, the latent variable in VariationalEG can capture the implicit logic information in premise-hypothesis pairs effectively. Additionally, to tackle the problem of posterior collapse while training VariationalEG, we propose a simple yet effective approach called Logic Supervision on the latent variable to force it to encode logic information. Experiments on the explanation generation benchmark, explanation-Stanford Natural Language Inference (e-SNLI), demonstrate that the proposed VariationalEG achieves significant improvement compared to previous studies and yields a state-of-the-art result. Furthermore, we analyze the generated explanations to demonstrate the effect of the latent variable.
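Two standard building blocks of any variational auto-encoder like the one above, the reparameterization trick and the KL regularizer of a diagonal-Gaussian posterior, can be sketched as follows; the latent dimensions and values are illustrative, not VariationalEG's architecture:

```python
import math
import random

def reparameterize(mu, log_var, rng=random.Random(0)):
    """z = mu + sigma * eps with eps ~ N(0, 1): sampling the latent
    variable in a way that lets gradients flow through mu and log_var."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior;
    this is the term whose collapse to zero signals posterior collapse."""
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, log_var))

z = reparameterize([0.5, -0.2], [0.0, 0.0])
kl = kl_to_standard_normal([0.5, -0.2], [0.0, 0.0])
```

A KL term that stays at exactly zero means the posterior equals the prior and the latent variable carries no information, which is the failure mode the paper's Logic Supervision is designed to prevent.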

Keywords: natural language inference, explanation generation, variational auto-encoder, generative model

Procedia PDF Downloads 122
1955 Effects of Handgrip Isometric Training in Blood Pressure of Patients with Peripheral Artery Disease

Authors: Raphael M. Ritti-Dias, Marilia A. Correia, Wagner J. R. Domingues, Aline C. Palmeira, Paulo Longano, Nelson Wolosker, Lauro C. Vianna, Gabriel G. Cucato

Abstract:

Patients with peripheral arterial disease (PAD) have a high prevalence of hypertension, which contributes to a high risk of acute cardiovascular events and cardiovascular mortality. Strategies to reduce the cardiovascular risk of these patients are needed. Meta-analyses have shown that isometric handgrip training promotes reductions in clinical blood pressure in normotensive, pre-hypertensive and hypertensive individuals. However, the effect of this exercise training on other cardiovascular function indicators in PAD patients remains unknown. Thus, the aim of this study was to analyze the effects of isometric handgrip training on blood pressure in patients with PAD. In this clinical trial, 28 patients were randomly allocated into two groups: isometric handgrip training (HG) and control (CG). The HG group conducted unilateral handgrip training three days per week (four sets of two minutes, at 30% of maximum voluntary contraction, with an interval of four minutes between sets). The CG was encouraged to increase their physical activity levels. At baseline and after eight weeks, blood pressure and heart rate were obtained. A two-way ANOVA for repeated measures, with group (HG and CG) and time (pre- and post-intervention) as factors, was performed. After 8 weeks of training, there were no significant changes in systolic blood pressure (HG pre 141 ± 24.0 mmHg vs. HG post 142 ± 22.0 mmHg; CG pre 140 ± 22.1 mmHg vs. CG post 146 ± 16.2 mmHg; P=0.18), diastolic blood pressure (HG pre 74 ± 10.4 mmHg vs. HG post 74 ± 11.9 mmHg; CG pre 72 ± 6.9 mmHg vs. CG post 74 ± 8.0 mmHg; P=0.22) or heart rate (HG pre 61 ± 10.5 bpm vs. HG post 62 ± 8.0 bpm; CG pre 64 ± 11.8 bpm vs. CG post 65 ± 13.6 bpm; P=0.81). In conclusion, our preliminary data indicate that isometric handgrip training did not modify blood pressure or heart rate in patients with PAD.
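The group-by-time comparison above can be approximated in a simplified sketch by testing post-minus-pre change scores between groups (a common shortcut for the interaction term of a 2x2 mixed ANOVA). The change-score values below are invented for illustration, not the study's data:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples, here the
    per-subject post-minus-pre change scores of the two groups."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# hypothetical systolic blood-pressure change scores (mmHg)
hg_change = [1, -2, 0, 3, -1, 2, 0]
cg_change = [6, 4, 7, 5, 8, 6, 5]
t = welch_t(hg_change, cg_change)
```

A t near zero, as in the study's non-significant results, would indicate that both groups changed by about the same amount over the eight weeks.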

Keywords: blood pressure, exercise, isometric, peripheral artery disease

Procedia PDF Downloads 308
1954 Supervised Machine Learning Approach for Studying the Effect of Different Joint Sets on Stability of Mine Pit Slopes Under the Presence of Different External Factors

Authors: Sudhir Kumar Singh, Debashish Chakravarty

Abstract:

Slope stability analysis is an important aspect of geotechnical engineering. It is also important from a safety and economic point of view, as any slope failure leads to loss of valuable lives and damage to property worth millions. This paper aims at mitigating the risk of slope failure by studying the effect of different joint sets on the stability of mine pit slopes under the influence of various external factors, namely degree of saturation, rainfall intensity, and seismic coefficients. A supervised machine learning approach has been utilized for making accurate and reliable predictions regarding the stability of slopes based on the value of the Factor of Safety. Numerous cases have been studied for analyzing the stability of slopes using the popular Finite Element Method, and the data thus obtained have been used as training data for the supervised machine learning models. The input data have been trained on different supervised machine learning models, namely Random Forest, Decision Tree, Support Vector Machine, and XGBoost. Distinct test data not present in the training data have been used for measuring the performance and accuracy of the different models. Although all models performed well on the test dataset, Random Forest stands out from the others due to its high accuracy of greater than 95%, providing a valuable tool at our disposal that is neither computationally expensive nor time consuming and is in good accordance with the numerical analysis results.
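As a rough stand-in for the workflow described above, the sketch below bags decision stumps (one-split trees) with majority voting; a real Random Forest grows full trees with random feature subsets, and the saturation/seismic values and failure labels here are invented, not the paper's FEM data:

```python
import random

def fit_stump(X, y):
    """Best single-feature threshold split by training accuracy."""
    best = None
    for j in range(len(X[0])):
        for thr in sorted({row[j] for row in X}):
            for left in (0, 1):
                preds = [left if row[j] <= thr else 1 - left for row in X]
                acc = sum(p == t for p, t in zip(preds, y)) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, j, thr, left)
    return best[1:]  # (feature index, threshold, label if below threshold)

def predict_stump(stump, row):
    j, thr, left = stump
    return left if row[j] <= thr else 1 - left

def random_forest(X, y, n_trees=25, seed=0):
    """Bootstrap-aggregated stumps; each tree sees a resampled dataset."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        stumps.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return stumps

def forest_predict(stumps, row):
    votes = sum(predict_stump(s, row) for s in stumps)
    return int(votes * 2 >= len(stumps))

# hypothetical cases: [degree of saturation, seismic coefficient] -> 1 if FoS < 1
X = [[0.1, 0.05], [0.2, 0.05], [0.3, 0.10], [0.8, 0.20], [0.9, 0.25], [0.7, 0.30]]
y = [0, 0, 0, 1, 1, 1]
forest = random_forest(X, y)
```

The ensemble vote smooths over the instability of any single bootstrap sample, which is the property that makes Random Forest robust on small geotechnical datasets.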

Keywords: finite element method, geotechnical engineering, machine learning, slope stability

Procedia PDF Downloads 71
1953 The Effect of Hypertrophy Strength Training Using Traditional Set vs. Cluster Set on Maximum Strength and Sprinting Speed

Authors: Bjornar Kjellstadli, Shaher A. I. Shalfawi

Abstract:

The aim of this study was to investigate the effect of the cluster-set strength training method compared to the traditional-set method on 30 m sprinting time and maximum strength in squats and bench press. Thirteen physical education students, 7 males and 6 females between 19 and 28 years old, were recruited. The students were randomly divided into three groups. The traditional set group (TSG) consisted of 2 males and 2 females aged (±SD) 22.3 ± 1.5 years, body mass 79.2 ± 15.4 kg and height 177.5 ± 11.3 cm. The cluster set group (CSG) consisted of 3 males and 2 females aged 22.4 ± 3.29 years, body mass 81.0 ± 24.0 kg and height 179.2 ± 11.8 cm, and the control group (CG) consisted of 2 males and 2 females aged 21.5 ± 2.4 years, body mass 82.1 ± 17.4 kg and height 175.5 ± 6.7 cm. The intervention consisted of performing squats and bench press at 70% of 1RM (twice a week) for 8 weeks, using 10 repetitions and 4 sets. Two types of strength-training methods were used: cluster sets (CS), where the participants (CSG) performed 2 reps 5 times with 10 s recovery between reps and 50 s recovery between sets, and traditional sets (TS), where the participants (TSG) performed 10 reps each set with 90 s recovery between sets. The pre-tests and post-tests conducted were 1RM in both squats and bench press, and 10 and 30 m sprint time. The 1RM tests were performed with an Eleiko XF barbell (20 kg), Eleiko weight plates, and a rack and bench from Hammer Strength. Sprint time was measured with the Brower Speed Trap II timing system (Brower Timing Systems, Utah, USA). The participants received an individualized training program based on the pre-test of the 1RM. In addition, a mid-term test of 1RM was carried out to adjust training intensity. Each training session was supervised by the researchers. Beast sensors (Milano, Italy) were also used to monitor and quantify the training load for the participants.
All groups had a statistically significant improvement in bench press 1RM (TSG from 56.3 ± 28.9 to 66 ± 28.5 kg; CSG from 69.8 ± 33.5 to 77.2 ± 34.1 kg; CG from 67.8 ± 26.6 to 72.2 ± 29.1 kg), whereas only the TSG (from 84.3 ± 26.8 to 114.3 ± 26.5 kg) and CSG (from 100.4 ± 33.9 to 129 ± 35.1 kg) had a statistically significant improvement in squat 1RM (P < 0.05). However, a between-groups examination revealed no marked differences in 1RM squat performance between TSG and CSG (P > 0.05), and both groups showed marked improvements compared to the CG (P < 0.05). On the other hand, no differences between groups were observed in bench press 1RM. The within-group results indicate that none of the groups had any marked improvement over the distances 0-10 m and 10-30 m, except the CSG, which had a notable improvement over the distance 10-30 m (-0.07 s; P < 0.05). Furthermore, no differences in sprinting abilities were observed between groups. The results from this investigation indicate that traditional-set strength training at 70% of 1RM gave results close to cluster-set strength training at the same intensity. However, the results indicate that cluster sets had an effect on flying time (10-30 m), suggesting that the velocity at which those repetitions were performed could be the explanatory factor for this improvement.

Keywords: physical performance, 1RM, pushing velocity, velocity based training

Procedia PDF Downloads 139
1952 The Effect of Eight-Week Medium Intensity Interval Training and Curcumin Intake on ICAM-1 and VCAM-1 Levels in Menopausal Fat Rats

Authors: Abdolrasoul Daneshjoo, Fatemeh Akbari Ghara

Abstract:

Background and Purpose: Obesity is an increasing factor in cardiovascular disease and in serum levels of cellular adhesion molecules, and it plays an important role in predicting the risk of coronary artery disease. The purpose of this research was to study the effect of eight weeks of moderate intensity interval training and curcumin intake on the ICAM-1 and VCAM-1 levels of menopausal fat rats. Materials and methods: In this study, 28 Wistar menopausal fat rats aged 6-8 weeks with an average weight of 250-300 g were randomly divided into four groups of 7 rats each: control, curcumin supplement, moderate intensity interval training, and moderate intensity interval training + curcumin supplement. The training program was planned as 8 weeks with 3 sessions per week. In the first week, each session consisted of 10 one-minute sets at 50 percent intensity with 2-minute intervals between sets. Subjects started at 14 m/min, and 2 m/min was added weekly until a speed of 28 m/min was reached in the 8th week. Blood samples were taken 48 hours after the last training session, and ICAM-1 and VCAM-1 levels were measured. SPSS software, one-way analysis of variance (ANOVA) and the Pearson correlation coefficient were used to assess the results. Results: The results showed that eight weeks of training and taking curcumin had significant effects on the ICAM-1 levels of the rats (p ≤ 0.05). However, it had no significant effect on VCAM-1 levels in menopausal obese rats (p > 0.05). There was no significant correlation between the levels of ICAM-1 and VCAM-1 over the eight weeks of training and taking curcumin. Conclusion: Implementation of moderate intensity interval training and the use of curcumin decreased ICAM-1 significantly.

Keywords: curcumin, interval training, ICAM-1, VCAM-1

Procedia PDF Downloads 172
1951 Analyzing Environmental Emotive Triggers in Terrorist Propaganda

Authors: Travis Morris

Abstract:

The purpose of this study is to measure the intersection of environmental security entities in terrorist propaganda. To the best of the author's knowledge, this is the first study of its kind to examine this intersection within terrorist propaganda. Rosoka natural language processing software and frame analysis are used to advance our understanding of how environmental frames function as emotive triggers. Violent jihadi demagogues use frames to suggest violent and non-violent solutions to their grievances. Emotive triggers are framed in a way that leverages individual and collective attitudes in psychological warfare. A comparative research design is used because of the differences and similarities that exist between two variants of violent jihadi propaganda that target western audiences. Analysis is based on salience and network text analysis, which generates violent jihadi semantic networks. Findings indicate that environmental frames are used as emotive triggers across both data sets, but also as tactical and information data points. A significant finding is that certain core environmental emotive triggers like 'water,' 'soil,' and 'trees' are significantly salient at the aggregate level across both data sets. All environmental entities can be classified into two categories, symbolic and literal. Importantly, this research illustrates how demagogues use environmental emotive triggers in cyberspace, from a subcultural perspective, to mobilize target audiences to their ideology and praxis. Understanding the anatomy of propaganda construction is necessary in order to generate effective counter-narratives in information operations. This research advances an additional method to inform practitioners and policy makers of how environmental security and propaganda intersect.
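The salience and co-occurrence measures behind the network text analysis described above can be sketched minimally as follows; the documents and tracked entities are invented examples, not the study's corpus, and real tools like Rosoka add entity extraction and weighting on top of this idea:

```python
from collections import Counter
from itertools import combinations

def salience_and_edges(documents, entities):
    """Salience = total frequency of each tracked entity across the corpus;
    edges = how often two entities co-occur within the same document,
    which is the raw material of a semantic network."""
    salience = Counter()
    edges = Counter()
    for doc in documents:
        words = doc.lower().split()
        present = [e for e in entities if e in words]
        for e in present:
            salience[e] += words.count(e)
        for a, b in combinations(sorted(set(present)), 2):
            edges[(a, b)] += 1
    return salience, edges

docs = ["the water and the soil sustain us",
        "protect the trees and the water"]
salience, edges = salience_and_edges(docs, ["water", "soil", "trees"])
```

Entities with high salience across both propaganda corpora would correspond to the core emotive triggers the study reports, and heavy edges show which triggers are framed together.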

Keywords: propaganda analysis, emotive triggers, environmental security, frames

Procedia PDF Downloads 114
1950 How to Use Big Data in Logistics Issues

Authors: Mehmet Akif Aslan, Mehmet Simsek, Eyup Sensoy

Abstract:

Big Data is one of today's cutting-edge technologies. As the technology becomes widespread, so does the data. Utilizing massive data sets enables companies to gain competitive advantages over their adversaries. Among the many areas of Big Data usage, logistics plays a significant role in both the commercial sector and the military. This paper lays out what Big Data is and how it is used in both military and commercial logistics.

Keywords: big data, logistics, operational efficiency, risk management

Procedia PDF Downloads 609
1949 Flow Links Curiosity and Creativity: The Mediating Role of Flow

Authors: Nicola S. Schutte, John M. Malouff

Abstract:

Introduction: Curiosity is a positive emotion and motivational state that consists of the desire to know. Curiosity consists of several related dimensions, including a desire for exploration, deprivation sensitivity, and stress tolerance. Creativity involves generating novel and valuable ideas or products. How curiosity may prompt greater creativity remains to be investigated. The phenomenon of flow may link curiosity and creativity. Flow is characterized by intense concentration and absorption and gives rise to optimal performance. Objective of Study: The objective of the present study was to investigate whether the phenomenon of flow may link curiosity with creativity. Methods and Design: Fifty-seven individuals from Australia (45 women and 12 men, mean age 35.33 years, SD = 9.4) participated. Participants were asked to design a program encouraging residents in a local community to conserve water and to record the elements of their program in writing. Participants were then asked to rate their experience as they developed and wrote about their program. Participants rated their experience on the Dimensional Curiosity Measure sub-scales assessing the exploration, deprivation sensitivity, and stress tolerance facets of curiosity, and on the Flow Short Scale. Reliability of the measures, as assessed by Cronbach's alpha, was as follows: Exploration Curiosity = .92, Deprivation Sensitivity Curiosity = .66, Stress Tolerance Curiosity = .93, and Flow = .96. Two raters independently coded each participant's water conservation program description on creativity. The mixed-model intraclass correlation coefficient for the two sets of ratings was .73. The mean of the two ratings produced the final creativity score for each participant. Results: During the experience of designing the program, all three types of curiosity were significantly associated with flow.
Pearson r correlations were as follows: Exploration Curiosity and flow, r =.68 (higher Exploration Curiosity was associated with more flow); Deprivation Sensitivity Curiosity and flow, r =.39 (higher Deprivation Sensitivity Curiosity was associated with more flow); and Stress Tolerance Curiosity and flow, r = .44 (more stress tolerance in relation to novelty and exploration was associated with more flow). Greater experience of flow was significantly associated with greater creativity in designing the water conservation program, r =.39. The associations between dimensions of curiosity and creativity did not reach significance. Even though the direct relationships between dimensions of curiosity and creativity were not significant, indirect relationships through the mediating effect of the experience of flow between dimensions of curiosity and creativity were significant. Mediation analysis using PROCESS showed that flow linked Exploration Curiosity with creativity, standardized beta=.23, 95%CI [.02,.25] for the indirect effect; Deprivation Sensitivity Curiosity with creativity, standardized beta=.14, 95%CI [.04,.29] for the indirect effect; and Stress Tolerance Curiosity with creativity, standardized beta=.13, 95%CI [.02,.27] for the indirect effect. Conclusions: When engaging in an activity, higher levels of curiosity are associated with greater flow. More flow is associated with higher levels of creativity. Programs intended to increase flow or creativity might build on these findings and also explore causal relationships.
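The product-of-coefficients logic behind the mediation result above can be sketched as follows; the curiosity (x), flow (m), and creativity (y) scores are invented so that the indirect effect comes out exactly, and a real analysis such as PROCESS adds bootstrap confidence intervals on top:

```python
def slope(x, y):
    """OLS slope of y on x, with intercept."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def indirect_effect(x, m, y):
    """Product-of-coefficients mediation (X -> M -> Y):
    a = slope of the mediator M on X;
    b = effect of M on Y with X partialled out (Frisch-Waugh residual trick);
    the indirect effect is a * b."""
    a = slope(x, m)
    mx, mm = sum(x) / len(x), sum(m) / len(m)
    m_resid = [mi - (mm + a * (xi - mx)) for xi, mi in zip(x, m)]
    b = slope(m_resid, y)
    return a * b

# hypothetical scores: m tracks x with noise, y depends on both x and m
x = [0, 1, 2, 3, 4]
m = [0.1, 2.0, 3.9, 6.1, 8.0]
y = [xi + 3 * mi for xi, mi in zip(x, m)]
ab = indirect_effect(x, m, y)
```

Here a = 1.99 and b = 3 by construction, so the indirect effect is 5.97: the part of curiosity's influence on creativity that travels through flow.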

Keywords: creativity, curiosity, flow, motivation

Procedia PDF Downloads 156
1948 Airborne SAR Data Analysis for Impact of Doppler Centroid on Image Quality and Registration Accuracy

Authors: Chhabi Nigam, S. Ramakrishnan

Abstract:

This paper presents an analysis of airborne Synthetic Aperture Radar (SAR) data to study the impact of the Doppler centroid on image quality and geocoding accuracy from the perspective of the Stripmap mode of data acquisition. Although in Stripmap mode the radar beam points at 90 degrees broadside (side looking), a shift in the Doppler centroid is inevitable due to platform motion. Inaccurate estimation of the Doppler centroid leads to poor image quality and image mis-registration. The effect of the Doppler centroid is analyzed in this paper using multiple sets of data collected from an airborne platform. Occurrences of ghost (ambiguous) targets and their power levels have been analyzed, as these impact the appropriate choice of PRF. The effect of aircraft attitudes (roll, pitch and yaw) on the Doppler centroid is also analyzed with the collected data sets. The various stages of the Range Doppler Algorithm (RDA) used for image formation in Stripmap mode, namely range compression, Doppler centroid estimation, azimuth compression, and range cell migration correction, are analyzed to find the performance limits and the dependence of the imaging geometry on the final image. The ability of Doppler centroid estimation to enhance the imaging accuracy for registration is also illustrated in this paper. The paper also discusses the processing of low-squint SAR data and the challenges and performance limits imposed by the imaging geometry and platform dynamics on the final image quality metrics. Finally, the effect on various terrain types, including land, water and bright scatterers, is also presented.
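One standard way to estimate the fractional Doppler centroid discussed above is the pulse-pair (correlation) estimator; the sketch below runs it on a synthetic azimuth signal with an assumed PRF and is illustrative only, not the paper's processing chain:

```python
import cmath
import math

def doppler_centroid(samples, prf):
    """Correlation Doppler estimator: the phase of the lag-one azimuth
    autocorrelation gives the fractional Doppler centroid,
    f_dc = PRF * angle(sum s[n+1] * conj(s[n])) / (2 * pi)."""
    acc = sum(b * a.conjugate() for a, b in zip(samples, samples[1:]))
    return prf * cmath.phase(acc) / (2.0 * math.pi)

# synthetic azimuth signal with a known 120 Hz centroid, PRF = 1000 Hz
prf, f_dc = 1000.0, 120.0
sig = [cmath.exp(2j * math.pi * f_dc * n / prf) for n in range(256)]
est = doppler_centroid(sig, prf)
```

Because the phase wraps at ±PRF/2, this estimator recovers only the fractional part of the centroid; resolving the integer PRF ambiguity is a separate step, which is one reason PRF selection interacts with ghost-target levels.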

Keywords: ambiguous target, Doppler centroid, image registration, airborne SAR

Procedia PDF Downloads 186
1947 Remaining Useful Life Estimation of Bearings Based on Nonlinear Dimensional Reduction Combined with Timing Signals

Authors: Zhongmin Wang, Wudong Fan, Hengshan Zhang, Yimin Zhou

Abstract:

In data-driven prognostic methods, the prediction accuracy of the estimation of the remaining useful life of bearings mainly depends on the performance of health indicators, which are usually fused from statistical features extracted from vibration signals. However, the existing health indicators have the following two drawbacks: (1) the different ranges of the statistical features contribute differently to constructing the health indicators, and expert knowledge is required to extract the features; (2) when convolutional neural networks are utilized to tackle the time-frequency features of signals, the time-series nature of the signals is not considered. To overcome these drawbacks, in this study, a method combining a convolutional neural network with a gated recurrent unit is proposed to extract time-frequency image features. The extracted features are utilized to construct a health indicator and predict the remaining useful life of bearings. First, the original signals are converted into time-frequency images using the continuous wavelet transform, forming the original feature sets. Second, with the convolutional and pooling layers of the convolutional neural network, the most sensitive features of the time-frequency images are selected from the original feature sets. Finally, these selected features are fed into the gated recurrent unit to construct the health indicator. The results show that the proposed method performs better than related studies that used the same bearing dataset provided by PRONOSTIA.
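A minimal scalar sketch of the gated recurrent unit at the core of the proposed method; the weights are chosen arbitrarily for illustration, whereas the actual model operates on vector-valued CNN-selected time-frequency features:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, w):
    """Single scalar GRU step: update gate z, reset gate r, candidate
    state h_tilde, then a convex blend of old and candidate hidden state.
    This recurrence is what lets the model carry time-series context."""
    z = sigmoid(w["wz"] * x + w["uz"] * h + w["bz"])
    r = sigmoid(w["wr"] * x + w["ur"] * h + w["br"])
    h_tilde = math.tanh(w["wh"] * x + w["uh"] * (r * h) + w["bh"])
    return (1.0 - z) * h + z * h_tilde

# hypothetical weights; run a short degradation-feature sequence through the unit
w = dict(wz=0.5, uz=0.3, bz=0.0, wr=0.4, ur=0.2, br=0.0, wh=0.8, uh=0.6, bh=0.0)
h = 0.0
for x in [0.2, -0.1, 0.5]:
    h = gru_cell(x, h, w)
```

The final hidden state h summarizes the whole input sequence; in the paper's pipeline a value like this, in vector form, is what the health indicator is built from.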

Keywords: continuous wavelet transform, convolutional neural network, gated recurrent unit, health indicators, remaining useful life

Procedia PDF Downloads 101
1946 Impact of Map Generalization in Spatial Analysis

Authors: Lin Li, P. G. R. N. I. Pussella

Abstract:

When representing spatial data and their attributes on different types of maps, the scale plays a key role in the process of map generalization. The process consists of two main operators: selection and omission. Once data are selected, they undergo several geometric transformation processes such as elimination, simplification, smoothing, exaggeration, displacement, aggregation and size reduction. As a result of these operations at different levels of data, the geometry of spatial features such as length, sinuosity, orientation, perimeter and area is altered. This is worst in the case of the preparation of small-scale maps, since the cartographer does not have enough space to represent all the features on the map. When GIS users want to analyze a set of spatial data, they often retrieve a data set and perform the analysis without considering very important characteristics such as the scale, the purpose of the map and the degree of generalization. Further, GIS users use and compare different maps with different degrees of generalization. Sometimes, GIS users go beyond the scale of the source map using the zoom-in facility and violate the basic cartographic rule that it is not suitable to create a larger-scale map from a smaller-scale map. The main objective of this study is to discuss the effect of map generalization on GIS analysis. Three digital maps with different scales (1:10000, 1:50000 and 1:250000), prepared by the Survey Department of Sri Lanka, the national mapping agency of Sri Lanka, were used. Common features appearing on all three maps were taken, and an overlay analysis was done by repeating the analysis with different combinations of the data. Road, river and land use data sets were used for the study. A simple model, finding the best place for a wildlife park, was used to identify the effects.
The results show remarkable effects of the different degrees of generalization. It can be seen that different locations with different geometries were obtained as the outputs of this analysis. The study suggests that there should be reasonable methods to overcome this effect. As a solution, it is recommended to bring all the data sets to a common scale before doing the analysis.
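The simplification operator mentioned above is commonly implemented with the Douglas-Peucker algorithm, which illustrates how generalization changes feature geometry; the coordinates and tolerance below are invented for illustration:

```python
def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * (px - ax) - dx * (py - ay)) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, tol):
    """Classic line simplification: if the farthest intermediate vertex
    exceeds the tolerance, keep it and recurse on both halves;
    otherwise collapse the run to its two endpoints."""
    if len(points) < 3:
        return list(points)
    idx, dmax = 0, -1.0
    for i in range(1, len(points) - 1):
        d = point_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], tol)
    return left[:-1] + douglas_peucker(points[idx:], tol)

# a wiggly line with one prominent spike; the spike survives, small wiggles go
line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5.0), (4, 0.1), (5, 0)]
simplified = douglas_peucker(line, tol=0.5)
```

The tolerance plays the role of scale: a larger tolerance removes more vertices, shortening the line and reducing its sinuosity, which is exactly the geometric distortion the study measures across the 1:10000 to 1:250000 maps.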

Keywords: generalization, GIS, scales, spatial analysis

Procedia PDF Downloads 305
1945 Suitability of Green Macroalgae Porteresia coarctata as a Feed for Macrobrachium rosenbergii

Authors: Rajrupa Ghosh, Abhijit Mitra

Abstract:

Future use of animal protein sources in prawn feeds is expected to be considerably reduced as a consequence of increasing economic, environmental and safety issues. Of main concern has been the use of expensive marine protein sources, such as fish meal, which often results in fouling of water quality and disease outbreaks in cultured species. To determine the prawns' capacity to use practical feeds with plant proteins as replacement ingredients for animal protein sources, an 8-month growth trial was conducted in two sets of ponds using juvenile (0.02 g) Macrobrachium rosenbergii. One set (comprising three ponds) served as experimental ponds and received a formulated feed prepared with 30% Porteresia coarctata dust along with other general ingredients, while the other set (comprising another three ponds) served as control ponds and received a commercial feed. Mean final weight, percent weight gain, final net yield, feed conversion ratio and survival were evaluated. Higher condition index values, survival rates and gains in prawn weight were observed in the experimental ponds compared to the control ponds. Lower FCR values were observed in the experimental ponds than in the control ponds. Evaluation of production parameters at the end of the study demonstrated significant differences (P < 0.05) between the two sets of ponds. The variation may be attributed to the specially formulated plant-based feed, which not only boosted the growth of the prawns but also upgraded the ambient aquatic health. These results indicate that fish meal can be replaced with algal protein sources in diets without affecting prawn growth and production.
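The feed conversion ratio and percent weight gain evaluated above follow standard aquaculture definitions; a minimal sketch with hypothetical pond figures, not the trial's measurements:

```python
def feed_conversion_ratio(total_feed_kg, weight_gain_kg):
    """FCR = feed supplied / biomass gained; lower means better feed use."""
    return total_feed_kg / weight_gain_kg

def percent_weight_gain(initial_g, final_g):
    """Weight gain relative to the stocking weight, in percent."""
    return 100.0 * (final_g - initial_g) / initial_g

# hypothetical figures for one pond over the trial
fcr = feed_conversion_ratio(180.0, 100.0)   # 1.8 kg feed per kg gained
gain = percent_weight_gain(0.02, 45.0)      # growth from a 0.02 g juvenile
```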

Keywords: Macrobrachium rosenbergii, Porteresia coarctata, Indian Sundarbans, feed

Procedia PDF Downloads 331
1944 Enhancing Understanding and Engagement in Linear Motion Using 7R-Based Module

Authors: Mary Joy C. Montenegro, Voltaire M. Mistades

Abstract:

This action research was implemented to enhance the teaching of linear motion and to improve students' conceptual understanding and engagement using a developed 7R-based module called 'Module on Vectors and One-Dimensional Kinematics' (MVOK). MVOK was validated in terms of objectives, contents, format and language used, presentation, usefulness, and overall presentation. The validation process yielded a rating of 4.7, interpreted as 'Very Acceptable', with substantial agreement (0.60) among the validators. One intact class of 46 Grade 12 STEM students from a public school in Paranaque City served as the participants of this study. The students were taught using the module during the first semester of the 2019–2020 academic year. Employing a mixed-method approach, quantitative data were gathered using a pretest/posttest, activity sheets, problem sets, and a survey form, while qualitative data were obtained from surveys, interviews, observations, and a reflection log. After the implementation, there was a significant mean difference of 18.4 between students' pre-test and post-test scores on the 24-item test, with a moderate Hake gain of 0.45 and an effect size of 0.83. Moreover, the scores on the activity and problem sets had 'very good' to 'excellent' ratings, which signifies an increase in the level of students' conceptual understanding. There also exists a significant difference between the mean scores of students' engagement overall (t = 4.79, p < 0.001) and in the dimensions of emotion (t = 2.51, p = 0.03) and participation/interaction (t = 5.75, p = 0.001). These findings were supported by the qualitative data. Students expressed positive views, since the module is an accessible learning tool with well-detailed explanations and examples. The results of this study substantiate that using MVOK can lead to better understanding of physics content and higher engagement.
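The moderate Hake gain reported above follows the standard normalized-gain formula; a small sketch using hypothetical class-average percentages (not the study's raw scores), chosen only to reproduce a gain of 0.45:

```python
def hake_gain(pre_pct, post_pct):
    """Normalized (Hake) gain: fraction of the possible improvement achieved.
    Values between 0.3 and 0.7 are conventionally read as 'moderate'."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# hypothetical class averages consistent with a moderate gain
g = hake_gain(40.0, 67.0)   # (67 - 40) / (100 - 40) = 0.45
```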

Keywords: conceptual understanding, engagement, linear motion, module

Procedia PDF Downloads 104
1943 Entropy Measures on Neutrosophic Soft Sets and Its Application in Multi Attribute Decision Making

Authors: I. Arockiarani

Abstract:

The focus of this paper is to furnish entropy measures for neutrosophic sets and neutrosophic soft sets, which quantify the uncertainty that permeates the universe of discourse and the system. Various characterizations of these entropy measures are derived. Further, we exemplify the concept by applying entropy to various real-time decision-making problems.
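As one concrete illustration (the paper's exact measure is not reproduced here), a commonly cited distance-based entropy for a single-valued neutrosophic set, in the style of Majumdar and Samanta, assigns 0 to a crisp set and 1 to the maximally indeterminate set:

```python
def neutrosophic_entropy(elements):
    """Entropy of a single-valued neutrosophic set given as (T, I, F) triples:
    E(A) = 1 - (1/n) * sum((T + F) * |I - (1 - I)|)."""
    n = len(elements)
    return 1.0 - sum((t + f) * abs(i - (1.0 - i)) for t, i, f in elements) / n

crisp = [(1.0, 0.0, 0.0)]   # fully determinate element -> entropy 0
vague = [(0.5, 0.5, 0.5)]   # maximally indeterminate element -> entropy 1
```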

Keywords: entropy measure, Hausdorff distance, neutrosophic set, soft set

Procedia PDF Downloads 224
1942 Ultrasound Therapy: Amplitude Modulation Technique for Tissue Ablation by Acoustic Cavitation

Authors: Fares A. Mayia, Mahmoud A. Yamany, Mushabbab A. Asiri

Abstract:

In recent years, non-invasive Focused Ultrasound (FU) has been utilized for generating bubbles (cavities) to ablate target tissue by mechanical fractionation. Intensities >10 kW/cm² are required to generate inertial cavities. The generation, rapid growth, and collapse of these inertial cavities cause tissue fractionation, and the process is called histotripsy. The ability to fractionate tissue from outside the body has many clinical applications, including the destruction of tumor masses. The process of tissue fractionation leaves a void at the treated site, where all the affected tissue is liquefied into sub-micron-sized particles. The liquefied tissue will eventually be absorbed by the body. Histotripsy is a promising non-invasive treatment modality. This paper presents a technique for generating inertial cavities at lower intensities (< 1 kW/cm²). The technique (patent pending) is based on amplitude modulation (AM), whereby a low-frequency signal modulates the amplitude of a higher-frequency FU wave. The cavitation threshold is lower at low frequencies; the intensity required to generate cavitation in water at 10 kHz is two orders of magnitude lower than the intensity at 1 MHz. The amplitude modulation technique can operate in both continuous-wave (CW) and pulsed-wave (PW) modes, and the percentage modulation (modulation index) can be varied from 0% (thermal effect) to 100% (cavitation effect), thus allowing a range of ablating effects from hyperthermia to histotripsy. Furthermore, changing the frequency of the modulating signal allows control over the size of the generated cavities. Results from in vitro work demonstrate the efficacy of the new technique in fractionating soft tissue and solid calcium carbonate (chalk) material. The technique, when combined with MR or ultrasound imaging, will provide a precise treatment modality for ablating diseased tissue without affecting the surrounding healthy tissue.
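The modulation scheme described above amounts to imposing a low-frequency envelope on a high-frequency carrier; the frequencies and modulation index below are illustrative, not the authors' operating parameters:

```python
import math

def am_drive_signal(t, f_carrier=1.0e6, f_mod=1.0e4, m=1.0, amplitude=1.0):
    """Amplitude-modulated FU drive signal at time t (seconds).
    m = 0 gives a pure carrier (thermal regime); m = 1 gives full
    modulation (cavitation regime)."""
    envelope = 1.0 + m * math.sin(2.0 * math.pi * f_mod * t)
    return amplitude * envelope * math.sin(2.0 * math.pi * f_carrier * t)
```

At full modulation the peak amplitude approaches twice the carrier amplitude, which is what lowers the intensity needed to cross the cavitation threshold.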

Keywords: focused ultrasound therapy, histotripsy, inertial cavitation, mechanical tissue ablation

Procedia PDF Downloads 290
1941 Linguistic Summarization of Structured Patent Data

Authors: E. Y. Igde, S. Aydogan, F. E. Boran, D. Akay

Abstract:

Patent data play an increasingly important role in economic growth, innovation, technical advantage and business strategy, and even in competition between countries. Analyzing patent data is crucial, since patents cover a large part of the world's technological information. In this paper, we have used the linguistic summarization technique to test the validity of hypotheses about patent data stated in the literature.
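A linguistic summary of the Yager type ("Q of the records are S") obtains its degree of truth by pushing the average membership through a fuzzy quantifier; the quantifier shape and the patent attribute below are illustrative assumptions, not the authors' model:

```python
def most(r):
    """A common piecewise-linear model of the fuzzy quantifier 'most'."""
    if r <= 0.3:
        return 0.0
    if r >= 0.8:
        return 1.0
    return (r - 0.3) / 0.5

def truth_of_summary(records, summarizer, quantifier=most):
    """Degree of truth of 'Q records are S' (Yager's calculus)."""
    r = sum(summarizer(x) for x in records) / len(records)
    return quantifier(r)

# hypothetical: 'most patents have a high forward-citation count'
citations = [0, 5, 12, 30, 45]
high = lambda c: min(1.0, c / 20.0)
```

Here the average membership is 0.57, so the summary "most patents have a high citation count" holds to degree 0.54.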

Keywords: data mining, fuzzy sets, linguistic summarization, patent data

Procedia PDF Downloads 245
1940 Pruning Algorithm for the Minimum Rule Reduct Generation

Authors: Sahin Emrah Amrahov, Fatih Aybar, Serhat Dogan

Abstract:

In this paper, we consider the rule reduct generation problem. The Rule Reduct Generation (RG) and Modified Rule Generation (MRG) algorithms, which are used to solve this problem, are well known. As an alternative to these algorithms, we develop the Pruning Rule Generation (PRG) algorithm and compare it with RG and MRG.
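To give the flavor of the problem (this is a brute-force minimal-rule search, not the RG, MRG, or PRG algorithms themselves), a certain decision rule can be read off a decision table by finding, for each object, a smallest subset of condition attributes whose values determine the decision:

```python
from itertools import combinations

def minimal_rules(table, cond_attrs, dec_attr):
    """For each object, find a smallest set of condition attributes whose values
    determine the decision consistently over the whole table (a certain rule)."""
    rules = set()
    for row in table:
        done = False
        for k in range(1, len(cond_attrs) + 1):
            for subset in combinations(cond_attrs, k):
                matching = [r for r in table
                            if all(r[a] == row[a] for a in subset)]
                if all(r[dec_attr] == row[dec_attr] for r in matching):
                    rules.add((tuple((a, row[a]) for a in subset), row[dec_attr]))
                    done = True
                    break   # the first (shortest) subset suffices for this object
            if done:
                break
    return rules

# toy decision table (hypothetical): fever alone determines flu here
table = [{'fever': 'high', 'cough': 'yes', 'flu': 'yes'},
         {'fever': 'no',   'cough': 'yes', 'flu': 'no'},
         {'fever': 'high', 'cough': 'no',  'flu': 'yes'}]
rules = minimal_rules(table, ['fever', 'cough'], 'flu')
```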

Keywords: rough sets, decision rules, rule induction, classification

Procedia PDF Downloads 499
1939 Establish Co-Culture System of Dehalococcoides and Sulfate-Reducing Bacteria to Generate Ferrous Sulfide for Reversing Sulfide-Inhibited Reductive Dechlorination

Authors: Po-Sheng Kuo, Che-Wei Lu, Ssu-Ching Chen

Abstract:

Chlorinated ethenes (CEs) constitute a predominant contaminant at Taiwan's polluted sites, particularly in groundwater rich in sulfate salts, which substantially impedes remediation efforts. The reduction of sulfate by sulfate-reducing bacteria (SRB) impairs the dechlorination efficiency of Dehalococcoides by generating hydrogen sulfide (H₂S), resulting in incomplete degradation of the chlorinated compounds and thereby in the failure of bioremediation. To elucidate the interactions between sulfate reduction and dechlorination, this study establishes a co-culture system of Dehalococcoides and SRB that overcomes H₂S inhibition through the synthesis of ferrous sulfide (FeS), which is commonly utilized in chemical remediation due to its high reduction potential. First, the study demonstrates that the addition of ferrous chloride (FeCl₂) effectively removed the H₂S produced by SRB and enhanced the degradation of trichloroethylene to ethene, overcoming the inhibition caused by H₂S in high-sulfate environments. Comparing different ferrous dosages for the biogenic generation of FeS, efficiency was optimized by adding FeCl₂ at a ratio equal to the sulfate concentration in the environment; this was more effective at removing H₂S and produced crystal particles up to 10 times smaller than those synthesized under excessive FeCl₂ dosages, addressing clogging issues in in situ remediation. Finally, using Taiwan's indigenous dechlorinating consortium in a simulated high-sulfate contaminated environment, analysis of microbial biodiversity revealed a higher species richness within the FeS group, conducive to ecological stability. This study validates the potential of the co-culture system for generating biogenic FeS under sulfate and CE co-contamination, removing sulfate-reduction products, and improving CE remediation through integrated chemical and biological approaches.

Keywords: biogenic ferrous sulfide, chlorinated ethenes, Dehalococcoides, sulfate-reducing bacteria, sulfide inhibition

Procedia PDF Downloads 20
1938 Calculation of Electronic Structures of Nickel in Interaction with Hydrogen by Density Functional Theoretical (DFT) Method

Authors: Choukri Lekbir, Mira Mokhtari

Abstract:

Hydrogen-material interactions and mechanisms can be modeled at the nanoscale by quantum methods. In this work, the effect of hydrogen on the electronic properties of a cluster model of nickel has been studied using the density functional theory (DFT) method. Two types of clusters were optimized: nickel and the hydrogen-nickel system. In the case of nickel clusters (n = 1-6) without hydrogen, three types of electronic structures (neutral, cationic and anionic) were optimized using three basis-set calculations (B3LYP/LANL2DZ, PW91PW91/DGDZVP2, PBE/DGDZVP2). Comparison of the binding energies and bond lengths of the three structures of nickel clusters (neutral, cationic and anionic) obtained with those basis sets shows that the results for neutral and anionic nickel clusters are in good agreement with experimental results. For neutral and anionic nickel clusters, comparing the energies and bond lengths obtained with the three basis sets shows that the PBE/DGDZVP2 basis set best matches the experimental results. In the case of anionic nickel clusters (n = 1-6) with hydrogen, optimization of the anionic hydrogen-nickel structures using the PBE/DGDZVP2 basis set shows that the binding energies and bond lengths increase compared to those obtained for anionic nickel clusters without hydrogen. This reveals the armoring effect exerted by hydrogen on the electronic structure of nickel, which is due to the storage of hydrogen energy within the nickel cluster structures. The comparison of the bond lengths of the two cluster types shows an expansion of the cluster geometry due to the presence of hydrogen.

Keywords: binding energies, bond lengths, density functional theoretical, geometry optimization, hydrogen energy, nickel cluster

Procedia PDF Downloads 390
1937 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features

Authors: Bushra Zafar, Usman Qamar

Abstract:

Large sample sizes and high dimensionality undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting knowledge from a variety of databases; classification, a form of supervised learning, designs models to describe vital data classes, where the structure of the classifier is based on a class attribute. Classification efficiency and accuracy are often greatly influenced by noisy and undesirable features in real application data sets. The inherent nature of a data set greatly complicates its quality analysis and leaves quite few practical approaches to use. To our knowledge, we present for the first time an approach for investigating the structure and quality of data sets through a targeted analysis that localizes their noisy and irrelevant features. In machine learning, feature selection is a pre-processing step that selects a subset of features from the full feature set, reducing the search space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of a given data sample by searching for a small set of important features that may yield good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is employed, with an external classifier for discriminative feature selection. Features are selected based on their number of occurrences in the chosen chromosomes. Sample data sets have been used to demonstrate the proposed idea. The proposed method improved the average accuracy across different data sets to about 95%. Experimental results illustrate that the proposed algorithm increases the accuracy of prediction for different diseases.
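The wrapper idea can be sketched end to end: a genetic algorithm evolves bit-mask chromosomes, and each mask's fitness is the accuracy of a classifier trained on only the selected features. This is a minimal, stdlib-only sketch (a leave-one-out 1-NN stands in for the external classifier; the GA operators and parameters are illustrative, not the paper's heuristic):

```python
import random

def knn_accuracy(X, y, mask):
    """Leave-one-out 1-NN accuracy using only the features selected by mask."""
    idx = [i for i, m in enumerate(mask) if m]
    if not idx:
        return 0.0
    correct = 0
    for i in range(len(X)):
        nearest = min((j for j in range(len(X)) if j != i),
                      key=lambda j: sum((X[i][k] - X[j][k]) ** 2 for k in idx))
        correct += (y[nearest] == y[i])
    return correct / len(X)

def ga_select(X, y, pop=20, gens=15, seed=0):
    """Evolve feature masks; fitness is the wrapped classifier's accuracy."""
    rng = random.Random(seed)
    n = len(X[0])
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population, key=lambda m: knn_accuracy(X, y, m),
                        reverse=True)
        parents = ranked[: pop // 2]           # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:             # bit-flip mutation
                child[rng.randrange(n)] ^= 1
            children.append(child)
        population = parents + children
    return max(population, key=lambda m: knn_accuracy(X, y, m))
```

On a toy data set where feature 0 carries the class and feature 1 is uninformative, the GA reliably keeps feature 0 in the best mask.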

Keywords: data mining, genetic algorithm, KNN algorithm, wrapper-based feature selection

Procedia PDF Downloads 293
1936 Dosimetric Comparison of Conventional Optimization Methods with Inverse Planning Simulated Annealing Technique

Authors: Shraddha Srivastava, N. K. Painuly, S. P. Mishra, Navin Singh, Muhsin Punchankandy, Kirti Srivastava, M. L. B. Bhatt

Abstract:

Various optimization methods used in interstitial brachytherapy are based on the alteration of dwell positions and dwell weights to produce a dose distribution based on the implant geometry. Since these optimization schemes are not anatomy-based, they can lead to deviations from the desired plan. This study was therefore carried out to compare the anatomy-based Inverse Planning Simulated Annealing (IPSA) optimization technique with graphical and geometrical optimization methods in interstitial high-dose-rate brachytherapy planning for cervical carcinoma. Six patients with 12 CT data sets of MUPIT implants in HDR brachytherapy of cervical cancer were prospectively studied. The HR-CTV and organs at risk (OARs) were contoured in the Oncentra treatment planning system (TPS) using the GYN GEC-ESTRO guidelines on cervical carcinoma. Three sets of plans were generated for each fraction using IPSA, graphical optimization (GrOPT) and geometrical optimization (GOPT). All patients were treated to a dose of 20 Gy in 2 fractions. The main objective was to cover at least 95% of the HR-CTV with 100% of the prescribed dose (V100 ≥ 95% of HR-CTV). The IPSA-, GrOPT- and GOPT-based plans were compared in terms of target coverage, OAR doses, homogeneity index (HI) and conformity index (COIN) using dose-volume histograms (DVHs). Target volume coverage (mean V100) was found to be 93.98 ± 0.87%, 91.34 ± 1.02% and 85.05 ± 2.84% for the IPSA, GrOPT and GOPT plans, respectively. Mean D90 (minimum dose received by 90% of the HR-CTV) values for the IPSA, GrOPT and GOPT plans were 10.19 ± 1.07 Gy, 10.17 ± 0.12 Gy and 7.99 ± 1.0 Gy, respectively, while D100 (minimum dose received by 100% of the HR-CTV volume) was 6.55 ± 0.85 Gy, 6.55 ± 0.65 Gy and 4.73 ± 0.14 Gy, respectively. IPSA plans resulted in lower doses to the bladder (D₂
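The coverage metrics compared above (Vx and Dx) are simple order statistics of the voxel-dose distribution; a minimal sketch on a hypothetical dose array (not the study's DVH data):

```python
import math

def v_x(doses, x_pct, prescribed):
    """V_x: percent of the target volume receiving at least x% of the prescribed dose."""
    cutoff = prescribed * x_pct / 100.0
    return 100.0 * sum(d >= cutoff for d in doses) / len(doses)

def d_x(doses, x_pct):
    """D_x: minimum dose received by the best-covered x% of the volume."""
    ranked = sorted(doses, reverse=True)
    k = max(1, math.ceil(len(ranked) * x_pct / 100.0))
    return ranked[k - 1]

# ten equal-volume voxels with hypothetical doses (Gy), prescription 5 Gy
doses = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
coverage = v_x(doses, 100, 5.0)   # 60% of voxels receive >= 100% of 5 Gy
d90 = d_x(doses, 90)              # dose covering 90% of the volume
```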

Keywords: cervical cancer, HDR brachytherapy, IPSA, MUPIT

Procedia PDF Downloads 155
1935 Collective Actions of the Women in Black of the Gaza Strip

Authors: Lina Fernanda González

Abstract:

This essay attempts, on the one hand, to make visible the work of the international network of Women in Black (henceforth WB) and, on the other, to present the work of Women's International Courts as a political practice, focusing on how this work generates a collective identity and thereby becomes a peace-building space, rescuing the symbolic value of their practices of peaceful resistance as political scenarios that also serve pedagogical and healing purposes.

Keywords: collective actions, women, peace, human rights and humanitarian international law

Procedia PDF Downloads 367
1934 Carbon Nanotubes (CNTs) as Multiplex Surface Enhanced Raman Scattering Sensing Platforms

Authors: Pola Goldberg Oppenheimer, Stephan Hofmann, Sumeet Mahajan

Abstract:

Owing to its fingerprint molecular specificity and high sensitivity, surface-enhanced Raman scattering (SERS) is an established analytical tool for chemical and biological sensing capable of single-molecule detection. A strong Raman signal can be generated from SERS-active platforms provided the analyte is within the enhanced plasmon field generated near a noble-metal nanostructured substrate. The key requirement for generating strong plasmon resonances to provide this electromagnetic enhancement is an appropriate metal surface roughness. Controlling the nanoscale features that generate these regions of high electromagnetic enhancement, the so-called SERS 'hot-spots', is still a challenge. Significant advances have been made in SERS research, with a wide range of techniques for generating substrates with tunable size and shape of the nanoscale roughness features. Nevertheless, the development and application of SERS have been inhibited by the irreproducibility and complexity of fabrication routes. The ability to generate straightforward, cost-effective, multiplexable and addressable SERS substrates with high enhancements is of profound interest for miniaturised sensing devices. Carbon nanotubes (CNTs) have concurrently been a topic of extensive research; however, their application to plasmonics has only recently begun to gain interest. CNTs can provide low-cost, large-active-area patternable substrates which, coupled with appropriate functionalization, can provide advanced SERS platforms. Herein, advanced methods for generating CNT-based SERS-active detection platforms are discussed. First, a novel electrohydrodynamic (EHD) lithographic technique is introduced for patterning CNT-polymer composites, providing a straightforward, single-step approach for generating high-fidelity, sub-micron-sized nanocomposite structures within which anisotropic CNTs are vertically aligned.
The created structures are readily fine-tuned, an important requirement for optimizing SERS to obtain the highest enhancements, with each of the EHD-CNT individual structural units functioning as an isolated sensor. Further, gold-functionalized vertically aligned CNT forests (VACNTFs) are fabricated as SERS micro-platforms. The VACNTs' diameters and density play an important role in the Raman signal strength, highlighting the importance of structural parameters previously overlooked in designing and fabricating optimized CNT-based SERS nanoprobes. VACNT forests patterned into predesigned pillar structures are further utilized for multiplex detection of bio-analytes. Since CNTs exhibit electrical conductivity and unique adsorption properties, these are further harnessed in the development of novel chemical and bio-sensing platforms.

Keywords: carbon nanotubes (CNTs), EHD patterning, SERS, vertically aligned carbon nanotube forests (VACNTF)

Procedia PDF Downloads 301
1933 Sparse Representation Based Spatiotemporal Fusion Employing Additional Image Pairs to Improve Dictionary Training

Authors: Dacheng Li, Bo Huang, Qinjin Han, Ming Li

Abstract:

Remotely sensed imagery with both high spatial and high temporal resolution, which is hard to acquire with current land-observation satellites, has been considered a key factor for monitoring environmental changes at both global and local scales. On the basis of the limited high-spatial-resolution observations, a challenging line of research called spatiotemporal fusion has been developed for generating high-spatiotemporal-resolution images by employing auxiliary low-spatial-resolution data with high-frequency observations. However, the majority of spatiotemporal fusion approaches suffer from restrictive assumptions, empirical but unstable parameters, low accuracy or inefficient performance. Although the spatiotemporal fusion methodology based on sparse representation theory has advantages in capturing reflectance changes, stability and execution efficiency (especially when overcomplete dictionaries have been pre-trained), the retrieval of a high-accuracy dictionary and its effect on fusion results are still pending issues. In this paper, we introduce additional image pairs (each pair comprising a Landsat Operational Land Imager and a Moderate Resolution Imaging Spectroradiometer acquisition covering part of Baotou, China) only into the coupled dictionary training process based on the K-SVD (K-means Singular Value Decomposition) algorithm, and attempt to improve the fusion results of two existing sparse-representation-based fusion models (utilizing one and two available image pairs, respectively). The results show that additional eligible image pairs lead to a more accurate overcomplete dictionary, which generally indicates a better image representation and in turn contributes to effective fusion performance, provided that the added image pair has seasonal aspects and spatial structure features similar to those of the original pair.
It is, therefore, reasonable to construct a multi-dictionary training pattern for generating a series of high-spatial-resolution images from limited acquisitions.
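The sparse coding step at the heart of such fusion models (the K-SVD dictionary training itself is omitted here) can be sketched as a greedy matching pursuit against a fixed, unit-norm dictionary; the toy dictionary and signal below are assumptions for illustration only:

```python
def matching_pursuit(signal, dictionary, n_atoms=2):
    """Greedy sparse coding: repeatedly pick the atom most correlated
    with the residual and subtract its contribution."""
    residual = list(signal)
    code = {}
    for _ in range(n_atoms):
        best, best_c = None, 0.0
        for j, atom in enumerate(dictionary):
            c = sum(r * a for r, a in zip(residual, atom))  # atoms assumed unit-norm
            if abs(c) > abs(best_c):
                best, best_c = j, c
        if best is None:
            break
        code[best] = code.get(best, 0.0) + best_c
        residual = [r - best_c * a for r, a in zip(residual, dictionary[best])]
    return code, residual

# toy: a 3-sample signal coded over a trivial orthonormal dictionary
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
code, residual = matching_pursuit([3.0, 0.0, 4.0], atoms, n_atoms=2)
```

With two atoms the toy signal is represented exactly; in the fusion setting the same sparse codes are shared between the coupled low- and high-resolution dictionaries.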

Keywords: spatiotemporal fusion, sparse representation, K-SVD algorithm, dictionary learning

Procedia PDF Downloads 232
1932 Policy Recommendations for Reducing CO2 Emissions in Kenya's Electricity Generation, 2015-2030

Authors: Paul Kipchumba

Abstract:

Kenya is an East African country lying on the Equator. It had a population of 46 million in 2015 with an annual growth rate of 2.7%, implying a population of at least 65 million in 2030. Kenya's GDP in 2015 was about 63 billion USD, with a per capita GDP of about 1400 USD. The rural population is 74%, whereas the urban population is 26%. Kenya grapples not only with access to energy but also with energy security. There is a direct correlation between economic growth, population growth, and energy consumption. Kenya's energy mix is at least 74.5% renewable, with hydro power and geothermal forming the bulk of it; 68% of energy comes from wood fuel, 22% from petroleum, 9% from electricity, and 1% from coal and other sources. Wood fuel is used by the majority of the rural and poor urban population, while electricity is mostly used for lighting. As of March 2015, Kenya had an installed electricity capacity of 2295 MW, corresponding to a per capita installed capacity of 0.0499 kW. The overall retail cost of electricity in 2015 was about 0.199 USD/kWh (KES 19.85/kWh) for installed capacity over 10 MW. The actual demand for electricity in 2015 was 3400 MW, and the projected demand in 2030 is 18000 MW. Kenya is working on Vision 2030, which aims at making it a prosperous middle-income economy and targets 23 GW of generated electricity. However, both cost and non-cost factors affect the generation and consumption of electricity in Kenya, and the country currently prioritizes economic growth over CO2 emissions. The cost of carbon emissions is likely to be borne later, through future emission costs and penalties imposed on local generating companies for disregarding international law on CO2 emissions and climate change. The study methodology was a simulated application of a carbon tax on all carbon-emitting sources of electricity generation. A tax of only USD 30/tCO2 on all emitting sources would suffice to make solar the sole source of electricity generation in Kenya.
The country has the best, evenly distributed global horizontal irradiation. The solar potential, after accounting for technology efficiencies such as 14-16% for solar PV and 15-22% for solar thermal, is 143.94 GW. Therefore, the paper recommends the adoption of solar power for generating all electricity in Kenya in order to attain zero-carbon electricity generation in the country.
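The derived figures above follow from simple arithmetic on the stated inputs; a sketch recomputing the per capita capacity and the 2030 population projection (the demand and tax figures are quoted as given):

```python
installed_mw_2015 = 2295
population_2015 = 46_000_000
growth_rate = 0.027            # 2.7% annual population growth

# per capita installed capacity, in kW
per_capita_kw = installed_mw_2015 * 1000 / population_2015      # ~0.0499 kW

# projected population in 2030 under constant growth (15 years of compounding)
population_2030 = population_2015 * (1 + growth_rate) ** 15     # ~68.6 million
```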

Keywords: CO2 emissions, cost factors, electricity generation, non-cost factors

Procedia PDF Downloads 334
1931 Development of Automated Quality Management System for the Management of Heat Networks

Authors: Nigina Toktasynova, Sholpan Sagyndykova, Zhanat Kenzhebayeva, Maksat Kalimoldayev, Mariya Ishimova, Irbulat Utepbergenov

Abstract:

Any business needs stable operation and continuous improvement; therefore, it is necessary to interact constantly with the environment, to analyze the work of the enterprise from the perspective of employees, executives and consumers, and to correct any inconsistencies in individual processes and in their aggregate. In the case of heat supply organizations, in addition to suppliers, local legislation must be considered, as it is often the main regulator of service pricing. Here, the process approach used to build a functional organizational structure in these types of businesses in Kazakhstan is a challenge not only in implementation, but also in the analysis of employees' salaries. To solve these problems, we investigated the management system of a heating enterprise, including strategic planning based on the balanced scorecard (BSC), quality management in accordance with the Quality Management System (QMS) standard ISO 9001, and analysis of the system based on expert judgment using fuzzy inference. To carry out this work, we used the theory of fuzzy sets, the QMS in accordance with ISO 9001, the BSC according to the method of Kaplan and Norton, business-process modeling in the IDEF0 notation, Matlab simulation tools and LabVIEW graphical programming. The results of the work are as follows: we determined possibilities for improving the management of a heat-supply plant based on the QMS; after justification and adaptation, a software tool was used to automate a series of functions for management, for the reduction of resources and for keeping the system up to date; and an application for the analysis of the QMS based on fuzzy inference was created, with a novel organization of communication between the software and the application, enabling the analysis of relevant enterprise management system data.

Keywords: balanced scorecard, heat supply, quality management system, the theory of fuzzy sets

Procedia PDF Downloads 341
1930 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling

Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow

Abstract:

Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring other data analytic challenges. One of these is the increased occurrence of missingness with increased study length, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets, and pooling the estimation results across imputed data sets to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package, Dynamic Modeling in R (dynr). The package dynr provides a suite of fast and accessible functions for estimating and visualizing the results from fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr and the MI procedures available from the R package, Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates in a user-specified dynamic systems model via MI, with convergence diagnostic check. We utilized dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals’ ambulatory physiological measures, and self-report affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When we determined the number of iterations based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.
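The pooling step that MI performs across imputed data sets follows Rubin's rules; since dynr.mi() itself is an R function, the following is a language-agnostic illustration of the formula in Python, not the package's code:

```python
import statistics

def pool_rubin(estimates, variances):
    """Pool a parameter across m imputed data sets (Rubin's rules):
    pooled estimate = mean of the m estimates;
    total variance  = within-imputation variance + (1 + 1/m) * between-imputation variance."""
    m = len(estimates)
    qbar = sum(estimates) / m                  # pooled point estimate
    ubar = sum(variances) / m                  # within-imputation variance
    b = statistics.variance(estimates)         # between-imputation variance
    return qbar, ubar + (1 + 1 / m) * b

# hypothetical estimates of one coefficient from m = 3 imputed data sets
q, total_var = pool_rubin([1.0, 1.2, 0.8], [0.04, 0.04, 0.04])
```

The between-imputation term is what widens the pooled standard error relative to listwise deletion, reflecting the extra uncertainty due to missingness.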

Keywords: dynamic modeling, missing data, mobility, multiple imputation

Procedia PDF Downloads 142