Search results for: multivariate linear regression
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6297

117 Monitoring the Production of Large Composite Structures Using Dielectric Tool Embedded Capacitors

Authors: Galatee Levadoux, Trevor Benson, Chris Worrall

Abstract:

With the rise of public awareness on climate change comes an increasing demand for renewable sources of energy. As a result, the wind power sector is striving to manufacture longer, more efficient and reliable wind turbine blades. Currently, one of the leading causes of blade failure in service is improper cure of the resin during manufacture. The infusion process creating the main part of the composite blade structure remains a critical step that is yet to be monitored in real time. This stage consists of a viscous resin being drawn into a mould under vacuum, then undergoing a curing reaction until solidification. Successful infusion assumes the resin fills all the voids and cures completely. Given that the electrical properties of the resin change significantly during its solidification, both the filling of the mould and the curing reaction can be followed using dielectrometry. However, industrially available dielectric sensors are currently too small to monitor the entire surface of a wind turbine blade. The aim of the present research project is to scale up the dielectric sensor technology and develop a device able to monitor the manufacturing process of large composite structures, assessing the conformity of the blade before it even comes out of the mould. An array of flat copper wires acting as electrodes is embedded in a polymer matrix fixed in an infusion mould. A multi-frequency analysis from 1 Hz to 10 kHz is performed during the filling of the mould with an epoxy resin and the subsequent hardening of the resin. By following the variations of the complex admittance Y*, the filling of the mould and the curing process are monitored. Results are compared to numerical simulations of the sensor in order to validate a virtual cure-monitoring system. The results obtained by drawing glycerol on top of the copper sensor displayed a linear relation between the wetted length of the sensor and the complex admittance measured. Drawing epoxy resin on top of the sensor and letting it cure at room temperature for 24 hours provided characteristic curves similar to those obtained when conventional interdigitated sensors are used to follow the same reaction. The response from the developed sensor has shown the different stages of the polymerization of the resin, validating the geometry of the prototype. The model created and analysed using COMSOL has shown that the dielectric cure process can be simulated, so long as sufficiently accurate time- and temperature-dependent material properties can be determined. The model can be used to help design larger sensors suitable for use with full-sized blades. The preliminary results obtained with the sensor prototype indicate that the infusion and curing process of an epoxy resin can be followed with the chosen configuration on a scale of several decimeters. Further work is to be devoted to studying the influence of the sensor geometry and the infusion parameters on the results obtained. Ultimately, the aim is to develop a larger scale sensor able to monitor the flow and cure of large composite panels industrially.
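
The linear relation reported for the glycerol trials suggests a simple calibration step. The sketch below is illustrative only: it assumes hypothetical measurement arrays (wetted_mm for the wetted length, y_mag for the admittance magnitude at a single fixed frequency) and fits the reported line, then inverts it to track filling.

```python
import numpy as np

# wetted_mm: wetted length of the sensor; y_mag: |Y*| at a fixed frequency
# (both are assumed placeholder arrays, not data from the study)
slope, intercept = np.polyfit(wetted_mm, y_mag, 1)
fill_estimate = (y_mag - intercept) / slope  # invert the fit to track mould filling
```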

Keywords: composite manufacture, dielectrometry, epoxy, resin infusion, wind turbine blades

Procedia PDF Downloads 166
116 Academic Achievement in Argentinean College Students: Major Findings in Psychological Assessment

Authors: F. Uriel, M. M. Fernandez Liporace

Abstract:

In the last decade, academic achievement in higher education has become a topic on the agenda in Argentina, given the high rates of adjustment problems, academic failure and dropout, and the low graduation rates in the context of massive classes and traditional teaching methods. Psychological variables, such as perceived social support, academic motivation, and learning styles and strategies, have much to offer, since their measurement by tests allows a proper diagnosis of their influence on academic achievement. Framed within a larger research program, several studies analysed multiple samples, totaling 5135 students attending Argentinean public universities. The first goal was aimed at the identification of statistically significant differences in the psychological variables -perceived social support, learning styles, learning strategies, and academic motivation- by age, gender, and degree of academic advance (freshmen versus sophomores). Thus, an inferential group differences study for each psychological dependent variable was developed by means of Student's t-tests, given the features of the data distribution. The second goal, aimed at examining associations between the four psychological variables on the one hand, and academic achievement on the other, was addressed by correlational studies, calculating Pearson's coefficients and employing grades as the quantitative indicator of academic achievement. The positive and significant results that were obtained led to the formulation of different predictive models of academic achievement, which had to be tested in terms of fit and predictive power. These models took the four psychological variables mentioned above as predictors, using regression equations, examining predictors individually, in groups of two, and together, analysing indirect effects as well, and adding the degree of academic advance and gender, which had shown their importance in the first goal's findings. The most relevant results were: first, gender showed no influence on any dependent variable. Second, only good achievers perceived high social support from teachers, and male students were prone to perceive less social support. Third, freshmen exhibited a pragmatic learning style, preferring unstructured environments, the use of examples and simultaneous-visual processing in learning, whereas sophomores manifested an assimilative learning style, choosing sequential and analytic processing modes; despite these preferences, freshmen have to deal with abstract contents and sophomores with practical learning situations, due to the study programs in force. Fourth, no differences in academic motivation were found between freshmen and sophomores; however, the latter employ a higher number of more efficient learning strategies. Fifth, freshmen low achievers lack intrinsic motivation. Sixth, model testing showed that social support, learning styles and academic motivation influence learning strategies, which affect academic achievement in freshmen, particularly males; only learning styles influence achievement in sophomores of both genders, with direct effects. These findings led to the conclusion that educational psychologists, education specialists, teachers, and universities must plan urgent and major changes. These must be applied in renewed and better study programs, syllabi and classes, as well as tutoring and training systems.
Such developments should be targeted to the support and empowerment of students in their academic pathways, and therefore to the upgrade of learning quality, especially in the case of freshmen, male freshmen, and low achievers.
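
To make the regression-equation step concrete, a minimal sketch with statsmodels is given below; all column names are hypothetical stand-ins for the study's variables, and the exact model specification (predictors entered individually, in pairs, and together) is not reproduced here.

```python
import statsmodels.formula.api as smf

# Grades regressed on the four psychological predictors plus gender and
# degree of academic advance; df is an assumed pandas DataFrame with
# illustrative column names, not the study's actual dataset.
model = smf.ols("grades ~ social_support + motivation + learning_style"
                " + learning_strategies + C(gender) + C(advance)", data=df).fit()
print(model.summary())
```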

Keywords: academic achievement, academic motivation, coping, learning strategies, learning styles, perceived social support

Procedia PDF Downloads 122
115 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings

Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir

Abstract:

Acute myocardial infarction is a major cause of death in the world; therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain, together with changes in the ST segment and T wave of the ECG, occurs shortly before the start of myocardial infarction. In this study, a technique that detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings is constructed, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). The 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, which is acquired before any catheter insertion, and the occlusion ECG, which is acquired during balloon inflation. By using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events by using ST-T derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters by using the grid-search method and 10-fold cross-validation. The SVMs are designed specifically for each patient by tuning the kernel parameters in order to obtain optimal classification performance. As a result of applying the developed classification technique to real ECG recordings, it is shown that the proposed technique provides highly reliable detection of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based only on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint probability density function (pdf) of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to detect outliers that correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of different threshold values. For different discrimination threshold values and numbers of ECG segments, the probability of detection and probability of false alarm are computed, and the corresponding ROC curves are obtained. The results indicate that an increasing number of ECG segments provides higher performance for the GMM-based classification. Moreover, the comparison between the performance of the SVM- and GMM-based classification showed that the SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
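
As a rough illustration of the two detectors, here is a hedged scikit-learn sketch. X_train, y_train, and X_test are hypothetical placeholders for the ST-T derived feature sets, and the grid values, component count, and threshold sweep are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.mixture import GaussianMixture

# SVM with linear and RBF kernels; hyperparameters tuned by grid search
# with 10-fold cross-validation, as described in the abstract
param_grid = [{"kernel": ["linear"], "C": [0.1, 1, 10]},
              {"kernel": ["rbf"], "C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]}]
svm = GridSearchCV(SVC(), param_grid, cv=10).fit(X_train, y_train)

# GMM of the discriminative features; a Neyman-Pearson style detector
# compares the log-likelihood of each segment with a sweep of thresholds
# to trace the ROC curve (3 components is an illustrative assumption)
gmm = GaussianMixture(n_components=3, random_state=0).fit(X_train[y_train == 1])
scores = gmm.score_samples(X_test)                       # per-segment log-likelihood
thresholds = np.linspace(scores.min(), scores.max(), 50)  # range of thresholds
detections = [(thr, scores < thr) for thr in thresholds]  # candidate outlier flags
```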

Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine

Procedia PDF Downloads 162
114 Visco-Hyperelastic Finite Element Analysis for Diagnosis of Knee Joint Injury Caused by Meniscal Tearing

Authors: Eiji Nakamachi, Tsuyoshi Eguchi, Sayo Yamamoto, Yusuke Morita, H. Sakamoto

Abstract:

In this study, we aim to reveal the relationship between meniscal tearing and articular cartilage injury of the knee joint by using the dynamic explicit finite element (FE) method. Meniscal injuries reduce the functional ability of the meniscus and consequently increase the load on the articular cartilage of the knee joint. In order to prevent the induction of osteoarthritis (OA) caused by meniscal injuries, many medical treatment techniques, such as artificial meniscus replacement and meniscal regeneration, have been developed. However, it is reported that these treatments are not comprehensive solutions. In order to reveal the fundamental mechanism of OA induction, the mechanical characterization of the meniscus in the normal and injured states is carried out by using FE analyses. First, an FE model of the human knee joint in the normal state - 'intact' - was constructed by using magnetic resonance (MR) tomography images and the image construction code Materialise Mimics. Next, two types of meniscal injury models with radial tears of the medial and lateral menisci were constructed. In the FE analyses, a linear elastic constitutive law was adopted for the femur and tibia bones, a visco-hyperelastic constitutive law for the articular cartilage, and a visco-anisotropic hyperelastic constitutive law for the meniscus. The material properties of the articular cartilage and meniscus were identified using the stress-strain curves obtained from our compressive and tensile tests. The numerical results under the normal walking condition revealed how and where the maximum compressive stress occurred on the articular cartilage. The maximum compressive stress and its occurrence point varied between the intact model and the two meniscal tear models. These compressive stress values can be used to establish the threshold value that causes the pathological change, for use in diagnosis. In this study, FE analyses of the knee joint were carried out to reveal the influence of meniscal injuries on the cartilage injury. The following conclusions are obtained. 1. A 3D FE model, which consists of the femur, tibia, articular cartilage and meniscus, was constructed based on MR images of the human knee joint; the image processing code Materialise Mimics was used, and the model was meshed with tetrahedral FE elements. 2. A visco-anisotropic hyperelastic constitutive equation was formulated by adopting the generalized Kelvin model. The material properties of the meniscus and articular cartilage were determined by curve fitting with the experimental results. 3. Stresses on the articular cartilage and menisci were obtained for the intact case and the two radial tears of the medial and lateral menisci. Compared with the intact knee joint, the two tear models show almost the same stress values as each other, and higher values than the intact one. It was shown that both meniscal tears induce stress localization in both the medial and lateral regions. It is confirmed that our newly developed FE analysis code has the potential to become a new diagnostic system to evaluate the effect of meniscal damage on the articular cartilage through mechanical functional assessment.
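
As an illustration of the curve-fitting step in conclusion 2, the following Python sketch fits a generalized-Kelvin-style creep compliance (an instantaneous term plus two Kelvin-Voigt elements in series) to measured data. The arrays t_data and compliance_data, the number of elements, and the initial guesses are all assumptions; the paper's actual formulation and parameters are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def kelvin_creep(t, j0, j1, tau1, j2, tau2):
    # instantaneous compliance plus two Kelvin-Voigt elements in series
    return j0 + j1 * (1 - np.exp(-t / tau1)) + j2 * (1 - np.exp(-t / tau2))

p0 = [1e-3, 5e-4, 1.0, 5e-4, 50.0]  # assumed initial guesses
params, _ = curve_fit(kelvin_creep, t_data, compliance_data, p0=p0, maxfev=20000)
```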

Keywords: finite element analysis, hyperelastic constitutive law, knee joint injury, meniscal tear, stress concentration

Procedia PDF Downloads 246
113 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely developed within the deep learning framework. Deriving visual interpretation from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network, particularly with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to a training set that generally has to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of consistently small convolution kernels throughout a deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviations of the random filters. This allows the use of a large number of random filters at the cost of one scalar unknown per filter. The computational cost in the back-propagation procedure does not increase with larger filter sizes, even though additional computational cost is required for the convolutions in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which well-known CNN architectures are quantitatively compared with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and by NRF-2014R1A2A1A11051941 and NRF-2017R1A2B4006023.
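
A sketch of what a random-kernel layer could look like in PyTorch is given below. The filter counts, kernel sizes, and initialization are illustrative assumptions, not the authors' architecture; the key idea from the abstract is that the filters stay fixed at their random values and only the per-filter scalars receive gradients.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomKernelConv(nn.Module):
    """One layer of fixed random kernels of several sizes, each paired
    with a single trainable scalar weight (a hedged sketch, not the
    authors' implementation)."""
    def __init__(self, in_ch, filters_per_size=16, sizes=(3, 7, 11)):
        super().__init__()
        self.kernels = nn.ParameterList()
        self.scales = nn.ParameterList()
        for k in sizes:
            w = torch.randn(filters_per_size, in_ch, k, k) / (k * in_ch ** 0.5)
            self.kernels.append(nn.Parameter(w, requires_grad=False))   # frozen random filters
            self.scales.append(nn.Parameter(torch.ones(filters_per_size)))  # trainable scalars

    def forward(self, x):
        outs = []
        for w, s in zip(self.kernels, self.scales):
            y = F.conv2d(x, w, padding=w.shape[-1] // 2)  # 'same' padding per kernel size
            outs.append(y * s.view(1, -1, 1, 1))          # one scalar weight per filter
        return torch.cat(outs, dim=1)
```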

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 289
112 Modeling the Impact of Time Pressure on Activity-Travel Rescheduling Heuristics

Authors: Jingsi Li, Neil S. Ferguson

Abstract:

Time pressure can influence productivity, the quality of decision making, and the efficiency of problem solving. Most of this evidence stems from cognitive research and the psychological literature; in transport-adjacent fields, however, the discussion has been notably scarce. It is conceivable that in many activity-travel contexts time pressure is a potentially important factor, since an excessive amount of decision time may incur the risk of late arrival at the next activity. Activity-travel rescheduling behavior is commonly explained by the costs and benefits of factors such as activity engagements, personal intentions, social requirements, etc. This paper hypothesizes that an additional factor, perceived time pressure, could affect travelers' rescheduling behavior, thus having an impact on travel demand management. Time pressure may arise in different ways and is assumed here to be essentially incurred by travelers planning their schedules without anticipating unforeseen elements, e.g., transport disruption. In addition to a linear-additive utility-maximization model, less computationally demanding non-compensatory heuristic models are considered as an alternative for simulating travelers' responses. The paper contributes to travel behavior modeling research by investigating the following questions: how can time pressure be measured properly in an activity-travel day plan context? How do travelers reschedule their plans to cope with time pressure? How does the importance of the activity affect travelers' rescheduling behavior? What behavioral model can be identified to describe the process of making activity-travel rescheduling decisions? How do the identified coping strategies affect the transport network? In this paper, a Mixed Heuristic Model (MHM) is employed to identify the presence of different choice heuristics through a latent class approach. The data on travelers' activity-travel rescheduling behavior are collected via a web-based interactive survey in which a fictitious scenario comprising multiple uncertain events on the activity or travel side is created. The experiments are conducted in order to gain a realistic picture of activity-travel rescheduling, considering the factor of time pressure. The identified behavioral models are then integrated into a multi-agent transport simulation model to investigate the effect of the rescheduling strategies on the transport network. The results show that an increased proportion of travelers use simpler, non-compensatory choice strategies instead of compensatory methods to cope with time pressure. Specifically, satisficing - one of the heuristic decision-making strategies - is commonly adopted, since travelers tend to abandon the less important activities and keep the important ones. Furthermore, the importance of the activity is found to increase the weight of negative information when making trip-related decisions, especially route choices. When the identified non-compensatory decision-making heuristic models are incorporated into the agent-based transport model, the simulation results imply that neglecting the effect of perceived time pressure may result in inaccurate forecasts of choice probability and overestimate the responsiveness to policy changes.

Keywords: activity-travel rescheduling, decision making under uncertainty, mixed heuristic model, perceived time pressure, travel demand management

Procedia PDF Downloads 112
111 Climate Change Impact on Mortality from Cardiovascular Diseases: Case Study of Bucharest, Romania

Authors: Zenaida Chitu, Roxana Bojariu, Liliana Velea, Roxana Burcea

Abstract:

A number of studies show that extreme air temperature affects mortality related to cardiovascular diseases, particularly among elderly people. In Romania, the summer thermal discomfort expressed by the Universal Thermal Climate Index (UTCI) is highest in the southern part of the country, where Bucharest, the largest Romanian urban agglomeration, is also located. Urban characteristics such as high building density and reduced green areas enhance the increase of air temperature during summer. In Bucharest, as in many other large cities, the urban heat island effect is present and causes an increase in air temperature compared to the surrounding areas. This increase is particularly important during summer heat wave periods. In this context, we performed a temperature-mortality analysis based on daily deaths related to cardiovascular diseases recorded between 2010 and 2019 in Bucharest. The temperature-mortality relationship was modeled by applying a distributed lag non-linear model (DLNM) that includes a bi-dimensional cross-basis function and flexible natural cubic spline functions with three internal knots at the 10th, 75th and 90th percentiles of the temperature distribution, for modelling both the exposure-response and lagged-response dimensions. Firstly, this analysis was applied to the present climate. Extrapolation of the exposure-response associations beyond the observed data allowed us to estimate future effects on mortality due to temperature changes under climate change scenarios and specific assumptions. We used future projections of air temperature from five numerical experiments with regional climate models included in the EURO-CORDEX initiative, under the relatively moderate (RCP 4.5) and pessimistic (RCP 8.5) concentration scenarios. The results of this analysis show, for RCP 8.5, an ensemble-averaged increase of 6.1% in the heat-attributable mortality fraction in the future in comparison with the present climate (2090-2100 vs. 2010-2019), corresponding to an increase of 640 deaths/year, while the mortality fraction due to cold conditions will be reduced by 2.76%, corresponding to a decrease of 288 deaths/year. When the mortality data are stratified according to age, the ensemble-averaged increase of the heat-attributable mortality fraction for elderly people (> 75 years) in the future is even higher (6.5%). These findings reveal the necessity of carefully planning urban development in Bucharest to face the public health challenges raised by climate change. Paper Details: This work is financed by the project URCLIM, which is part of ERA4CS, an ERA-NET initiated by JPI Climate, and funded by the Ministry of Environment, Romania, with co-funding by the European Union (Grant 690462). A part of this work performed by one of the authors has received funding from the European Union's Horizon 2020 research and innovation programme through the project EXHAUSTION under grant agreement No 820655.
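
For readers unfamiliar with the DLNM machinery, the following is a minimal, hand-rolled Python sketch of a cross-basis regression (the reference implementation is the R package dlnm, which the description above matches). The daily series temp and deaths, the 21-day maximum lag, and the lag-spline knots are illustrative assumptions; only the exposure knots at the 10th, 75th and 90th percentiles come from the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from patsy import dmatrix

def ns(x, knots):
    # natural cubic regression spline basis without intercept
    return np.asarray(dmatrix("cr(x, knots=knots) - 1", {"x": x, "knots": knots}))

max_lag = 21  # assumed maximum lag in days
lagged = pd.concat({l: temp.shift(l) for l in range(max_lag + 1)}, axis=1).dropna()
y = deaths.loc[lagged.index]

var_knots = np.percentile(temp, [10, 75, 90])  # knots stated in the abstract
lag_basis = ns(np.arange(max_lag + 1), np.array([1.0, 3.0, 10.0]))  # assumed lag knots
var_bases = [ns(lagged.iloc[:, l].to_numpy(), var_knots) for l in range(max_lag + 1)]

# cross-basis: each lag-spline column weights the exposure basis, summed over lags
cross = np.hstack([sum(lag_basis[l, j] * var_bases[l] for l in range(max_lag + 1))
                   for j in range(lag_basis.shape[1])])

fit = sm.GLM(y, sm.add_constant(cross), family=sm.families.Poisson()).fit()
```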

Keywords: cardiovascular diseases, climate change, extreme air temperature, mortality

Procedia PDF Downloads 128
110 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction

Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal

Abstract:

Traditionally, monsoon forecasts have encountered many difficulties that stem from numerous issues such as the lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each of which carries a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, time, and across different variables from different models. This is the basic concept behind the multi-model superensemble, which comprises a training and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights from a least squares minimization via a simple multiple regression. These weights are then used in the forecast phase. The superensemble forecasts carry higher skill than the simple ensemble mean, the bias-corrected ensemble mean, and the best of the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing the direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, mean sea level pressure, etc., in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability. The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Centers for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada) and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), which is one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models among the participating members are selected at each grid point and for each forecast step in the training period. A multi-model superensemble based on training under similar conditions is also discussed in the present study; it is based on the assumption that training with similar types of conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods from the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been combined with the above-mentioned approaches. The comparison of these schemes against observations verifies that the newly developed approaches provide a more unified and skillful prediction of the summer monsoon (June to September) rainfall compared to the conventional multi-model approach and the member models.
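
The training-phase regression described above amounts to an ordinary least squares fit at each grid point, as the sketch below shows. Here train_fcst (days by member models) and train_obs are assumed placeholders for one grid point's training data; working with anomalies about the training-period mean follows the standard superensemble formulation rather than anything stated in the abstract.

```python
import numpy as np

# Least squares training of superensemble weights: member forecasts are
# regressed on observations over the training period
A = np.column_stack([np.ones(len(train_fcst)), train_fcst])  # intercept + members
coef, *_ = np.linalg.lstsq(A, train_obs, rcond=None)

def superensemble(member_forecasts):
    """Weighted superensemble forecast for one day's member values (n_models,)."""
    return coef[0] + member_forecasts @ coef[1:]
```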

Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction

Procedia PDF Downloads 139
109 Piezotronic Effect on Electrical Characteristics of Zinc Oxide Varistors

Authors: Nadine Raidl, Benjamin Kaufmann, Michael Hofstätter, Peter Supancic

Abstract:

If polycrystalline ZnO is properly doped and sintered under very specific conditions, it shows unique electrical properties that are indispensable for today's electronics industry, where it is used as the number one overvoltage protection material. Below a critical voltage, the polycrystalline bulk exhibits high electrical resistance but suddenly becomes up to twelve orders of magnitude more conductive if this voltage limit is exceeded (i.e., the varistor effect). It is known that these peerless properties have their origin in the grain boundaries of the material. Electric charge is accumulated at the boundaries, causing a depletion layer in their vicinity and forming potential barriers (so-called Double Schottky Barriers, or DSBs) which are responsible for the highly non-linear conductivity. Since ZnO is a piezoelectric material, mechanical stresses induce polarization charges that modify the DSB heights and, as a result, the global electrical characteristics (i.e., the piezotronic effect). In this work, a finite element method was used to simulate the stresses emerging on individual grains in the bulk. Besides, experimental efforts were made to verify a coherent model that could explain this influence. Electron backscattering diffraction was used to identify grain orientations. With the help of wet chemical etching, grain polarization was determined. Micro lock-in infrared thermography (MLIRT) was applied to detect current paths through the material, and a micro four-point-probe system (M4PPS) was employed to investigate current-voltage characteristics between single grains. Bulk samples were tested under uniaxial pressure. It was found that the conductivity can increase by up to three orders of magnitude with increasing stress. Through in-situ MLIRT, it could be shown that this effect is caused by the activation of additional current paths in the material. Further, compressive tests were performed on miniaturized samples with grain paths containing only one or two grain boundaries. The tests evinced both an increase in conductivity, as observed for the bulk, and a decrease in conductivity. This phenomenon has been predicted theoretically and can be explained by piezotronically induced surface charges that affect the DSBs at the grain boundaries. Depending on grain orientation and stress direction, DSBs can be raised or lowered. Also, the experiments revealed that the conductivity within a single specimen can increase and decrease, depending on the current direction. This novel finding indicates the existence of asymmetric Double Schottky Barriers, which was furthermore proven by complementary methods. MLIRT studies showed that the intensity of heat generation within individual current paths depends on the direction of the stimulating current. M4PPS was used to study the relationship between the I-V characteristics of single grain boundaries and grain orientation, and revealed asymmetric behavior for very specific orientation configurations. A new model for the Double Schottky Barrier, taking into account this natural asymmetry and explaining the experimental results, will be given.

Keywords: Asymmetric Double Schottky Barrier, piezotronic, varistor, zinc oxide

Procedia PDF Downloads 267
108 An Efficient Process Analysis and Control Method for Tire Mixing Operation

Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park

Abstract:

Since the tire production process is very complicated, company-wide management of it is very difficult, necessitating considerable amounts of capital and labor. Thus, productivity should be enhanced and kept competitive by developing and applying effective production plans. Among the major processes for tire manufacturing, consisting of mixing, component preparation, building and curing, the mixing process is an essential and important step because the main component of the tire, called compound, is formed at this step. Compound, as a rubber synthesis with various characteristics, plays its own role required for a tire as a finished product. Meanwhile, scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP), because various kinds of compounds have their unique orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required for different operations may differ due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one, and this kind of feature, called sequence-dependent setup time (SDST), is a very important issue in traditional scheduling problems such as flexible job shop scheduling problems. However, despite its importance, few research works deal with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle. At each iteration, the coordinates and velocities of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using some small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research work. As a performance measure, we define an error rate which evaluates the difference between the two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes such as building and curing. We can also extend our current work by considering other performance measures such as weighted makespan or processing times affected by aging or learning effects.
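
A generic PSO skeleton for this kind of problem is sketched below. The decode function, turning a real-valued particle (for example, random keys for the compound sequence and machine assignments) into a makespan, stands in for the paper's particle encoding scheme, and the parameter values are typical textbook defaults rather than the authors' settings.

```python
import numpy as np

def pso(decode, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize makespan = decode(particle) over particles in [0, 1]^dim."""
    rng = np.random.default_rng(0)
    x = rng.random((n_particles, dim))          # positions (random keys)
    v = np.zeros_like(x)                        # velocities
    pbest = x.copy()                            # personal best positions
    pbest_val = np.array([decode(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()        # global best position
    for _ in range(iters):                      # stopping condition: fixed iterations
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
        x = x + v                                               # position update
        vals = np.array([decode(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()
```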

Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process

Procedia PDF Downloads 265
107 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire

Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan

Abstract:

Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated design of timber structures considering standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, which is the residual cross-section of uncharred timber reduced additionally by the so-called zero strength layer. In the case of standard fire exposure, Eurocode 5 gives a fixed value of the zero strength layer, i.e. 7 mm, while for non-standard parametric fires no additional comments or recommendations for the zero strength layer are given. Thus, designers often apply the adopted 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam that is exposed to a large number of parametric fire curves. The zero strength layer and charring rates are determined based on numerical simulations performed with a recently developed advanced two-step computational model. The first step comprises a hygro-thermal model which predicts the temperature, moisture and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner's kinematically exact beam model and accounts for the membrane, shear and flexural deformations of the beam. Further, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is assumed, according to Eurocode 5, to be at a fixed temperature of around 300°C. Based on the performed study and observations, improved values of charring rates and a new thickness of the zero strength layer in the case of parametric fires are determined. Thus, the reduced cross-section method is substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and experimental research in the future.
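
For context, a minimal sketch of the heating phase of the EN 1991-1-2 parametric fire curve, the kind of input curve a study like this would feed to the hygro-thermal model, is given below; the default opening factor and thermal absorptivity values are illustrative, not taken from the study.

```python
import numpy as np

def parametric_fire(t_hours, opening_factor=0.04, b=1160.0):
    """Gas temperature [degC] in the heating phase of the EN 1991-1-2
    parametric fire; opening_factor O in m^0.5, b in J/(m^2 s^0.5 K)."""
    gamma = (opening_factor / b) ** 2 / (0.04 / 1160.0) ** 2
    t_star = gamma * np.asarray(t_hours)  # expanded time t* = t * Gamma
    return 20 + 1325 * (1 - 0.324 * np.exp(-0.2 * t_star)
                          - 0.204 * np.exp(-1.7 * t_star)
                          - 0.472 * np.exp(-19.0 * t_star))
```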

Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer

Procedia PDF Downloads 168
106 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes

Authors: Igor A. Krichtafovitch

Abstract:

The evolutionary processes are not linear. Long periods of quiet and slow development turn into rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the 3 previously existing phyla. Contrary to common belief, natural selection, or survival of the fittest, cannot account for the dominant evolutionary vector, which is the steady and accelerated advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory solution for these phenomena. The proposed hypothesis offers a logical and plausible explanation of the evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but the accelerated growth of the computational complexity of living organisms. The following postulates summarize the proposed hypothesis: biological evolution, as a natural life origin and development, is a reality. Evolution is a coordinated and controlled process. One of evolution's main development vectors is the growing computational complexity of living organisms and the biosphere's intelligence. The intelligent matter which conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. Information acts like software stored in and controlled by the biosphere. Random mutations trigger this software, as stipulated by Darwinian evolution theories, and it is further stimulated by the growing demand for the Biosphere's global memory storage and computational complexity. A greater memory volume requires a greater number of more intellectually advanced organisms for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with accelerated evolutionary dynamics. New species emerge when two conditions are met: a) crucial environmental changes occur and/or the global memory storage volume reaches its limit, and b) the biosphere's computational complexity reaches a critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life creation and evolution. The hypothesis logically resolves many puzzling problems of the current state of evolution theory, such as speciation, as a result of GM purposeful design; the evolutionary development vector, as a need for growing global intelligence; punctuated equilibrium, happening when the two above conditions a) and b) are met; the Cambrian explosion; and mass extinctions, happening when more intelligent species should replace outdated creatures.

Keywords: supercomputer, biological evolution, Darwinism, speciation

Procedia PDF Downloads 164
105 Wind Tunnel Tests on Ground-Mounted and Roof-Mounted Photovoltaic Array Systems

Authors: Chao-Yang Huang, Rwey-Hua Cherng, Chung-Lin Fu, Yuan-Lung Lo

Abstract:

Solar energy is one of the renewable choices for reducing the CO2 emissions produced by conventional power plants in modern society. As an island frequently visited by strong typhoons and earthquakes, Taiwan urgently needs to revise its local regulations to strengthen the safety design of photovoltaic systems. Currently, the Taiwanese code for wind-resistant design of structures does not give a clear explanation for photovoltaic systems, especially when the systems are arranged in an arrayed format. Furthermore, when the arrayed photovoltaic system is mounted on the rooftop, the approaching flow is significantly altered by the building, leading to different pressure patterns in different areas of the photovoltaic system. In this study, an L-shaped arrayed photovoltaic system is mounted on the ground of the wind tunnel and then mounted on the building rooftop. The system consists of 60 panel models. Each panel model is equivalent to a full size of 3.0 m in depth and 10.0 m in length. Six pressure taps are installed on the upper surface of each panel model and another six on the bottom surface to measure the net pressures. The wind attack angle is varied from 0° to 360° in 10° intervals to capture the worst-case wind direction. The sampling rate of the pressure scanning system is set high enough to precisely estimate the peak pressure, and at least 20 samples are recorded for good ensemble average stability. Each sample is equivalent to a 10-minute record in full scale. All the scale factors, including time scale, length scale, and velocity scale, are properly verified by similarity rules in the low-wind-speed wind tunnel environment. The purpose of the L-shaped arrayed system is to understand the pressure characteristics at the corner area. Extreme value analysis is applied to obtain the design pressure coefficient for each net pressure. The commonly utilized Cook-and-Mayne coefficient, 78%, is set as the target non-exceedance probability for design pressure coefficients under the Gumbel distribution. The best linear unbiased estimator (BLUE) method is utilized for the Gumbel parameter identification, and careful time moving-average processing is also applied to the data. Results show that when the arrayed photovoltaic system is mounted on the ground, the first row of panels experiences stronger positive pressure than when it is mounted on the rooftop. Due to the flow separation occurring at the building edge, the first row of panels on the rooftop is mostly under negative pressure; the last row, on the other hand, shows positive pressures because of flow reattachment. Different areas also show different pressure patterns, which corresponds well to the area division for design values in ASCE 7-16. Several minor observations are made from the parametric studies, such as the rooftop edge effect, parapet effect, building aspect effect, row interval effect, and so on. General comments are then made for the proposal of a regulation revision in the Taiwanese code.
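
The extreme-value step can be sketched as follows; scipy's maximum-likelihood Gumbel fit is used here in place of the BLUE estimator named in the abstract, and peaks stands for the 20 or more recorded peak pressure coefficients at one tap.

```python
import numpy as np
from scipy import stats

# peaks: assumed 1-D array of >= 20 peak pressure coefficients for one tap
loc, scale = stats.gumbel_r.fit(peaks)

# design coefficient at the 78% non-exceedance probability (Cook-and-Mayne)
cp_design = stats.gumbel_r.ppf(0.78, loc=loc, scale=scale)
# closed form of the same Gumbel quantile: loc - scale * np.log(-np.log(0.78))
```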

Keywords: aerodynamic force coefficient, ground-mounted, roof-mounted, wind tunnel test, photovoltaic

Procedia PDF Downloads 138
104 Strategies for Urban-Architectural Design for the Sustainable Recovery of the Huayla Estuary in Puerto Bolivar, Machala-Ecuador

Authors: Soledad Coronel Poma, Lorena Alvarado Rodriguez

Abstract:

The purpose of this project is to design public space through urban-architectural strategies that contribute to the sustainable recovery of the Huayla estuary and the revival of tourism in the area. The design considers sustainable and architectural ideas used in similar cases, along with national and international regulations for saving shorelines in danger. To understand the situation of this location: Puerto Bolivar is the main port of the Province of El Oro and of the south of the country, through which 90,000 national and foreign tourists pass all year round. For that reason, a physical-urban, social, and environmental analysis of the area was carried out through surveys and conversations with the community. This analysis showed that around 70% of people feel unsatisfied and concerned about the estuary and its surroundings. Crime, absence of green areas, poor conservation of shorelines, lack of tourists, poor commercial infrastructure, and the spread of informal commerce are the main issues to be solved. As an intervention project whose main goal is that residents and tourists have contact with native nature and enjoy local activities, three main strategies are proposed to recover the estuary and its surroundings: mobility, ecology, and urban-architectural. First of all, the design of this public space is based on turning the estuary location into a linear promenade that could be seen as a tourist corridor, which would help to reduce pollution, increase green spaces and improve tourism. Another strategy aims to improve the economy of the community through local activities like fishing and sailing and the commerce of fresh seafood, both as raw products and in restaurants. Furthermore, in support of the environmental approach, some houses are rebuilt as sustainable houses using local materials and rearranged into blocks closer to the commercial area. Finally, the planning incorporates many plants, such as palms, samán trees, and mangroves, around the area to encourage people to get in touch with nature. The results of designing this space showed an increase in the green area per inhabitant index, which went from 1.69 m²/inhabitant to 10.48 m²/inhabitant, with 12,096 m² of green corridors and the incorporation of 5,000 m² of mangroves at the shoreline. Additionally, living zones also increased with the creation of green areas that take advantage of the existing nature and implement restaurants and recreational spaces. Moreover, the relocation of houses and buildings helped to free the estuary's shoreline, so people are now in more comfortable places closer to their workplaces. Finally, dock spaces are increased, reaching the capacity of the boats and canoes and helping to organize the area in the estuary. To sum up, this project seeks to improve the estuary environment, its shoreline and surroundings, including the vegetation, infrastructure and people with their local activities, achieving a better quality of life, attraction of tourism, reduction of pollution and, finally, a fully recovered estuary as a natural ecosystem.

Keywords: recovery, public space, estuary, sustainable

Procedia PDF Downloads 147
103 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats face the near-intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user to help in learning how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in hopes of abating poaching. This research develops a classification model using machine learning algorithms to aid in forecasting future attacks that is both easy to train and performs well when compared to other models. In this research, we demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of adopted prediction models (logistic regression, support vector machines, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by training on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research groups at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching. This research introduces ensemble methods (random forests and stochastic gradient boosting) and applies them to real-world poaching data gathered from Ugandan rain forest park rangers. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable where a large number of observations are missing. Third, we provide an alternate approach to predict the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using stochastic gradient boosting to predict observations of non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire seasons, boosting techniques produce a mean area under the curve increase of approximately 3% relative to previous prediction schedules based on entire seasons.
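
A hedged scikit-learn sketch of the imputation-plus-boosting pipeline is shown below. IterativeImputer with a random-forest estimator stands in for the random forest multiple imputation named above (scikit-learn has no predictive mean matching), and X_train, y_train, X_test, and the hyperparameters are placeholders rather than the paper's configuration.

```python
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor, GradientBoostingClassifier

# Impute the training features (e.g., four years with missing observations)
imputer = IterativeImputer(estimator=RandomForestRegressor(n_estimators=100),
                           max_iter=10, random_state=0)
X_train_imp = imputer.fit_transform(X_train)

# subsample < 1.0 is what makes the gradient boosting *stochastic*
model = GradientBoostingClassifier(n_estimators=500, learning_rate=0.05,
                                   subsample=0.8).fit(X_train_imp, y_train)
poaching_prob = model.predict_proba(X_test)[:, 1]  # probability of observing poaching
```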

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 292
102 Project Management and International Development: Competencies for International Assignment

Authors: M. P. Leroux, C. Coulombe

Abstract:

Projects are popular vehicles through which international aid is delivered in developing countries. To achieve their objectives, many northern organizations develop projects with local partner organizations in developing countries through technical assistance projects. International aid and international development projects have long been criticized for poor results, although billions are spent every year. Little empirical research in the field of project management has focused on knowledge transfer in the international development context. This paper focuses particularly on the personal dimensions of international assignees participating in projects with local team members in the host country. We propose to explore possible links with a human resource management perspective in order to shed light on the under-researched problem of knowledge transfer in development cooperation projects. Since the process leading to capacity building is complex, multidimensional, and far from linear, we propose to assess whether traditional research on expatriates in multinational corporations pertains to the field of project management in developing countries. The following question is addressed: in the context of international development project cooperation, on what personal determinants should the selection process focus when looking to fill a technical assistance position in a developing country? To answer that question, we first reviewed the literature on expatriates in the context of inter-organizational knowledge transfer. Second, we proposed a theoretical framework combining the perspectives of development studies and management to explore whether parallels can be drawn between traditional international assignments and technical assistance project assignments in developing countries. We conducted an exploratory study using case studies from technical assistance initiatives led in Haiti, a country in the Caribbean. Data were collected from multiple sources following qualitative research methods. Direct observations in the field were allowed by the local leaders of six organizations; individual interviews with present and past international assignees, individual interviews with local team members, and focus groups were organized in order to triangulate the information collected. Contrary to empirical research on knowledge transfer in multinational corporations, the results tend to show that technical expertise ranks well behind many other characteristics. The results tend to show the importance of soft skills as a prerequisite to succeed in projects where local teams have to collaborate. More importantly, international assignees who talked about knowledge sharing instead of knowledge transfer seemed to feel more satisfied at the end of their mandate than the others. Reciprocally, local team members who perceived having participated in a project with an expatriate looking to share rather than aiming to transfer knowledge tended to describe the results of the project in more positive terms than the others. The results obtained from this exploratory study open the way for a promising research agenda in the field of project management. It emphasises the urgent need to achieve a better understanding of the complex set of soft skills that project managers or project chiefs would benefit from developing, in particular, the ability to absorb knowledge and the willingness to share one's knowledge.

Keywords: international assignee, international project cooperation, knowledge transfer, soft skills

Procedia PDF Downloads 142
101 The Incidence of Concussion across Popular American Youth Sports: A Retrospective Review

Authors: Rami Hashish, Manon Limousis-Gayda, Caitlin H. McCleery

Abstract:

Introduction: A leading cause of emergency room visits among youth in the United States is sports-related traumatic brain injury. Mild traumatic brain injuries (mTBIs), also called concussions, are caused by linear and/or angular accelerations experienced at the head and represent an increasing societal burden. Due to the developing nature of the brain in youth, there is a great risk of long-term neuropsychological deficiencies following a concussion. Accordingly, the purpose of this paper is to investigate incidence rates of concussion across gender for the five most common youth sports in the United States. These include basketball, track and field, soccer, baseball (boys), softball (girls), football (boys), and volleyball (girls). Methods: A PubMed search was performed combining four search themes. The first theme identified the outcomes (concussion, brain injuries, mild traumatic brain injury, etc.). The second theme identified the sport (American football, soccer, basketball, softball, volleyball, track and field, etc.). The third theme identified the population (adolescents, children, youth, boys, girls). The last theme identified the study design (prevalence, frequency, incidence, prospective). Ultimately, 473 studies were surveyed, with 15 fulfilling the criteria: prospective studies presenting original data and the incidence of concussion in the relevant youth sport. The following data were extracted from the selected studies: population age, total study population, total athletic exposures (AEs), and incidence rate per 1000 athletic exposures (IR/1000). Two one-way ANOVAs and a Tukey's post hoc test were conducted using SPSS. Results: From the 15 selected studies, statistical analysis revealed that the incidence of concussion per 1000 AEs across the considered sports ranged from 0.014 (girls' track and field) to 0.780 (boys' football). The average IR/1000 across all sports was 0.483 for boys and 0.268 for girls; this difference in IR was found to be statistically significant (p=0.013). Tukey's post hoc test showed that football had a significantly higher IR/1000 than boys' basketball (p=0.022), soccer (p=0.033), and track and field (p=0.026). No statistical difference was found in concussion incidence between girls' sports. Removal of football was found to lower the IR/1000 for boys, with no statistical difference (p=0.101) compared to girls. Discussion: Football was the only sport showing a statistically significant difference in concussion incidence rate relative to other sports (within gender). Males were overall more likely to be concussed than females when football was included (1.8x), whereas concussion was more likely for females when football was excluded. While the significantly higher rate of concussion in football is not surprising because of the nature and rules of the sport, it is concerning that research has shown a higher incidence of concussion in practices than in games. Interestingly, the findings indicate that girls' sports are more concussive overall when football is removed. This appears to counter the common notion that boys' sports are more physically taxing and dangerous. Future research should focus on understanding the concussive mechanisms of injury in each sport to enable effective rule changes.

Keywords: gender, football, soccer, traumatic brain injury

Procedia PDF Downloads 141
100 CT Image-Based Dense Facial Soft Tissue Thickness Measurement by Open-Source Tools in a Chinese Population

Authors: Ye Xue, Zhenhua Deng

Abstract:

Objectives: Facial soft tissue thickness (FSTT) data have traditionally been obtained from CT scans by measuring face-to-skull distances at sparsely distributed anatomical landmarks located manually on the face and skull. However, automated measurement over densely sampled points on 3D facial and skull models has become a viable option thanks to the development of computer-assisted imaging technologies and open-source software. By utilizing dense FSTT information, it becomes feasible to generate plausible automated facial approximations. Therefore, establishing a comprehensive, detailed, and densely calculated FSTT database is crucial for enhancing the accuracy of facial approximation. Materials and methods: This study utilized head CT scans from 250 Chinese adults of Han ethnicity, with 170 participants originally born and residing in northern China and 80 in southern China. The age of the participants ranged from 14 to 82 years, and all samples were divided into five non-overlapping age groups. Additionally, samples were divided into three categories based on BMI. The 3D Slicer software was utilized to segment bone and soft tissue based on different Hounsfield Unit (HU) thresholds, and surface models of the face and skull were reconstructed for all samples from the CT data. The following procedures were performed using MeshLab: converting the face models into hollowed, cropped surface models and automatically measuring the Hausdorff distance (taken as the FSTT) between the skull and face models. Hausdorff point clouds were colorized based on depth value and exported as PLY files. A histogram of the depth distribution could be viewed and subdivided into smaller increments. All PLY files were visualized, showing the Hausdorff distance value at each vertex. Basic descriptive statistics (i.e., mean, maximum, minimum, standard deviation, etc.) and the distribution of FSTT were analyzed with respect to sex, age, BMI, and birthplace. Statistical methods employed included multiple regression analysis, ANOVA, and principal component analysis (PCA). Results: The distribution of FSTT is mainly influenced by BMI and sex, as further supported by the results of the PCA. Additionally, FSTT values exceeding 30 mm were found to be more sensitive to sex. Birthplace-related differences were observed in the forehead, orbital, mandibular, and zygoma regions. Specifically, there are distribution variances in the depth range of 20-30 mm, particularly in the mandibular region. Northern males exhibit thinner FSTT in the frontal region of the forehead compared to southern males, while females show fewer distribution differences between north and south, except for the zygoma region. The observed distribution variance in the orbital region could be attributed to differences in orbital size and shape. Discussion: This study provides a database of the distribution of FSTT in Chinese individuals and suggests that open-source tools perform well for FSTT measurement. By incorporating birthplace as an influential factor in the distribution of FSTT, a greater level of detail can be achieved in facial approximation.
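
The MeshLab step can also be approximated programmatically. The sketch below computes a per-vertex face-to-skull distance (the quantity reported as FSTT) using a KD-tree over the skull vertices; mesh loading via trimesh, the file names, and the millimetre units are assumptions for illustration, not the authors' exact pipeline.

```python
# Illustrative sketch (not the authors' pipeline): per-vertex face-to-skull
# distance, approximating what MeshLab's Hausdorff Distance filter reports.
import numpy as np
import trimesh
from scipy.spatial import cKDTree

face = trimesh.load("face_surface.ply")    # soft-tissue surface from CT (hypothetical file)
skull = trimesh.load("skull_surface.ply")  # bone surface from CT (hypothetical file)

# Nearest skull vertex for every face vertex: a vertex-to-vertex
# approximation of the one-sided point-to-surface distance.
tree = cKDTree(skull.vertices)
fstt, _ = tree.query(face.vertices)

print(f"mean FSTT: {fstt.mean():.2f} mm")
print(f"max  FSTT: {fstt.max():.2f} mm")
print(f"std  FSTT: {fstt.std():.2f} mm")

# Histogram of depth values, mirroring the subdivided depth increments
# described in the abstract (e.g., the 20-30 mm band in the mandible).
counts, edges = np.histogram(fstt, bins=np.arange(0, 41, 5))
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:4.0f}-{hi:<4.0f} mm: {n} vertices")
```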

Keywords: forensic anthropology, forensic imaging, cranial facial reconstruction, facial soft tissue thickness, CT, open-source tool

Procedia PDF Downloads 58
99 Development of a Context Specific Planning Model for Achieving a Sustainable Urban City

Authors: Jothilakshmy Nagammal

Abstract:

This research paper deals with different case studies where Form-Based Codes have been adopted, and discusses the various implementation methods in particular, in order to develop a method for formulating a new planning model. The organizing principle of Form-Based Codes, the transect, is used to zone the city into various context-specific transects. An approach is adopted to develop the new planning model, the City Specific Planning Model (CSPM), as a tool to achieve sustainability for any city in general. A case-study comparison of the planning tools used, the code process adopted, and the various control regulations implemented in thirty-two different cities is carried out. The analysis shows that there are a variety of ways to implement form-based zoning concepts: specific plans, a parallel or optional form-based code, a transect-based code/smart code, and required form-based standards or design guidelines. The case studies describe the positive and negative results from form-based zoning where it is implemented. From the different case studies on the method of the FBC, it is understood that the scale for formulating a Form-Based Code varies from parts of the city to the whole city. The regulating plan is prepared with the transect as the organizing principle in most of the cases. The various implementation methods adopted in these case studies for the formulation of Form-Based Codes are special districts like Transit Oriented Development (TOD), Traditional Neighbourhood Development (TND), specific plans, and street-based approaches. The implementation methods vary from mandatory to integrated to floating. To attain sustainability, this research takes the approach of developing a regulating plan using the transect as the organizing principle for the entire area of the city in general, while formulating the Form-Based Codes for the selected special districts in the study area in particular, on a street basis. Planning is most powerful when it is embedded in the broader context of systemic change and improvement. Systemic is best thought of as holistic, contextualized, and stakeholder-owned, while systematic can be thought of as more linear, generalisable, and typically top-down or expert-driven. The systemic approach is a process based on system theory and system design principles, which are too often ill understood by the general population and policy makers. System theory embraces the importance of a global perspective, multiple components, interdependencies, and interconnections in any system. In addition, the recognition that a change in one part of a system necessarily alters the rest of the system is a cornerstone of system theory. The proposed regulating plan, taking the transect as the organizing principle and using Form-Based Codes to achieve sustainability of the city, has to be a hybrid code, which is to be integrated within the existing system: a systemic approach with a systematic process. This approach of introducing a few form-based zones into a conventional code could be effective in the phased replacement of an existing code. It could also be an effective way of responding to the near-term pressure of physical change in 'sensitive' areas of the community. How this approach and method create the new Context Specific Planning Model for achieving sustainability is explained in detail in this research paper.

Keywords: context-based planning model, form-based code, transect, systemic approach

Procedia PDF Downloads 336
98 An Aptasensor Based on Magnetic Relaxation Switch and Controlled Magnetic Separation for the Sensitive Detection of Pseudomonas aeruginosa

Authors: Fei Jia, Xingjian Bai, Xiaowei Zhang, Wenjie Yan, Ruitong Dai, Xingmin Li, Jozef Kokini

Abstract:

Pseudomonas aeruginosa is a Gram-negative, aerobic, opportunistic human pathogen that is present in soil, water, and food. This microbe has been recognized as a representative food-borne spoilage bacterium that can lead to many types of infections. Considering the casualties and property loss caused by P. aeruginosa, the development of a rapid and reliable technique for its detection is crucial. The whole-cell aptasensor, an emerging biosensor that uses an aptamer as a capture probe to bind the whole cell, has attracted much attention for food-borne pathogen detection due to its convenience and high sensitivity. Here, a low-field magnetic resonance imaging (LF-MRI) aptasensor for the rapid detection of P. aeruginosa was developed. The basic detection principle of the magnetic relaxation switch (MRSw) nanosensor lies in the 'T₂-shortening' effect of magnetic nanoparticles in NMR measurements. Briefly, the transverse relaxation time (T₂) of neighboring water protons is shortened when magnetic nanoparticles cluster due to cross-linking upon the recognition and binding of biological targets, or simply when the concentration of the magnetic nanoparticles increases. Such shortening is related to both the state change (aggregation or dissociation) and the concentration change of the magnetic nanoparticles, and it can be detected using NMR relaxometry or MRI scanners. In this work, two different sizes of magnetic nanoparticles, 10 nm (MN₁₀) and 400 nm (MN₄₀₀) in diameter, were separately immobilized with anti-P. aeruginosa aptamer through 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC)/N-hydroxysuccinimide (NHS) chemistry, to capture and enrich P. aeruginosa cells. When incubated with the target, a 'sandwich' (MN₁₀-bacteria-MN₄₀₀) complex is formed, driven by the bonding of MN₄₀₀ with P. aeruginosa through aptamer recognition, as well as the conjugate aggregation of MN₁₀ on the surface of P. aeruginosa. Due to the different magnetic performance of MN₁₀ and MN₄₀₀ in a magnetic field, caused by their different saturation magnetization, the MN₁₀-bacteria-MN₄₀₀ complex, as well as the unreacted MN₄₀₀ in the solution, can be quickly removed by magnetic separation, so that only unreacted MN₁₀ remain in the solution. The remaining MN₁₀, which are superparamagnetic and stable in a low-field magnetic field, serve as the signal readout for the T₂ measurement. Under optimum conditions, the LF-MRI platform provides both image analysis and quantitative detection of P. aeruginosa, with a detection limit as low as 100 cfu/mL. The feasibility and specificity of the aptasensor are demonstrated by detecting real food samples and validated using plate counting methods. With only two steps and less than 2 hours needed for the detection procedure, this robust aptasensor can detect P. aeruginosa over a wide linear range from 3.1×10² cfu/mL to 3.1×10⁷ cfu/mL, which is superior to the conventional plate counting method and other molecular biology testing assays. Moreover, the aptasensor has the potential to detect other bacteria or toxins by switching to suitable aptamers. Considering its excellent accuracy, feasibility, and practicality, the whole-cell aptasensor provides a promising platform for quick, direct, and accurate determination of food-borne pathogens at the cell level.
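
To make the quantification step concrete, the sketch below fits a log-linear calibration of the T₂ change against bacterial concentration over the reported linear range and inverts it for an unknown sample. The ΔT₂ values are illustrative stand-ins, not measurements from the paper.

```python
# Hedged sketch of the quantification step: fit the T2 signal change against
# log10(cfu/mL) over the reported linear range (values are illustrative).
import numpy as np

conc = np.array([3.1e2, 3.1e3, 3.1e4, 3.1e5, 3.1e6, 3.1e7])  # cfu/mL
delta_t2 = np.array([4.1, 8.9, 13.6, 18.2, 23.1, 27.8])      # ms, invented

slope, intercept = np.polyfit(np.log10(conc), delta_t2, deg=1)

def estimate_cfu(dt2_measured):
    """Invert the calibration line to estimate bacterial load."""
    return 10 ** ((dt2_measured - intercept) / slope)

print(f"calibration: dT2 = {slope:.2f}*log10(C) + {intercept:.2f}")
print(f"sample with dT2 = 15 ms -> ~{estimate_cfu(15.0):.2e} cfu/mL")
```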

Keywords: magnetic resonance imaging, meat spoilage, P. aeruginosa, transverse relaxation time

Procedia PDF Downloads 152
97 Lean Comic GAN (LC-GAN): A Light-Weight GAN Architecture Leveraging Factorized Convolution and Teacher Forcing Distillation Style Loss Aimed at Capturing Two-Dimensional Animated Filtered Still Shots Using Mobile Phone Cameras and Edge Devices

Authors: Kaustav Mukherjee

Abstract:

In this paper, we propose a neural style transfer solution whereby we have created a lightweight separable-convolution-kernel-based GAN architecture (LC-GAN) that is very useful for designing filters for mobile phone cameras and edge devices, converting any image into a 2D animated comic style in the vein of movies like He-Man, Superman, and The Jungle Book. This helps 2D animation artists create new characters from images of real people without endless hours of manually drawing each and every pose of a cartoon. It can even be used to create scenes from real-life images. This will greatly reduce the turnaround time for making 2D animated movies and decrease cost in terms of manpower and time. In addition, being extremely lightweight, it can be used for camera filters capable of taking comic-style shots with mobile phone cameras or edge-device cameras such as the Raspberry Pi 4 or NVIDIA Jetson Nano. Existing methods like CartoonGAN, with a model size close to 170 MB, are too heavyweight for mobile phones and edge devices, given those devices' scarce resources. Compared to the current state of the art, our proposed method has a total model size of 31 MB, which makes it ideal and ultra-efficient for designing camera filters on low-resource devices like mobile phones, tablets, and edge devices running an OS or RTOS. Owing to the use of high-resolution input and a bigger convolution kernel size, it produces richer-resolution comic-style pictures with six times fewer parameters, trained for just 25 extra epochs on a dataset of fewer than 1000 images, which breaks the myth that all GANs need a mammoth amount of data. Our network reduces the density of the GAN architecture by using depthwise separable convolution, which performs the convolution operation on each of the input channels separately; we then use a pointwise convolution with a 1×1 kernel to bring the network back to the required channel number. This reduces the number of parameters substantially and makes the network extremely lightweight and suitable for mobile phones and edge devices. The architecture presented in this paper makes use of parameterised batch normalization (Goodfellow et al., Deep Learning, 'Optimization for Training Deep Models', p. 320), which lets the network exploit the advantages of batch norm for easier training while maintaining non-linear feature capture through the learnable parameters.
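
As a concrete illustration of the parameter saving described above, here is a minimal PyTorch sketch comparing a standard convolution with a depthwise separable block; the channel sizes are arbitrary, chosen only for the comparison, and this is not the paper's full generator.

```python
# Minimal PyTorch sketch of the parameter saving from depthwise separable
# convolution described in the abstract (channel sizes are illustrative).
import torch
import torch.nn as nn

in_ch, out_ch, k = 64, 128, 3

# Standard convolution: every output channel mixes every input channel.
standard = nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=1)

# Depthwise separable: per-channel spatial conv (groups=in_ch), then a
# 1x1 pointwise conv to map back to the required number of channels.
separable = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, kernel_size=k, padding=1, groups=in_ch),
    nn.Conv2d(in_ch, out_ch, kernel_size=1),
)

count = lambda m: sum(p.numel() for p in m.parameters())
print(f"standard:  {count(standard):,} parameters")   # ~73,856
print(f"separable: {count(separable):,} parameters")  # ~8,960 (~8x fewer)

x = torch.randn(1, in_ch, 128, 128)
assert standard(x).shape == separable(x).shape  # same output shape
```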

Keywords: comic stylisation from camera image using GAN, creating 2D animated movie style custom stickers from images, depth-wise separable convolutional neural network for light-weight GAN architecture for edge devices, GAN architecture for 2D animated cartoonizing neural style, neural style transfer for edge, model distillation, perceptual loss

Procedia PDF Downloads 132
96 Effect of Radioprotectors on DNA Repair Enzyme and Survival of Gamma-Irradiated Cell Division Cycle Mutants of Schizosaccharomyces pombe

Authors: Purva Nemavarkar, Badri Narain Pandey, Jitendra Kumar

Abstract:

Introduction: The objective was to understand the effect of various radioprotectors on a DNA damage repair enzyme and on survival in gamma-irradiated wild-type and cdc mutants of S. pombe (fission yeast) cultured under permissive and restrictive conditions. The DNA repair process, as influenced by radioprotectors, was measured via the activity of DNA polymerase in the cells. The single cell gel electrophoresis (SCGE) or comet assay was employed to follow gamma-irradiation-induced DNA damage and the effect of radioprotectors. In addition, the effect of caffeine at different concentrations on the S-phase of the cell cycle was delineated. Materials and Methods: S. pombe cells were grown at the permissive temperature (25°C) and/or the restrictive temperature (36°C) and then exposed to gamma-radiation. Percentage survival and the activity of DNA polymerase (yPol II) were determined after post-irradiation incubation (5 h) with radioprotectors such as caffeine, curcumin, disulphiram, and ellagic acid (the dose depending on individual D₃₇ values). The gamma-irradiated yeast cells (with and without the radioprotectors) were spheroplasted with the enzyme glusulase and subjected to electrophoresis. Radio-resistant cells were obtained by arresting cells in S-phase using transient treatment with hydroxyurea (HU), and the effect of caffeine at different concentrations on these S-phase cells was studied. Results: The mutants of S. pombe showed an insignificant difference in survival when grown under permissive conditions. However, growth of these cells at the restrictive temperature leads to arrest in specific phases of the cell cycle in the different cdc mutants (cdc10: G1 arrest, cdc22: early S arrest, cdc17: late S arrest, cdc25: G2 arrest). All the cdc mutants showed a decrease in survival after gamma radiation when grown at permissive and restrictive temperatures. Inclusion of the radioprotectors at their respective concentrations during post-irradiation incubation increased cell survival. The activity of the DNA polymerase enzyme (yPol II) increased significantly in cdc mutant cells exposed to gamma-radiation. Following SCGE, a linear relationship was observed between the dose of irradiation and the tail moments of the comets. The radioprotection of the fission yeast by radioprotectors can be seen in the reduced tail moments of the yeast comets. Caffeine also exhibited its radioprotective ability in radio-resistant S-phase cells obtained after HU treatment. Conclusions: The radioprotectors offered notable radioprotection in cdc mutants when added during irradiation. The present study showed activation of the DNA damage repair enzyme (yPol II) and an increase in survival after treatment with radioprotectors in gamma-irradiated wild-type and cdc mutants of S. pombe. The results presented here show the feasibility of applying SCGE in fission yeast to follow DNA damage and radioprotection at high doses, which is not feasible with other eukaryotes. Inclusion of caffeine at 1 mM to S-phase cells offered protection and did not decrease cell viability, demonstrating that even at this minimal concentration caffeine offered marked radioprotection.
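
The dose-tail-moment relationship reported above is a simple linear fit; the sketch below shows that step on invented numbers (the doses and tail moments are for demonstration only, not taken from the study).

```python
# Illustrative sketch of the reported linear dose-response: regress comet
# tail moment on gamma dose (values are made up for demonstration).
from scipy.stats import linregress

dose_gy = [0, 100, 200, 400, 600]           # gamma dose (Gy), illustrative
tail_moment = [1.2, 6.8, 13.1, 26.0, 38.5]  # mean comet tail moment

fit = linregress(dose_gy, tail_moment)
print(f"tail moment = {fit.slope:.3f}*dose + {fit.intercept:.2f}, "
      f"r^2 = {fit.rvalue**2:.3f}")
# A radioprotector would show up as a reduced slope (smaller tail moments
# at the same dose), which is how the abstract reads the comet data.
```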

Keywords: radiation protection, cell cycle, fission yeast, comet assay, S-phase, DNA repair, radioprotectors, caffeine, curcumin, SCGE

Procedia PDF Downloads 113
95 Reduced General Dispersion Model in Cylindrical Coordinates and Isotope Transient Kinetic Analysis in Laminar Flow

Authors: Masood Otarod, Ronald M. Supkowski

Abstract:

This abstract discusses a method that reduces the general dispersion model in cylindrical coordinates to a second-order linear ordinary differential equation with constant coefficients, so that it can be utilized for kinetic studies in packed-bed tubular catalytic reactors over a broad range of Reynolds numbers. The model was tested by ¹³CO isotope transient tracing of the CO adsorption step of the Boudouard reaction in a differential reactor at an average Reynolds number of 0.2 over a Pd-Al₂O₃ catalyst. Detailed experimental results have provided evidence for the validity of the theoretical framing of the model, and the estimated parameters are consistent with the literature. The solution of the general dispersion model requires knowledge of the radial distribution of axial velocity, which is not always known. Hence, up until now, the implementation of the dispersion model has been largely restricted to the plug-flow regime. But ideal plug-flow is impossible to achieve, and flow regimes approximating plug-flow leave much room for debate as to the validity of the results. The reduction of the general dispersion model transpires as a result of the application of a factorization theorem. The factorization theorem derives from the observation that a cross section of a catalytic bed consists of a solid phase across which the reaction takes place and a void or porous phase across which no significant measure of reaction occurs. The disparity in flow and the heterogeneity of the catalytic bed cause the concentrations of reacting compounds to fluctuate radially. These variabilities signify the existence of radial positions at which the radial gradient of concentration is zero. Succinctly, the factorization theorem states that a concentration function of the axial and radial coordinates in a catalytic bed is factorable as the product of the mean radial cup-mixing function and a contingent dimensionless function. The concentrations of adsorbed compounds are also factorable, since they are piecewise continuous functions and suffer the same variability, but in the reverse order of the concentrations of the mobile-phase compounds. Factorability is a property of packed beds which transforms the general dispersion model into an equation in terms of the measurable mean radial cup-mixing concentrations of the mobile-phase compounds and the mean cross-sectional concentrations of the adsorbed species. The reduced model does not require knowledge of the radial distribution of the axial velocity. Instead, it is characterized by new transport parameters, denoted Ωc, Ωa, and Ωr, which are respectively denominated the convection coefficient cofactor, the axial dispersion coefficient cofactor, and the radial dispersion coefficient cofactor. These cofactors adjust the dispersion equation in compensation for the unavailability of the radial distribution of the axial velocity. Together with the rest of the kinetic parameters, they can be determined from experimental data via an optimization procedure. Our data showed that the estimated parameters Ωc, Ωa, and Ωr are monotonically correlated with the Reynolds number, as expected from the theoretical construct of the model. Computer-generated simulations of the methanation reaction on nickel provide additional support for the utility of the newly conceptualized dispersion model.
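
The abstract does not reproduce the equations, so the following is our sketch of the standard axisymmetric dispersion model and the factorization step it describes; the notation, and the placement of the cofactors in the reduced steady-state equation, are illustrative rather than the authors' exact formulation.

```latex
% Sketch of the standard axisymmetric dispersion model (notation ours):
\[
\frac{\partial C}{\partial t} + u(r)\,\frac{\partial C}{\partial z}
 = D_a \frac{\partial^2 C}{\partial z^2}
 + D_r \frac{1}{r}\frac{\partial}{\partial r}\!\left(r\,\frac{\partial C}{\partial r}\right)
 + R(C)
\]
% Factorization theorem: the concentration separates into the mean radial
% cup-mixing concentration and a contingent dimensionless radial function,
\[
C(z,r,t) = \bar{C}(z,t)\,\varphi(r),
\]
% so that radial averaging yields a model in \(\bar{C}\) alone, with the
% cofactors absorbing the unknown velocity profile; at steady state this
% gives the second-order linear ODE with constant coefficients referred
% to above (for a linear rate term):
\[
\Omega_a D_a\,\frac{d^2\bar{C}}{dz^2}
 - \Omega_c\,\bar{u}\,\frac{d\bar{C}}{dz} + R(\bar{C}) = 0 .
\]
```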

Keywords: factorization, general dispersion model, isotope transient kinetic, partial differential equations

Procedia PDF Downloads 269
94 Living on the Edge: Crisis in the Indian Tea Industry and Social Deprivation of Tea Garden Workers in the Dooars Region of India

Authors: Saraswati Kerketta

Abstract:

The tea industry is one of the oldest organised sectors of India, employing roughly 1.5 million people directly. Over the last decade, the Indian tea industry, especially in the northern region, has been experiencing its worst crisis of the post-independence period. For many reasons, tea prices have shown a steady decline. The workers are paid one of the lowest wages in the tea industry in the world ($1.5 a day), below the UN's $2-a-day threshold for extreme poverty. The workers rely on additional benefits from the plantation, which include food, housing, and medical facilities. These have been effective means of enslavement of generations of labourers by the owners. There is hardly any change in the tea estates, where the owners determine the fate of the workers. When a tea garden is abandoned or closed, all these facilities disappear immediately. The workers are the descendants of tribes from central India, also known as 'tea tribes'. Alienated from their native place, these people's vulnerability is compounded by geographical and social isolation. The economy of the region being totally dependent on tea has resulted in absolute unemployment for the workers of these tea gardens. With no other livelihood and no land to grow food, thousands of workers face hunger and starvation. The Plantation Labour Act, which ensures decent working and living conditions, is violated continuously. The labourers are forced to migrate and are also exposed to the risk of human trafficking. Those who are left behind suffer from starvation, malnutrition, and disease. The condition in the sick tea plantations is no better: wages are not paid regularly, and subsidised food and fuel are also not supplied properly. Health care facilities are in very bad shape. Objectives: • To study the socio-cultural and demographic characteristics of the tea garden labourers in the study area. • To examine the social situation of workers in sick estates in the Dooars region. • To assess the magnitude of deprivation and the impact of the economic crisis on abandoned and closed tea estates in the region. Data Base: The study is based on data collected from a field survey. Methods: Quantitative: cross-tabulation, regression analysis. Qualitative: household survey, focus group discussions, in-depth interviews of key informants. Findings: Purchasing power has declined over the last three decades. There has been a manyfold increase in migration. Males migrate long distances towards central, western, and southern India. Females and children migrate both long and short distances. No one was reported to have migrated back to the place of origin of their ancestors. Migrant males work mostly as construction labourers and factory workers, whereas females and children work as domestic help and construction labourers. In about 37 cases, either the migrants had not contacted their families in the last six months or they were not traceable. Families with single earning members are more likely to migrate. The burden of disease and the duration of sickness, abandonment, and closure of a plantation are closely related. Death tolls are likely to rise 1.5 times in sick tea gardens and three times in closed tea estates. Sixty percent of the people are malnourished in the sick tea gardens, and more than eighty-five percent in abandoned and closed tea gardens.

Keywords: migration, trafficking, starvation death, tea garden workers

Procedia PDF Downloads 383
93 Benefits of High Power Impulse Magnetron Sputtering (HiPIMS) Method for Preparation of Transparent Indium Gallium Zinc Oxide (IGZO) Thin Films

Authors: Pavel Baroch, Jiri Rezek, Michal Prochazka, Tomas Kozak, Jiri Houska

Abstract:

Transparent semiconducting amorphous IGZO films have attracted great attention due to their excellent electrical properties and their possible utilization in thin-film transistors or in photovoltaic applications, as they show 20-50 times higher mobility than amorphous silicon. It is also known that the properties of IGZO films are highly sensitive to process parameters, especially to oxygen partial pressure. In this study, we have focused on comparing the properties of transparent semiconducting amorphous indium gallium zinc oxide (IGZO) thin films prepared by conventional sputtering methods with those prepared by the high power impulse magnetron sputtering (HiPIMS) method. Furthermore, we tried to optimize the electrical and optical properties of the IGZO thin films and to investigate the possibility of applying these coatings to thermally sensitive flexible substrates. We employed dc, pulsed dc, mid-frequency sine wave, and HiPIMS power supplies for the magnetron deposition. The magnetrons were equipped with sintered ceramic InGaZnO targets. As oxygen vacancies are considered to be the main source of carriers in IGZO films, it is expected that as the oxygen partial pressure increases, the number of oxygen vacancies decreases, resulting in an increase in film resistivity. Therefore, in all experiments we focused on the effect of oxygen partial pressure, discharge power, and pulsed power mode on the electrical, optical, and mechanical properties of the IGZO thin films, and also on the thermal load deposited into the substrate. As expected, we observed a very fast transition between low- and high-resistivity films depending on oxygen partial pressure when conventional sputtering methods/power supplies were utilized. We therefore established and utilized a HiPIMS sputtering system to enlarge the operating window and allow better control of IGZO thin-film properties. It is shown that with this system we are able to effectively eliminate the steep transition between low- and high-resistivity films exhibited by the DC mode of sputtering, and the electrical resistivity can be effectively controlled over the wide range of 10⁻² to 10⁵ Ω·cm. The highest mobility of charge carriers (up to 50 cm²/V·s) was obtained at very low oxygen partial pressures. Utilization of HiPIMS also led to a significant decrease in the thermal load deposited into the substrate, which is beneficial for deposition on thermally sensitive, flexible polymer substrates. The deposition rate as a function of discharge power and oxygen partial pressure was also systematically investigated, and the results of the optical, electrical, and structural analyses will be discussed in detail. The most important result demonstrates almost linear control of IGZO thin-film resistivity with increasing oxygen partial pressure in the HiPIMS mode of sputtering, and highly transparent films with low resistivity were prepared already at low pO₂. It was also found that utilization of the HiPIMS technique resulted in a significant improvement of surface smoothness in the reactive mode of sputtering (with increasing oxygen partial pressure).

Keywords: charge carrier mobility, HiPIMS, IGZO, resistivity

Procedia PDF Downloads 297
92 Assessment of Occupational Exposure and Individual Radio-Sensitivity in People Subjected to Ionizing Radiation

Authors: Oksana G. Cherednichenko, Anastasia L. Pilyugina, Sergey N. Lukashenko, Elena G. Gubitskaya

Abstract:

The estimation of accumulated radiation doses in people professionally exposed to ionizing radiation was performed using methods of biological (chromosomal aberration frequency in lymphocytes) and physical (radionuclide analysis in urine, whole-body radiation counting, individual thermoluminescent dosimeters) dosimetry. A group of 84 category 'A' employees was investigated after their work in the territory of the former Semipalatinsk test site (Kazakhstan). The dose rate in some funnels exceeds 40 μSv/h. After determining the radionuclides in urine using radiochemical and whole-body-counting methods, it was shown that the total effective dose of internal exposure of personnel did not exceed 0.2 mSv/year, while the acceptable dose limit for staff is 20 mSv/year. The range of external radiation doses measured with individual thermoluminescent dosimeters was 0.3-1.406 µSv. The cytogenetic examination showed that the chromosomal aberration frequency in the staff was 4.27±0.22%, which is significantly higher than in people from the unpolluted settlement of Tausugur (0.87±0.1%) (p ≤ 0.01) and in citizens of Almaty (1.6±0.12%) (p ≤ 0.01). Chromosomal-type aberrations accounted for 2.32±0.16%, of which 0.27±0.06% were dicentrics and centric rings. Cytogenetic analysis of group radiosensitivity among the 'professionals' across different groupings (age, sex, ethnic group, epidemiological data) revealed no significant differences between the compared values. Using various techniques based on the frequency of dicentrics and centric rings, the average cumulative radiation dose for the group was calculated as 0.084-0.143 Gy. To perform comparative individual dosimetry using physical and biological methods of dose assessment, calibration curves (including our own) and regression equations based on the overall frequency of chromosomal aberrations, obtained after irradiation of blood samples by gamma-radiation at a dose rate of 0.1 Gy/min, were used. Herewith, assuming individual variation of chromosomal aberration frequency of 1-10%, the accumulated radiation dose varied from 0 to 0.3 Gy. The main problem in the interpretation of individual dosimetry results comes down to the differing reactions of subjects to irradiation, i.e., radiosensitivity, which dictates the need for a quantitative definition of this individual reaction and its consideration in the calculation of the received radiation dose. The entire examined contingent was assigned to groups based on the received dose and the detected cytogenetic aberrations. Radiosensitive individuals, at the lowest received dose in a year, showed the highest frequency of chromosomal aberrations (5.72%). Conversely, radioresistant individuals showed the lowest frequency of chromosomal aberrations (2.8%). The cohort in our research was distributed by the criterion of radiosensitivity as follows: radiosensitive (26.2%), medium radiosensitivity (57.1%), radioresistant (16.7%). The dispersion for radioresistant individuals is 2.3; for the medium-radiosensitivity group, 3.3; and for the radiosensitive group, 9. These data indicate the highest variation of the characteristic (reaction to radiation) in the group of radiosensitive individuals. People with medium radiosensitivity show a significant long-term correlation (0.66; n=48, β ≥ 0.999) between the dose values determined from the cytogenetic analysis and the external radiation doses obtained with thermoluminescent dosimeters. Mathematical models for assessing the radiation dose according to the professionals' radiosensitivity level were proposed.
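
Cytogenetic dose reconstruction of this kind is commonly done by inverting a linear-quadratic calibration curve, Y = Y₀ + aD + bD², fitted from in vitro irradiations like those described above. The sketch below shows that inversion; the coefficients are illustrative, not the paper's own calibration.

```python
# Hedged sketch of cytogenetic dose reconstruction: invert a linear-quadratic
# calibration curve Y = y0 + a*D + b*D^2 (coefficients are illustrative).
import numpy as np

y0, a, b = 0.001, 0.03, 0.06  # background yield, Gy^-1 and Gy^-2 terms

def dose_from_yield(y):
    """Solve b*D^2 + a*D + (y0 - y) = 0 for the positive root D (Gy)."""
    disc = a * a - 4.0 * b * (y0 - y)
    return (-a + np.sqrt(disc)) / (2.0 * b)

# Observed yields of dicentrics + centric rings per cell, e.g. 0.27% = 0.0027.
for y in (0.0027, 0.005, 0.010):
    print(f"yield {y:.4f}/cell -> estimated dose {dose_from_yield(y):.3f} Gy")
```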

Keywords: biodosimetry, chromosomal aberrations, ionizing radiation, radiosensitivity

Procedia PDF Downloads 184
91 Exploring the Dose-Response Association of Lifestyle Behaviors and Mental Health among High School Students in the US: A Secondary Analysis of 2021 Adolescent Behaviors and Experiences Survey Data

Authors: Layla Haidar, Shari Esquenazi-Karonika

Abstract:

Introduction: Mental health includes one's emotional, psychological, and interpersonal well-being; it ranges from 'good' to 'poor' on a continuum. At the individual level, it affects how a person thinks, feels, and acts. Moreover, it determines how they cope with stress, relate to others, and interface with their surroundings. Research has shown that mental health is directly related to short- and long-term physical health (including chronic disease), health-risk behaviors, education level, employment, and social relationships. As is the case with physical conditions like diabetes, heart disease, and cancer, mitigating the behavioral and genetic risks of debilitating mental health conditions like anxiety and depression can nurture a healthier quality of mental health throughout one's life. To maximize the benefits of prevention, it is important to identify modifiable risks and develop protective habits earlier in life. Methods: The Adolescent Behaviors and Experiences Survey (ABES) dataset was used for this study. The ABES survey was administered to high school students (9th-12th grade) during January 2021-June 2021 by the Centers for Disease Control and Prevention (CDC). The data were analyzed to identify associations between feelings of sadness, hopelessness, or increased suicidality among high school students and both their participation on one or more sports teams and their average daily screen time. Data were analyzed using descriptive and multivariable analytic techniques. A multinomial logistic regression of each variable was conducted to examine whether there was an association, while controlling for grade level, sex, and race. Results: The findings from this study are insightful for administrators and policymakers who wish to address mounting concerns related to student mental health. The study revealed that, compared to students who participated on no sports teams, students who participated on one or more sports teams showed a significantly increased risk of depression (p<0.05). Conversely, the rate of depression was significantly lower in students who consumed 5 or more hours of screen time per day compared to those who consumed less than 1 hour per day (p<0.05). Conclusion: These findings highlight the importance of understanding the nuances of student participation on sports teams (e.g., physical exertion, the social dynamics of the team, and the level of competitiveness within the sport). Likewise, the context of an individual's screen time (e.g., social media, engaging in team-based video games, or watching television) can inform parental or school-based policies about screen time. Although physical activity has been proven to be important for the emotional and physical well-being of youth, playing on multiple teams could have negative consequences on the emotional state of high school students, potentially due to fatigue, overtraining, and injuries. Existing literature has highlighted the negative effects of screen time; however, further research needs to consider the type of screen-based consumption to better understand its effects on mental health.
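
A minimal sketch of the model described above, a multinomial logistic regression with categorical covariates, is shown below using statsmodels. The column names and the CSV extract are hypothetical; the ABES public-use file uses its own variable codes.

```python
# Sketch of the abstract's model: multinomial logistic regression of a
# mental-health outcome on sports-team participation and screen time,
# controlling for grade level, sex, and race. Names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("abes_2021.csv")  # hypothetical extract of ABES data

y = df["sadness_hopelessness"]     # e.g., 0 = none, 1 = some, 2 = persistent

# Dummy-code the categorical predictors, dropping a reference category each.
X = pd.get_dummies(
    df[["sports_teams", "screen_time", "grade", "sex", "race"]],
    columns=["sports_teams", "screen_time", "grade", "sex", "race"],
    drop_first=True,
).astype(float)
X = sm.add_constant(X)

model = sm.MNLogit(y, X).fit()
print(model.summary())
print(np.exp(model.params))  # relative risk ratios, as typically reported
```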

Keywords: behavioral science, mental health, adolescents, prevention

Procedia PDF Downloads 105
90 Global Evidence on the Seasonality of Enteric Infections, Malnutrition, and Livestock Ownership

Authors: Aishwarya Venkat, Anastasia Marshak, Ryan B. Simpson, Elena N. Naumova

Abstract:

Livestock ownership is simultaneously linked to improved nutritional status, through increased availability of animal-source protein, and to increased risk of enteric infections, through higher exposure to contaminated water sources. Agrarian and agro-pastoral households, especially those with cattle, goats, and sheep, are highly dependent on seasonally varying environmental conditions, which directly impact nutrition and health. This study explores globally, in a spatiotemporally explicit way, the evidence regarding the relationship between livestock ownership, enteric infections, and malnutrition. Seasonal and cyclical fluctuations, as well as mediating effects, are further examined to elucidate the health and nutrition outcomes of individual and communal livestock ownership. The US Agency for International Development's Demographic and Health Surveys (DHS) and the United Nations International Children's Emergency Fund's Multiple Indicator Cluster Surveys (MICS) provide valuable sources of household-level information on anthropometry, asset ownership, and disease outcomes. These data are especially important in data-sparse regions, where surveys may only be conducted in the aftermath of emergencies. Child-level disease history, anthropometry, and household-level asset ownership information have been collected since DHS-V (2003-present) and MICS-III (2005-present). This analysis combines over 15 years of survey data from DHS and MICS to study 2,466,257 children under age five from 82 countries. Subnational (administrative level 1) measures of diarrhea prevalence, mean livestock ownership by type, and mean and median anthropometric measures (height-for-age, weight-for-age, and weight-for-height) were investigated. The effects of several environmental, market, community, and household-level determinants were studied. Such covariates included precipitation, temperature, vegetation, the market price of staple cereals and animal-source proteins, conflict events, livelihood zones, wealth indices, and access to water, sanitation, hygiene, and public health services. Children aged 0-6 months, 6 months-2 years, and 2-5 years were compared separately. All observations were standardized to interview day of year, and administrative units were harmonized for consistent comparisons over time. Geographically weighted regressions were constructed for each outcome and subnational unit. Preliminary results demonstrate the importance of accounting for seasonality in concurrent assessments of malnutrition and enteric infections. Household assets, including livestock, often determine the intensity of these outcomes. In many regions, livestock ownership affects seasonal fluxes in malnutrition and enteric infections, which are also directly affected by environmental and local factors. Regression analysis demonstrates the spatiotemporal variability in nutrition outcomes due to a variety of causal factors. This analysis presents a synthesis of evidence from global survey data on the interrelationship between enteric infections, malnutrition, and livestock. These results provide a starting point for locally appropriate interventions designed to address this nexus in a timely manner and simultaneously improve health, nutrition, and livelihoods.
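
One common way to capture the seasonal signal after standardizing observations to interview day of year is a harmonic (cosinor) regression; the sketch below fits such a model for diarrhea with a livestock covariate. This is our illustration of the general approach, not the authors' geographically weighted specification, and the data file and variable names are hypothetical.

```python
# Hedged sketch of a harmonic (cosinor) seasonality model for diarrhea
# prevalence with a livestock-ownership covariate. Names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("dhs_mics_children.csv")   # hypothetical pooled extract

omega = 2 * np.pi / 365.25                  # one annual cycle
df["sin_doy"] = np.sin(omega * df["interview_doy"])
df["cos_doy"] = np.cos(omega * df["interview_doy"])

X = sm.add_constant(df[["sin_doy", "cos_doy", "mean_livestock"]])
fit = sm.Logit(df["diarrhea_2wk"], X).fit() # 1 = diarrhea in past 2 weeks
print(fit.summary())

# Amplitude and peak timing of the seasonal signal from the two harmonics.
b_sin, b_cos = fit.params["sin_doy"], fit.params["cos_doy"]
amplitude = np.hypot(b_sin, b_cos)
peak_doy = (np.arctan2(b_sin, b_cos) / omega) % 365.25
print(f"seasonal amplitude (log-odds): {amplitude:.3f}, peak day: {peak_doy:.0f}")
```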

Keywords: diarrhea, enteric infections, households, livestock, malnutrition, seasonality

Procedia PDF Downloads 126
89 Numerical Solution of Momentum Equations Using Finite Difference Method for Newtonian Flows in Two-Dimensional Cartesian Coordinate System

Authors: Ali Ateş, Ansar B. Mwimbo, Ali H. Abdulkarim

Abstract:

The general transport equation has a wide range of applications in fluid mechanics and heat transfer problems. When the variable φ in this equation, which represents a transported flow property, is taken to be a fluid velocity component, the general transport equation turns into the momentum equations, better known as the Navier-Stokes equations. For these non-linear differential equations, seeking numerical solutions is a more frequently used procedure than seeking analytic ones, and the finite difference method is a commonly used numerical solution method. In these equations, using velocity and pressure gradients instead of stress tensors decreases the number of unknowns, and by adding the continuity equation to the system, the number of equations is made equal to the number of unknowns. In this situation, velocity and pressure emerge as the two important parameters, and in the solution of the differential equation system, velocities and pressures must be solved together. However, when pressure and velocity values are solved jointly at the same nodal points of the considered grid system, some problems confront us. To overcome this problem, using a staggered grid system is the preferred solution method. Various algorithms have been developed for computerized solutions on staggered grid systems; of these, the two most commonly used are the SIMPLE and SIMPLER algorithms. In this study, the Navier-Stokes equations were numerically solved for Newtonian flow, with mass and gravitational forces neglected, for an incompressible, laminar fluid, in a hydrodynamically fully developed region, in a two-dimensional Cartesian coordinate system. The finite difference method was chosen as the solution method. This is a parametric study in which varying values of the velocity components, pressure, and Reynolds number were used. The differential equations were discretized using the central difference and hybrid schemes. The discretized equation system was solved by the Gauss-Seidel iteration method, with SIMPLE and SIMPLER used as solution algorithms. The obtained results were compared for the central difference and hybrid discretization methods, and the SIMPLE and SIMPLER solution algorithms were compared to each other. As a result, it was observed that the hybrid discretization method gave better results over a larger area. Furthermore, as a computer solution algorithm, despite some disadvantages, it can be said that the SIMPLER algorithm is more practical and gave results in a shorter time. For this study, a code was developed in the Delphi programming language. The values obtained by the computer program were converted into graphs and discussed. During plotting, the quality of the graphs was increased by adding intermediate values to the obtained results using the Lagrange interpolation formula. For the solution of the system, the required numbers of grids and nodes were estimated. At the same time, to show that the obtained results are satisfactory, a grid-independence analysis (GCI analysis) was performed for coarse, medium, and fine grid systems over the solution domain. It was observed that when the graphs and program outputs were compared with similar studies, highly satisfactory results were achieved.
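
Two ingredients of the solver described above, the hybrid scheme and the Gauss-Seidel sweep, are easy to illustrate on a 1D steady convection-diffusion problem. The sketch below is an illustrative Python port (the original code was in Delphi), with parameter values chosen for demonstration.

```python
# Illustrative sketch of two ingredients of the abstract's solver: hybrid
# scheme coefficients for 1D steady convection-diffusion on a uniform grid
# and a Gauss-Seidel sweep over the interior nodes.
import numpy as np

n, L = 21, 1.0
dx = L / (n - 1)
rho, u, gamma = 1.0, 2.0, 0.1        # density, velocity, diffusivity
F, D = rho * u, gamma / dx           # convective and diffusive fluxes

# Hybrid scheme: central differencing for |Pe| < 2, upwind otherwise,
# expressed via Patankar's max() form of the neighbour coefficients.
aW = max(F, D + F / 2.0, 0.0)
aE = max(-F, D - F / 2.0, 0.0)
aP = aW + aE                         # uniform flow, so F_e = F_w

phi = np.zeros(n)
phi[0], phi[-1] = 1.0, 0.0           # Dirichlet boundary conditions

for sweep in range(500):             # Gauss-Seidel iterations
    for i in range(1, n - 1):
        phi[i] = (aW * phi[i - 1] + aE * phi[i + 1]) / aP

print(np.round(phi, 3))              # converged interior profile
```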

Keywords: finite difference method, GCI analysis, numerical solution of the Navier-Stokes equations, SIMPLE and SIMPLER algorithms

Procedia PDF Downloads 390
88 Analysis of Elastic-Plastic Deformation of Reinforced Concrete Shear-Wall Structures under Earthquake Excitations

Authors: Oleg Kabantsev, Karomatullo Umarov

Abstract:

The engineering analysis of earthquake consequences demonstrates significantly different levels of damage in load-bearing systems of different types. Buildings with reinforced concrete columns and separate shear-walls receive the highest level of damage. Traditional methods for predicting damage under earthquake excitations do not answer the question of why reinforced concrete frames with shear-wall bearing systems are especially vulnerable. Thus, studying the formation and accumulation of damage in reinforced concrete frame-with-shear-wall structures requires new methods for assessing the stress-strain state, as well as new approaches to calculating the distribution of forces and stresses in the load-bearing system that account for the various mechanisms of elastic-plastic deformation of reinforced concrete columns and walls. The results of research into the processes of non-linear deformation of structures up to the transition to destruction (collapse) will make it possible to substantiate the characteristics of the limit states of the various structures forming an earthquake-resistant load-bearing system. The research into the elastic-plastic deformation processes of reinforced concrete frames with shear-walls is carried out on the basis of experimentally established parameters for the limit deformations of concrete and reinforcement under dynamic excitations. Limit values of the deformations are defined for conditions under which local damage of the maximum permissible level forms in the structures. The research is performed by numerical methods using the ETABS software. The results indicate that under earthquake excitations, plastic deformations of various levels form in different groups of elements of the frame-with-shear-wall load-bearing system. During the main period of seismic excitation, insignificant volumes of plastic deformation arise in the shear-wall elements of the load-bearing system, significantly below the permissible level; at the same time, plastic deformations form in the columns but do not exceed the permissible value. At the final stage of seismic excitation, the level of plastic deformation in the shear-walls reaches values corresponding to the plasticity coefficient of concrete, which is less than the maximum permissible value. This volume of plastic deformation leads to an increase in the overall deformations of the bearing system. With the specified parameters of shear-wall deformation, plastic deformations exceeding the limiting values develop in the concrete columns, which leads to the collapse of such columns. Based on the results presented in this study, it can be concluded that applying a seismic-force-reduction factor common to the entire load-bearing system does not correspond to the real conditions of damage formation and accumulation in the elements of the load-bearing system, and using a single seismic-force-reduction factor leads to errors in predicting the seismic resistance of reinforced concrete load-bearing systems. In order to provide the required level of seismic resistance for buildings with reinforced concrete columns and separate shear-walls, it is necessary to use seismic-force-reduction factor values differentiated by type of structural group.
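
For context on what the seismic-force-reduction factor encodes, a commonly used Newmark-Hall type relation links it to the displacement ductility μ that a structural group can supply. This relation is a general reference, not taken from the paper; the paper's argument is precisely that μ, and hence the appropriate reduction factor, differs between the columns and the shear-walls of one frame.

```latex
% Newmark-Hall type ductility-based reduction factor (general reference,
% not the paper's formulation):
\[
R_\mu \;=\;
\begin{cases}
\sqrt{2\mu - 1}, & \text{short-period range (equal-energy rule)}\\[4pt]
\mu, & \text{long-period range (equal-displacement rule)}
\end{cases}
\]
% Columns and shear-walls mobilize different \(\mu\), which is why a single
% system-wide \(R\) misstates the demand on the less ductile group.
```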

Keywords: reinforced concrete structures, earthquake excitation, plasticity coefficients, seismic-force-reduction factor, nonlinear dynamic analysis

Procedia PDF Downloads 206