Search results for: structural equation modeling (SEM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8997

267 High Resolution Satellite Imagery and Lidar Data for Object-Based Tree Species Classification in Quebec, Canada

Authors: Bilel Chalghaf, Mathieu Varin

Abstract:

Forest characterization in Quebec, Canada, is usually assessed based on photo-interpretation at the stand level. For species identification, this often results in a lack of precision. Very high spatial resolution imagery, such as DigitalGlobe, and Light Detection and Ranging (LiDAR) have the potential to overcome the limitations of aerial imagery. To date, few studies have used such data to map a large number of species at the tree level using machine learning techniques. The main objective of this study is to map 11 tall tree species (> 17 m) at the individual-tree level using an object-based approach in the broadleaf forest of Kenauk Nature, Quebec. For individual tree crown segmentation, three canopy-height models (CHMs) from LiDAR data were assessed: 1) the original, 2) a filtered, and 3) a corrected model. The corrected CHM gave the best accuracy and was then coupled with imagery to refine tree species crown identification. When compared with photo-interpretation, 90% of the objects represented a single species. For modeling, 313 variables were derived from 16-band WorldView-3 imagery and LiDAR data, using radiance, reflectance, pixel, and object-based calculation techniques. Variable selection procedures were employed to reduce their number from 313 to 16, using only 11 bands to aid reproducibility. For classification, a global approach using all 11 species was compared to a semi-hierarchical hybrid classification approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. Five model techniques were used: (1) support vector machine (SVM), (2) classification and regression tree (CART), (3) random forest (RF), (4) k-nearest neighbors (k-NN), and (5) linear discriminant analysis (LDA). Each model was tuned separately for all approaches and levels. For the global approach, the best model was the SVM using eight variables (overall accuracy (OA): 80%, Kappa: 0.77). With the semi-hierarchical hybrid approach, at the tree type level, the best model was the k-NN using six variables (OA: 100%, Kappa: 1.00). At the level of identifying broadleaf and conifer species, the best model was the SVM, with OAs of 80% and 97% and Kappa values of 0.74 and 0.97, respectively, using seven variables for both models. This paper demonstrates that a hybrid classification approach gives better results and that using 16-band WorldView-3 with LiDAR data leads to more precise predictions for tree segmentation and classification, especially when the number of tree species is large.
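
A minimal sketch of this kind of five-model comparison using scikit-learn; the feature matrix and labels below are random placeholders standing in for the 16 selected variables and 11 species, not the study's data:

```python
# Sketch only: compares the five classifiers named above and reports
# overall accuracy (OA) and Cohen's Kappa, the two metrics used in the study.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))     # placeholder for the 16 selected variables
y = rng.integers(0, 11, size=300)  # placeholder labels for the 11 species

models = {
    "SVM": SVC(kernel="rbf", C=10, gamma="scale"),
    "CART": DecisionTreeClassifier(max_depth=8),
    "RF": RandomForestClassifier(n_estimators=500),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=5)   # 5-fold cross-validated predictions
    print(f"{name}: OA={accuracy_score(y, pred):.2f}, "
          f"Kappa={cohen_kappa_score(y, pred):.2f}")
```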

Keywords: tree species, object-based, classification, multispectral, machine learning, WorldView-3, LiDAR

Procedia PDF Downloads 134
266 Spatial Mapping of Variations in Groundwater of Taluka Islamkot Thar Using GIS and Field Data

Authors: Imran Aziz Tunio

Abstract:

Islamkot is an underdeveloped sub-district (taluka) in the Tharparkar district of Sindh province, Pakistan, located between latitudes 24°25'19.79"N and 24°47'59.92"N and longitudes 70°1'13.95"E and 70°32'15.11"E. Islamkot has an arid desert climate, and the region is generally devoid of perennial rivers, canals, and streams. It is highly dependent on rainfall, which is not considered a reliable surface water source, so groundwater has been the only key source of water for many centuries. To assess the groundwater potential, an electrical resistivity survey (ERS) was conducted in Islamkot Taluka. Groundwater investigations comprising 128 vertical electrical soundings (VES) were collected to determine the groundwater potential and to obtain layered resistivity parameters both qualitatively and quantitatively. The PASI Model 16 GL-N resistivity meter was used with a Schlumberger electrode configuration, with half current electrode spacing (AB/2) ranging from 1.5 to 100 m and potential electrode spacing (MN/2) from 0.5 to 10 m. The data were acquired with a maximum current electrode spacing of 200 m. Data processing for the delineation of dune sand aquifers involved data inversion, and the interpretation of the inversion results was aided by forward modeling. The measured geo-electrical parameters were examined with Interpex IX1D software, and apparent resistivity curves and synthetic layered model parameters were mapped in the ArcGIS environment using the Inverse Distance Weighting (IDW) interpolation technique. Qualitative interpretation of the VES data shows that the number of geo-electrical layers in the area varies from three to four, with different resistivity values detected. Of the 128 VES model curves, 42 are three-layered and 86 are four-layered. The resistivity of the first subsurface layer (loose surface sand) varied from 16.13 Ωm to 3353.3 Ωm and its thickness from 0.046 m to 17.52 m. The resistivity of the second subsurface layer (semi-consolidated sand) varied from 1.10 Ωm to 7442.8 Ωm and its thickness from 0.30 m to 56.27 m. The resistivity of the third subsurface layer (consolidated sand) varied from 0.00001 Ωm to 3190.8 Ωm and its thickness from 3.26 m to 86.66 m. The resistivity of the fourth subsurface layer (silt and clay) varied from 0.0013 Ωm to 16264 Ωm and its thickness from 13.50 m to 87.68 m. The Dar Zarrouk parameters were: longitudinal unit conductance S from 0.00024 to 19.91 mho; transverse unit resistance T from 7.34 to 40080.63 Ωm²; longitudinal resistivity RS from 1.22 to 3137.10 Ωm; and transverse resistivity RT from 5.84 to 3138.54 Ωm. The ERS data and Dar Zarrouk parameters were mapped, revealing that the study area has groundwater potential in the subsurface.
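
The Dar Zarrouk parameters quoted above follow directly from the interpreted layer resistivities and thicknesses; a short sketch with illustrative layer values (not the survey's data):

```python
# Sketch: Dar Zarrouk parameters from an interpreted VES layer model.
def dar_zarrouk(resistivities, thicknesses):
    """Compute Dar Zarrouk parameters for a layered model (illustrative)."""
    H = sum(thicknesses)                                        # total thickness (m)
    S = sum(h / r for r, h in zip(resistivities, thicknesses))  # longitudinal conductance (mho)
    T = sum(r * h for r, h in zip(resistivities, thicknesses))  # transverse resistance (ohm-m^2)
    return S, T, H / S, T / H  # plus longitudinal and transverse resistivity (ohm-m)

# Three illustrative layers: resistivities (ohm-m) and thicknesses (m)
S, T, RS, RT = dar_zarrouk([120.0, 45.0, 15.0], [5.0, 20.0, 40.0])
print(f"S={S:.4f} mho, T={T:.1f} ohm-m^2, RS={RS:.2f} ohm-m, RT={RT:.2f} ohm-m")
```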

Keywords: electrical resistivity survey, GIS & RS, groundwater potential, environmental assessment, VES

Procedia PDF Downloads 110
265 Modeling Spatio-Temporal Variation in Rainfall Using a Hierarchical Bayesian Regression Model

Authors: Sabyasachi Mukhopadhyay, Joseph Ogutu, Gundula Bartzke, Hans-Peter Piepho

Abstract:

Rainfall is a critical component of climate governing vegetation growth and production, and forage availability and quality for herbivores. However, reliable rainfall measurements are not always available, making it necessary to predict rainfall values for particular locations through time. Predicting rainfall in space and time can be a complex and challenging task, especially where the rain gauge network is sparse and measurements are not recorded consistently for all rain gauges, leading to many missing values. Here, we develop a flexible Bayesian model for predicting rainfall in space and time and apply it to Narok County, situated in southwestern Kenya, using data collected at 23 rain gauges from 1965 to 2015. Narok County encompasses the Maasai Mara ecosystem, the northernmost section of the Mara-Serengeti ecosystem, famous for its diverse and abundant large mammal populations and spectacular migration of enormous herds of wildebeest, zebra and Thomson's gazelle. The model incorporates geographical and meteorological predictor variables, including elevation, distance to Lake Victoria and minimum temperature. We assess the efficiency of the model by comparing it empirically with established Gaussian process, kriging, simple linear and Bayesian linear models. We use the model to predict total monthly rainfall and its standard error for all 5 × 5 km grid cells in Narok County. Using the Monte Carlo integration method, we estimate seasonal and annual rainfall and their standard errors for 29 sub-regions in Narok. Finally, we use the predicted rainfall to predict large herbivore biomass in the Maasai Mara ecosystem on a 5 × 5 km grid for both the wet and dry seasons. We show that herbivore biomass increases with rainfall in both seasons. The model can handle data from a sparse network of observations with many missing values and performs at least as well as or better than four established and widely used models on the Narok data set. The model produces rainfall predictions consistent with expectation and in good agreement with the blended station and satellite rainfall values. The predictions are precise enough for most practical purposes. The model is very general and applicable to other variables besides rainfall.
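
One of the benchmark models mentioned above, a Gaussian process with geographic and meteorological covariates, can be sketched as follows; the gauge covariates and rainfall totals below are placeholders, not the Narok data:

```python
# Sketch: Gaussian-process benchmark of the kind the Bayesian model is compared
# against, assuming gauge coordinates/covariates and monthly totals are at hand.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
# Columns: easting, northing, elevation, distance to Lake Victoria, min temperature
X_gauges = rng.normal(size=(23, 5))       # placeholder covariates for 23 gauges
y_rain = rng.gamma(2.0, 40.0, size=23)    # placeholder monthly totals (mm)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                              normalize_y=True)
gp.fit(X_gauges, y_rain)

X_grid = rng.normal(size=(100, 5))        # placeholder 5 x 5 km grid-cell covariates
mean, sd = gp.predict(X_grid, return_std=True)  # prediction and its standard error
print(mean[:3], sd[:3])
```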

Keywords: non-stationary covariance function, Gaussian process, ungulate biomass, MCMC, Maasai Mara ecosystem

Procedia PDF Downloads 294
264 The Quantum Theory of Music and Human Languages

Authors: Mballa Abanda Luc Aurelien Serge, Henda Gnakate Biba, Kuate Guemo Romaric, Akono Rufine Nicole, Zabotom Yaya Fadel Biba, Petfiang Sidonie, Bella Suzane Jenifer

Abstract:

The main hypotheses proposed around the definition of the syllable and of music, and of the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest of the work: the debate raises questions that are at the heart of theories on language. It is an inventive, original, and innovative research thesis. It is a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to a dialogue between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, machine translation, and artificial intelligence. When you apply this theory to any text of a folk song in a tone language, you piece together not only the exact melody, rhythm, and harmonies of that song, as if you knew it in advance, but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as has one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. With the experimentation confirming the theorization, I designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). The translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody of blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.

Keywords: language, music, sciences, quantum entanglement

Procedia PDF Downloads 77
263 Long-Term Exposure, Health Risk, and Loss of Quality-Adjusted Life Expectancy Assessments for Vinyl Chloride Monomer Workers

Authors: Tzu-Ting Hu, Jung-Der Wang, Ming-Yeng Lin, Jin-Luh Chen, Perng-Jy Tsai

Abstract:

The vinyl chloride monomer (VCM) has been classified as a group 1 (human) carcinogen by the IARC. Exposure to VCM is known to be associated with the development of liver cancer and hence can cause economic and health losses for workers. Workers in the petrochemical industry, in particular, have been of serious concern in the environmental and occupational health field. Because assessing workers' health risks and the resultant economic and health losses requires long-term VCM exposure data for any similar exposure group (SEG) of interest, the development of suitable technologies has become an urgent and important issue. In the present study, VCM exposures for petrochemical industry workers were first determined based on the database of the Workplace Environmental Monitoring Information Systems (WEMIS) provided by Taiwan OSHA. Given the existence of missing data, historical exposure reconstruction techniques were then used to complete the long-term exposure data for SEGs with routine operations. For SEGs with non-routine operations, exposure modeling techniques, together with their time/activity records, were adopted for determining their long-term exposure concentrations. Bayesian decision analysis (BDA) was adopted for conducting exposure and health risk assessments for any given SEG in the petrochemical industry. The resultant excess cancer risk was then used to determine the corresponding loss of quality-adjusted life expectancy (QALE). Results show low average concentrations for SEGs with routine operations (e.g., VCM rectification 0.0973 ppm, polymerization 0.306 ppm, reaction tank 0.33 ppm, VCM recovery 1.4 ppm, control room 0.14 ppm, VCM storage tanks 0.095 ppm and wastewater treatment 0.390 ppm), values much lower than the permissible exposure limit (PEL; 3 ppm) of VCM promulgated in Taiwan. For non-routine workers, despite their high exposure concentrations, their short exposure times and low frequencies result in low corresponding health risks. By considering the exposure assessment, health risk assessment, and QALE results simultaneously, it is concluded that the proposed method is useful for prioritizing SEGs for exposure abatement measures. In particular, the obtained QALE results further indicate the importance of reducing workers' VCM exposures, even though those exposures were low in comparison with the PEL and the acceptable health risk.
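
A heavily simplified sketch of the risk-to-QALE chain described above; the slope factor, exposure schedule and QALE loss per cancer case are assumed illustrative constants, not the study's values:

```python
# Sketch only: lifetime-averaged exposure -> excess cancer risk -> QALE loss.
# All constants below are illustrative assumptions, not the paper's parameters.
def excess_cancer_risk(c_ppm, years_exposed=30, slope_per_ppm=1e-2):
    # Average the occupational exposure (8 h/day, 240 days/yr, assumed) over a
    # 70-year lifetime, then apply an assumed linear slope factor per ppm.
    lifetime_avg = c_ppm * (years_exposed * 8 * 240) / (70 * 365 * 24)
    return lifetime_avg * slope_per_ppm

def qale_loss(risk, qale_loss_per_case=15.0):
    # Expected loss of quality-adjusted life expectancy (years) per worker,
    # assuming 15 QALE years lost per cancer case (illustrative).
    return risk * qale_loss_per_case

for seg, c in {"VCM rectification": 0.0973, "VCM recovery": 1.4}.items():
    r = excess_cancer_risk(c)
    print(f"{seg}: excess risk={r:.2e}, QALE loss={qale_loss(r) * 365:.2f} days")
```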

Keywords: exposure assessment, health risk assessment, petrochemical industry, quality-adjusted life years, vinyl chloride monomer

Procedia PDF Downloads 195
262 In silico Designing of Imidazo [4,5-b] Pyridine as a Probable Lead for Potent Decaprenyl Phosphoryl-β-D-Ribose 2′-Epimerase (DprE1) Inhibitors as Antitubercular Agents

Authors: Jineetkumar Gawad, Chandrakant Bonde

Abstract:

Tuberculosis (TB) is a major worldwide concern whose control has been exacerbated by HIV and the rise of multidrug-resistant (MDR-TB) and extensively drug-resistant (XDR-TB) strains of Mycobacterium tuberculosis. The interest in newer and faster-acting antitubercular drugs is greater than ever. The search for potent compounds is both a need and a challenge for researchers. Here, we tried to design a lead for inhibition of the decaprenylphosphoryl-β-D-ribose 2′-epimerase (DprE1) enzyme. Arabinose is an essential constituent of the mycobacterial cell wall. DprE1 is a flavoenzyme that converts decaprenylphosphoryl-D-ribose into decaprenylphosphoryl-2-keto-ribose, an intermediate in the biosynthetic pathway of arabinose; DprE2 then converts the keto-ribose into decaprenylphosphoryl-D-arabinose. We selected 23 compounds from the azaindole series for the computational study, drawn using MarvinSketch. Ligands were prepared using the Maestro molecular modeling interface, Schrodinger, v10.5. Common pharmacophore hypotheses were developed by applying dataset thresholds to yield active and inactive sets of compounds; a total of 326 hypotheses were developed. On the basis of survival score, ADRRR (survival score: 5.453) was selected. The selected pharmacophore hypotheses were subjected to virtual screening, resulting in 1,000 hits. Hits were prepared and docked with protein 4KW5 (oxidoreductase inhibitor), downloaded in .pdb format from the RCSB Protein Data Bank. The protein was prepared using the protein preparation wizard: it was preprocessed, and the workspace was analyzed using the OPLS 2005 force field. The Glide grid was generated by picking a single atom in the molecule. Prepared ligands were docked with the prepared 4KW5 protein using Glide docking. After docking, the top five compounds were selected on the basis of Glide score (5223, 5812, 0661, 0662, and 2945, with Glide docking scores of -8.928, -8.534, -8.412, -8.411, and -8.351, respectively). Ligand-protein interactions were observed, specifically with HIS 132, LYS 418, TRY 230, and ASN 385. Pi-pi stacking was observed in a few compounds with the basic imidazo[4,5-b]pyridine ring. We had a basic azaindole ring in the parent compounds, but after Glide docking, we obtained compounds with imidazo[4,5-b]pyridine as the basic ring. That might be the new lead in the process of drug discovery.

Keywords: DprE1 inhibitors, in silico drug designing, imidazo[4,5-b]pyridine, lead, tuberculosis

Procedia PDF Downloads 154
261 Modeling of the Fermentation Process of Enzymatically Extracted Annona muricata L. Juice

Authors: Calister Wingang Makebe, Wilson Agwanande Ambindei, Zangue Steve Carly Desobgo, Abraham Billu, Emmanuel Jong Nso, P. Nisha

Abstract:

Traditional liquid-state fermentation of Annona muricata L. juice can result in fluctuating product quality and quantity due to difficulties in control and scale-up. This work describes a laboratory-scale batch fermentation process to produce a probiotic Annona muricata L. enzymatically extracted juice, which was modeled using the Doehlert design with the independent extraction factors being incubation time, temperature, and enzyme concentration. It aimed at a better understanding of the traditional process as an initial step for future optimization. Annona muricata L. juice was fermented with L. acidophilus (NCDC 291) (LA), L. casei (NCDC 17) (LC), and a blend of LA and LC (LCA) for 72 h at 37 °C. Experimental data were fitted to mathematical models (Monod, logistic, and Luedeking-Piret models) using MATLAB software to describe biomass growth, sugar utilization, and organic acid production. The optimal fermentation time was obtained based on cell viability: 24 h for LC and 36 h for LA and LCA. The model was particularly effective in estimating biomass growth, reducing sugar consumption, and lactic acid production. The values of the coefficient of determination, R², were 0.9946, 0.9913 and 0.9946, while the residual sums of squared errors, SSE, were 0.2876, 0.1738 and 0.1589 for LC, LA and LCA, respectively. The growth kinetic parameters included the maximum specific growth rate, µm, which was 0.2876 h⁻¹, 0.1738 h⁻¹ and 0.1589 h⁻¹, and the substrate saturation constant, Ks, which was 9.0680 g/L, 9.9337 g/L and 9.0709 g/L, respectively, for LC, LA and LCA. For the stoichiometric parameters, the yield of biomass on utilized substrate (YXS) was 50.7932, 3.3940 and 61.0202, and the yield of product on utilized substrate (YPS) was 2.4524, 0.2307 and 0.7415 for LC, LA, and LCA, respectively. In addition, the maintenance energy parameter (ms) was 0.0128, 0.0001 and 0.0004 for LC, LA and LCA, respectively. With the kinetic model proposed by Luedeking and Piret for the lactic acid production rate, the growth-associated and non-growth-associated coefficients were determined as 1.0028 and 0.0109, respectively. The model was demonstrated for batch growth of LA, LC, and LCA in Annona muricata L. juice. The present investigation validates the potential of an Annona muricata L.-based medium for the economical production of a probiotic medium.
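
A sketch of the kinetic system described above (Monod growth, substrate consumption with maintenance, and Luedeking-Piret product formation), integrated with SciPy using the LC parameter estimates quoted in the abstract and assumed initial conditions:

```python
# Sketch: batch fermentation kinetics; parameters are the LC estimates above,
# initial conditions (0.1 g/L biomass, 20 g/L sugar, no product) are assumed.
from scipy.integrate import solve_ivp

mu_max, Ks = 0.2876, 9.0680   # h^-1, g/L
Yxs, ms = 50.7932, 0.0128     # biomass yield and maintenance coefficient
alpha, beta = 1.0028, 0.0109  # Luedeking-Piret coefficients

def batch(t, z):
    X, S, P = z
    S = max(S, 0.0)                               # guard against overshoot below zero
    mu = mu_max * S / (Ks + S)                    # Monod specific growth rate
    dX = mu * X
    dS = -dX / Yxs - (ms * X if S > 0 else 0.0)   # consumption plus maintenance
    dP = alpha * dX + beta * X                    # growth- and non-growth-associated
    return [dX, dS, dP]

sol = solve_ivp(batch, (0, 72), [0.1, 20.0, 0.0], max_step=0.1)
X, S, P = sol.y[:, -1]
print(f"after 72 h: biomass={X:.1f}, sugar={S:.1f}, lactic acid={P:.1f} g/L")
```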

Keywords: L. acidophilus, L. casei, fermentation, modelling, kinetics

Procedia PDF Downloads 68
260 Predicting Personality and Psychological Distress Using Natural Language Processing

Authors: Jihee Jang, Seowon Yoon, Gaeun Son, Minjung Kang, Joon Yeon Choeh, Kee-Hong Choi

Abstract:

Background: Self-report multiple-choice questionnaires have been widely utilized to quantitatively measure one's personality and psychological constructs. Despite several strengths (e.g., brevity and utility), self-report multiple-choice questionnaires have considerable limitations in nature. With the rise of machine learning (ML) and natural language processing (NLP), researchers in the field of psychology are widely adopting NLP to assess psychological constructs and predict human behaviors. However, there is a lack of connection between the work being performed in computer science and that in psychology, due to small data sets and unvalidated modeling practices. Aims: The current article introduces the study method and procedure of phase II, which includes the interview questions for the five-factor model (FFM) of personality developed in phase I. This study aims to develop the semi-structured interview and open-ended questions for the FFM-based personality assessments, designed with experts in the field of clinical and personality psychology (phase 1), and to collect personality-related text data using the interview questions and self-report measures of personality and psychological distress (phase 2). The purpose of the study includes examining the relationship between the natural language data obtained from the interview questions, the measured FFM personality constructs, and psychological distress, to demonstrate the validity of natural language-based personality prediction. Methods: The phase I (pilot) study was conducted on fifty-nine native Korean adults to acquire personality-related text data from the semi-structured interview and open-ended questions based on the FFM of personality. The interview questions were revised and finalized with feedback from an external expert committee consisting of personality and clinical psychologists. Based on the established interview questions, a total of 425 Korean adults were recruited using a convenience sampling method via an online survey. The text data collected from the interviews were analyzed using natural language processing. The results of the online survey, including demographic data, depression, anxiety, and personality inventories, were analyzed together in the model to predict individuals' FFM of personality and level of psychological distress (phase 2).
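
A minimal sketch of the text-to-trait validation step, assuming interview transcripts and self-report trait scores are available; the four answers and extraversion scores below are invented:

```python
# Sketch: predict a self-report trait score from interview text with a simple
# TF-IDF + ridge-regression pipeline; data are placeholders, not the study's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

answers = [
    "I enjoy meeting new people and trying unfamiliar things.",
    "I prefer quiet evenings and a predictable routine.",
    "Deadlines stress me out, but I always plan ahead.",
    "I often worry about how others perceive me.",
]
extraversion = [4.5, 1.8, 3.1, 2.2]   # invented self-report scores

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
scores = cross_val_score(model, answers, extraversion, cv=2,
                         scoring="neg_mean_absolute_error")
print("cross-validated MAE:", -scores.mean())
```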

Keywords: personality prediction, psychological distress prediction, natural language processing, machine learning, the five-factor model of personality

Procedia PDF Downloads 79
259 Numerical Reproduction of Hemodynamic Change Induced by Acupuncture to ST-36

Authors: Takuya Suzuki, Atsushi Shirai, Takashi Seki

Abstract:

Acupuncture therapy is one of the treatments in traditional Chinese medicine. Recently, some reports have shown the effectiveness of acupuncture. However, its full acceptance has been hindered by the lack of understanding of the mechanism of the therapy. Acupuncture applied to Zusanli (ST-36) enhances blood flow volume in the superior mesenteric artery (SMA); the blood flow of the SMA is regulated by peripheral vascular resistance, dominated by the parasympathetic system with inhibition of the sympathetic system. In this study, a lumped-parameter approximation model of blood flow in the systemic arteries was developed. This model was extremely simple, consisting of the aorta, the carotid arteries, the arteries of the four limbs and the SMA, together with their peripheral vascular resistances. Each artery was simplified to a tapered tube, and each peripheral resistance was modelled as a linear resistance. We numerically investigated the contribution of the peripheral vascular resistance of the SMA to the systemic blood distribution using this model. At the upstream end of the model, which corresponds to the left ventricle, two types of boundary condition were applied: mean left ventricular pressure, which correlates with blood pressure (BP), and mean cardiac output, which corresponds to the cardiac index (CI). We attempted to reproduce the experimentally obtained hemodynamic change, in terms of the ratios of the aforementioned hemodynamic parameters to their initial values before acupuncture, by regulating the peripheral vascular resistances and the upstream boundary condition. First, only the peripheral vascular resistance of the SMA was changed, to show the contribution of that resistance to the change in blood flow volume in the SMA, in the expectation of reproducing the experimentally obtained change. It was found, however, that this was not enough to reproduce the experimental result. We therefore also changed the resistances of the other arteries, simultaneously and by the same amount, together with the value given at the upstream boundary. Consequently, we successfully reproduced the hemodynamic change and found that regulation of the upstream boundary condition toward the value experimentally obtained after stimulation is necessary for the reproduction, even though statistically significant changes in BP and CI were not observed in the experiment. It is generally known that sympathetic and parasympathetic tones take part in the regulation of the whole systemic circulation, including cardiac function. The present result indicates that stimulation of ST-36 could induce vasodilation of the peripheral circulation of the SMA and vasoconstriction of that of the other arteries. In addition, it implies that the experimentally observed small changes in BP and CI induced by acupuncture may be involved in the therapeutic response.
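
The core flow-distribution logic of such a lumped-parameter model reduces to parallel peripheral resistances fed by a common mean pressure; a sketch with illustrative values, not the study's fitted parameters:

```python
# Sketch: parallel-branch flow distribution, Q_i = (P - P_venous) / R_i with
# P_venous taken as zero. Pressures and resistances below are illustrative.
def flows(pressure, resistances):
    return {name: pressure / R for name, R in resistances.items()}

P = 93.0  # mean arterial pressure, mmHg (assumed)
before = {"SMA": 12.0, "carotid": 8.0, "limbs": 6.0}   # mmHg*min/L, assumed
after = {"SMA": 9.0, "carotid": 8.8, "limbs": 6.6}     # SMA dilated, others constricted

q0, q1 = flows(P, before), flows(P, after)
print(f"SMA flow ratio after/before stimulation: {q1['SMA'] / q0['SMA']:.2f}")
```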

Keywords: acupuncture, hemodynamics, lumped-parameter approximation, modeling, systemic vascular resistance

Procedia PDF Downloads 224
258 Sphere in Cube Grid Approach to Modelling of Shale Gas Production Using Non-Linear Flow Mechanisms

Authors: Dhruvit S. Berawala, Jann R. Ursin, Obrad Slijepcevic

Abstract:

Shale gas is one of the most rapidly growing forms of natural gas. Unconventional natural gas deposits are difficult to characterize overall, but in general they are lower in resource concentration and dispersed over large areas. Moreover, gas is densely packed into the matrix through adsorption, which accounts for a large volume of gas reserves. Gas production from tight shale deposits is made possible by extensive and deep well fracturing, which contacts large fractions of the formation. Conventional reservoir modelling and production forecasting methods, which rely on fluid-flow processes dominated by viscous forces, have proved to be very pessimistic and inaccurate. This paper presents a new approach to forecasting shale gas production by detailed modeling of gas desorption, diffusion and non-linear flow mechanisms, in combination with a statistical representation of these processes. The model is represented by a cube as a porous medium where free gas is present and a sphere inside it (SiC: Sphere-in-Cube model) where gas is adsorbed onto the kerogen or organic matter. The sphere is considered to consist of many layers of adsorbed gas in an onion-like structure. With pressure decline, the gas desorbs first from the outermost layer of the sphere, causing a decrease in its molecular concentration. The newly available surface area and the change in concentration trigger the diffusion of gas from the kerogen. The process continues until all the gas present internally diffuses out of the kerogen, adsorbs onto the available surface area, and then desorbs into the nanopores and micro-fractures in the cube. Each SiC idealizes a gas pathway and is characterized by the sphere diameter and the length of the cube. The diameter allows gas storage, diffusion and desorption to be modeled; the cube length accounts for the flow pathway in nanopores and micro-fractures. Many of these representative but general cells of the reservoir are put together and linked to a well or hydraulic fracture. The paper quantitatively describes these processes and clarifies the geological conditions under which successful shale gas production could be expected. A numerical model has been derived and implemented in FORTRAN to develop a simulator for shale gas production, with the spheres treated as a source term in each of the grid blocks. By applying SiC to field data, we demonstrate that the model provides an effective way to quickly assess gas production rates from shale formations. We also examine the effect of model input properties on gas production.
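
The desorption side of the SiC cell can be sketched with the standard Langmuir isotherm, tracking gas released from the kerogen as pressure declines; the Langmuir constants and pressure path below are assumed for illustration:

```python
# Sketch: cumulative gas desorbed per ton of rock along a pressure-decline path,
# using the standard Langmuir isotherm V(p) = V_L * p / (p_L + p).
import numpy as np

V_L, p_L = 3.0, 4.5   # Langmuir volume (scm/ton) and pressure (MPa), assumed

def adsorbed(p):
    return V_L * p / (p_L + p)   # Langmuir isotherm

p = np.linspace(25.0, 2.0, 10)            # assumed reservoir pressure path (MPa)
released = adsorbed(p[0]) - adsorbed(p)   # cumulative desorbed gas per ton
for pi, vi in zip(p, released):
    print(f"p={pi:5.1f} MPa  desorbed={vi:.3f} scm/ton")
```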

Keywords: adsorption, diffusion, non-linear flow, shale gas production

Procedia PDF Downloads 165
257 Numerical Modeling and Experimental Analysis of a Pallet Isolation Device to Protect Selective Type Industrial Storage Racks

Authors: Marcelo Sanhueza Cartes, Nelson Maureira Carsalade

Abstract:

This research evaluates the effectiveness of a pallet isolation device for the protection of selective-type industrial storage racks. The device works only in the longitudinal direction of the aisle and is made up of a platform installed on the rack beams. At both ends, the platform is connected to the rack structure by a spring-damper system working in parallel. A system of wheels is arranged between the isolation platform and the rack beams in order to reduce friction, decouple the movement and improve the effectiveness of the device. The latter is evaluated by the reduction of the maximum dynamic responses of base shear load and story drift relative to those of the same rack with the traditional construction system. In the first stage, numerical simulations of industrial storage racks were carried out with and without the pallet isolation device. The numerical results allowed us to identify the archetypes for which experimental tests would be most appropriate, thus limiting the number of trials. In the second stage, experimental tests were carried out on a shaking table on a select group of full-scale racks with and without the proposed device. The motion applied by the shaking table was based on the Mw 8.8 earthquake of February 27, 2010, in Chile, recorded at the San Pedro de la Paz station. The peak ground acceleration (PGA) was scaled in the frequency domain to fit its response spectrum to the design spectrum of NCh433. The experimental setup included sensors to measure relative displacement and absolute acceleration. The movement of the shaking table with respect to the ground, the inter-story drift of the rack, and the motion of the pallets with respect to the rack structure were recorded. Accelerometers redundantly measured all of the above in order to corroborate the measurements and adequately capture both low- and high-frequency vibrations, for which displacement and acceleration sensors are, respectively, more reliable. The numerical and experimental results allowed us to identify the pallet isolation period as the variable with the greatest influence on the dynamic responses considered. It was also possible to establish that the proposed device significantly reduces both the base shear and the maximum inter-story drift, by up to one order of magnitude.
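
The isolation principle can be sketched as a single-degree-of-freedom pallet on a spring-damper platform under base excitation; the ground motion and device parameters below are illustrative, not the scaled 2010 record or the tested hardware:

```python
# Sketch: SDOF pallet-on-isolator response under a toy harmonic base motion,
# compared with the inertia force a rigidly attached pallet would transmit.
import numpy as np
from scipy.integrate import solve_ivp

m, T_iso, zeta = 800.0, 2.0, 0.15    # pallet mass (kg), isolation period (s), damping (assumed)
w = 2 * np.pi / T_iso
k, c = m * w**2, 2 * zeta * m * w    # equivalent spring and damper constants

def ground_acc(t):
    return 3.0 * np.sin(2 * np.pi * 1.5 * t)   # toy 1.5 Hz excitation, 3 m/s^2 peak

def sdof(t, z):
    x, v = z                                    # motion relative to the rack
    return [v, -2 * zeta * w * v - w**2 * x - ground_acc(t)]

sol = solve_ivp(sdof, (0, 20), [0.0, 0.0], max_step=0.01)
force = k * sol.y[0] + c * sol.y[1]             # spring + damper force through device
print(f"peak device force: {np.abs(force).max():.0f} N "
      f"vs rigid pallet inertia force: {m * 3.0:.0f} N")
```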

Keywords: pallet isolation system, industrial storage racks, base shear load, inter-story drift

Procedia PDF Downloads 73
256 Impact Evaluation and Technical Efficiency in Ethiopia: Correcting for Selectivity Bias in Stochastic Frontier Analysis

Authors: Tefera Kebede Leyu

Abstract:

The purpose of this study was to estimate the impact of LIVES project participation on the level of technical efficiency of farm households in three regions of Ethiopia. We used household-level data gathered by ILRI between February and April 2014 for the year 2013 (retrospective). Data on 1,905 sample households (754 intervention and 1,151 control) were analyzed using the Stata software package, version 14. Efforts were made to combine stochastic frontier modeling with impact evaluation methodology, using the Heckman (1979) two-stage model to deal with possible selectivity bias arising from unobservable characteristics in the stochastic frontier model. Results indicate that farmers in the two groups are not efficient and operate below their potential frontiers; i.e., there is potential to increase crop productivity through efficiency improvements in both groups. In addition, the empirical results revealed selection bias in both groups of farmers, confirming the justification for the use of the selection-bias-corrected stochastic frontier model. It was also found that intervention farmers achieved higher technical efficiency scores than the control group of farmers. Furthermore, the selectivity-bias-corrected model showed a different technical efficiency score for the intervention farmers, while it remained more or less the same for the control group farmers. However, the control group of farmers shows a higher dispersion, as measured by the coefficient of variation, compared to their intervention counterparts. Among the explanatory variables, the study found that farmer's age (a proxy for farm experience), land certification, frequency of visits to improved seed centers, farmer's education and row planting are important contributing factors to participation decisions and hence to the technical efficiency of farmers in the study areas. We recommend that policies targeting the design of development intervention programs in the agricultural sector focus more on providing farmers with on-farm visits by extension workers, provision of credit services, establishment of farmers' training centers and adoption of modern farm technologies. Finally, we recommend further research using a panel data set to test whether technical efficiency increases or decreases with the length of time that farmers participate in development programs.
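
A sketch of the Heckman (1979) two-step correction on synthetic data; in the paper the correction term enters a stochastic frontier model, whereas the second stage below is plain OLS for brevity:

```python
# Sketch: probit participation equation, inverse Mills ratio, corrected outcome
# equation. All data are synthetic; the shared error u creates the selection bias.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 1905
Z = sm.add_constant(rng.normal(size=(n, 3)))            # participation determinants
u = rng.normal(size=n)                                  # unobservable shared error
participate = (Z @ np.array([0.2, 0.5, -0.3, 0.4]) + u > 0).astype(int)

# Step 1: probit for programme participation, then the inverse Mills ratio
probit = sm.Probit(participate, Z).fit(disp=False)
xb = Z @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: outcome equation for participants, augmented with the IMR term
X = sm.add_constant(rng.normal(size=(n, 2)))
y = X @ np.array([1.0, 0.8, -0.2]) + 0.5 * u + rng.normal(size=n)
mask = participate == 1
ols = sm.OLS(y[mask], np.column_stack([X[mask], imr[mask]])).fit()
print(ols.params)   # the last coefficient captures the selectivity-bias term
```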

Keywords: impact evaluation, efficiency analysis and selection bias, stochastic frontier model, Heckman two-step

Procedia PDF Downloads 75
255 Modeling and Implementation of a Hierarchical Safety Controller for Human Machine Collaboration

Authors: Damtew Samson Zerihun

Abstract:

This paper primarily describes the concept of a hierarchical safety control (HSC) in discrete manufacturing to uphold productivity in the presence of human intervention and machine failures using a systematic approach, by increasing system availability and using additional knowledge of the machines so as to improve human-machine collaboration (HMC). It also highlights the implemented PLC safety algorithm, applying this generic concept to a concrete production line using a lab demonstrator called FATIE (Factory Automation Test and Integration Environment). Furthermore, the paper describes a model and provides a systematic representation of human-machine collaboration in discrete manufacturing; to this end, the hierarchical safety control concept is proposed. This offers a generic description of human-machine collaboration based on finite state machines (FSM) that can be applied to various discrete manufacturing lines instead of using ad-hoc solutions for each line. With its reusability, flexibility, and extendibility, the hierarchical safety control scheme allows productivity to be upheld while maintaining safety, with reduced engineering effort compared to existing solutions. The approach begins with a partitioning of different zones around the Integrated Manufacturing System (IMS), defined by operator tasks and the risk assessment; these zones describe the location of the human operator and are thus used to identify the related potential hazards and trigger the corresponding safety functions to mitigate them. This includes selective reduced-speed zones and stop zones; in addition, within the hierarchical safety control scheme, advanced safety functions such as safe standstill and safe reduced speed are used to achieve the main goals of improving safe human-machine collaboration and increasing productivity. In a sample scenario, it is shown that a productivity increase on the order of 2.5% is already possible with a hierarchical safety control, and consequently, under the given assumptions, a total of €213 could be saved for each intervention compared to a protective stop reaction. The loss is thereby reduced by 22.8% if an occasional hazard can be refined in a hierarchical way. Furthermore, production downtime due to temporary unavailability of safety devices can be avoided with a safety failover, which can save millions per year. Moreover, the paper highlights the development, implementation and application of the concept on the lab demonstrator (FATIE), where it is realized on the new safety PLCs, drive units, HMI and safety devices, in addition to the main components of the IMS.
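
The zone-based reaction logic can be sketched as a small finite state machine; the zone names and speed values below are illustrative, not the FATIE configuration:

```python
# Sketch: hierarchical safety reactions keyed to the zone the operator occupies,
# ordered from least to most restrictive. Zones and speeds are assumptions.
from enum import Enum

class Zone(Enum):
    OUTSIDE = 0
    REDUCED_SPEED = 1
    STOP = 2

REACTIONS = {
    Zone.OUTSIDE: ("full speed", 100),
    Zone.REDUCED_SPEED: ("safe reduced speed", 25),
    Zone.STOP: ("safe standstill", 0),
}

def safety_reaction(operator_zone: Zone) -> tuple[str, int]:
    """Return the machine reaction for the given operator zone."""
    return REACTIONS[operator_zone]

for zone in Zone:
    mode, speed = safety_reaction(zone)
    print(f"{zone.name:14s} -> {mode} ({speed}% speed)")
```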

Keywords: discrete automation, hierarchical safety controller, human machine collaboration, programmable logical controller

Procedia PDF Downloads 369
254 Understanding Neuronal and Glial Cell Behaviour in Multi-Layer Nanofibre Systems to Support the Development of an in vitro Model of Spinal Cord Injury and Personalised Prostheses for Repair

Authors: H. Pegram, R. Stevens, L. De Girolamo

Abstract:

Aligned electrospun nanofibres act as effective neuronal and glial cell scaffolds that can be layered to contain multiple sheets harboring different cell populations. This allows personalised biofunctional prostheses to be manufactured with both acellular and cellularised layers for the treatment of spinal cord injury. Additionally, the manufacturing route may be configured to produce an in-vitro 3D cell-based model of spinal cord injury to aid drug development and enhance prosthesis performance. The goal of this investigation was to optimise the multi-layer scaffold design parameters for prosthesis manufacture, to enable the development of multi-layer patient-specific implant therapies. The work has also focused on fabricating aligned nanofibre scaffolds that promote in-vitro neuronal and glial cell population growth, cell-to-cell interaction and long-term survival following trauma, to mimic an in-vivo spinal cord lesion. The approach has established reproducible lesions and has identified markers of trauma and regeneration, marked by effective neuronal migration across the lesion with glial support. The investigation has advanced the development of an in-vitro model of traumatic spinal cord injury and has identified a route to manufacture prostheses which target the repair of spinal cord injury. Evidence collated in investigating the multi-layer concept suggests that the physical cues provided by nanofibres both provide a natural extracellular matrix (ECM)-like environment and control cell proliferation and migration. Specifically, aligned nanofibre layers act as a guidance system for migrating and elongating neurons. On a larger scale, material type in multi-layer systems also has an influence on inter-layer migration, as cell types favour different materials. Results have shown that layering nanofibre membranes creates a multi-level scaffold system which can enhance or prohibit cell migration between layers. It is hypothesised that modifying the nanofibre layer material permits control over neuronal/glial cell migration. Using this concept, layering of neuronal and glial cells has become possible, in the context of tissue engineering and also of modelling in-vitro induced lesions.

Keywords: electrospinning, layering, lesion, modeling, nanofibre

Procedia PDF Downloads 138
253 Walking across the Government of Egypt: A Single Country Comparative Study of the Past and Current Condition of the Government of Egypt

Authors: Homyr L. Garcia, Jr., Anne Margaret A. Rendon, Carla Michaela B. Taguinod

Abstract:

Nothing is constant in this world but change. This is a reality that many people fail to recognize, perhaps because some see the things that are happening as having little or no value until they are gone. For many years, Egypt was known for its stable government, which was able to withstand problems and crises that challenged the country in ways that could scarcely be imagined. In the present time, it seems that, in just a snap of a finger, that stability vanished and was immediately replaced by a crisis which resulted in failures in parts of the government. This problem has continued to worsen, and the current situation of Egypt is a reflection of it. On the other hand, as the researchers continued to study the reasons why the government of Egypt is unstable, they concluded that they might be able to propose ways in which the country could be helped or improved. The instability of the government of Egypt is the product of the combination of problems which affect the lives of the people. Some of the reasons that the researchers found are the following: 1) unending doubts of the people regarding the ruling capacity of elected presidents, 2) the removal of President Mohamed Morsi from office, 3) economic crisis, 4) numerous protests and revolutions, 5) the resignation of the long-term President Hosni Mubarak, and 6) the office of the President being most likely available only to a chosen successor. Also, according to previous research, there are two plausible scenarios for the instability of Egypt: 1) a military intervention, specifically by the Supreme Council of the Armed Forces or SCAF, resulting from a contested succession, and 2) an Islamist push for political power, which highlights the claim that religion is a hindrance to the development of the country and its government. From these possible reasons, the researchers decided to focus on the economic crisis, since the instability is most clearly seen in the country's economy, which directly affects the people and the government itself. In addition, they formulated a hypothesis which states that a stable economy is a prerequisite for a stable government. If they are able to show that this claim is true by using the Social Autopsy Research Design for the qualitative method and Pearson's correlation coefficient for the quantitative method, the researchers might be able to produce a proposal on how Egypt can stabilize its government and avoid such problems. The hypothesis is based on Rational Action Theory, a theory for understanding and modeling social, economic and individual behavior.

Keywords: Pearson’s correlation coefficient, rational action theory, social autopsy research design, supreme council of the armed forces (SCAF)

Procedia PDF Downloads 409
252 Integration of Technology into Nursing Education: A Collaboration between College of Nursing and University Research Center

Authors: Lori Lioce, Gary Maddux, Norven Goddard, Ishella Fogle, Bernard Schroer

Abstract:

This paper presents the integration of technologies into nursing education. The collaborative effort includes the College of Nursing (CoN) at the University of Alabama in Huntsville (UAH) and the UAH Systems Management and Production Center (SMAP). The faculty at the CoN conducts needs assessments to identify education and training requirements. A team of CoN faculty and SMAP engineers then prioritizes these requirements and establishes improvement/development teams. The development teams consist of nurses, who evaluate the models and provide feedback, and of undergraduate engineering students and their senior staff mentors from SMAP. The SMAP engineering staff develops and creates the physical models using 3D printing, silicone molds and specialized molding mixtures and techniques. The collaboration has focused on developing teaching and training, or clinical, simulators. In addition, the onset of the Covid-19 pandemic intensified this relationship, as 3D modeling shifted to supplying personal protective equipment (PPE) to local health care providers. A secondary collaboration has been introducing students to clinical benchmarking through the UAH Center for Management and Economic Research. As a result of these successful collaborations, the Model Exchange & Development of Nursing & Engineering Technology (MEDNET) has been established. MEDNET seeks to extend and expand the linkage between engineering and nursing to K-12 schools, technical schools and medical facilities in the region, drawing on the resources available from the CoN and SMAP. As an example, stereolithography (STL) files of the 3D printed models, along with the specifications to fabricate the models, are available on the MEDNET website. Ten 3D printed models have been developed and are currently in use by the CoN. The following additional training simulators are currently under development: 1) suture pads, 2) gelatin wound models and 3) printed wound tattoos. Specification sheets have been written for these simulators that describe their use, fabrication procedures and parts lists; these specifications are available for viewing and download on MEDNET. Included in this paper are 1) descriptions of the CoN, SMAP and MEDNET, 2) the collaborative process used in product improvement/development, 3) 3D printed models of training and teaching simulators, 4) training simulators under development, with specification sheets, 5) family care practice benchmarking, 6) integration of the simulators into the nursing curriculum, 7) use of MEDNET as a pandemic response, and 8) conclusions and lessons learned.

Keywords: 3D printing, nursing education, simulation, trainers

Procedia PDF Downloads 122
251 Semi-Empirical Modeling of Heat Inactivation of Enterococci and Clostridia During the Hygienisation in Anaerobic Digestion Process

Authors: Jihane Saad, Thomas Lendormi, Caroline Le Marechal, Anne-marie Pourcher, Céline Druilhe, Jean-louis Lanoiselle

Abstract:

Agricultural anaerobic digestion consists of the conversion of animal slurry and manure into biogas and digestate. According to European regulations (EC No 1069/2009 and EU No 142/2011), however, these materials must be treated at 70 ºC for 60 min before anaerobic digestion. The impact of such heat treatment on the fate of bacteria has been poorly studied up to now. Moreover, a recent study¹ has shown that enterococci and clostridia are still detected despite the application of such thermal treatment, questioning the relevance of this approach for the hygienisation of digestate. The aim of this study is to establish the heat inactivation kinetics of two species of enterococci (Enterococcus faecalis and Enterococcus faecium) and two species of clostridia (Clostridioides difficile and Clostridium novyi, as a non-toxic model for Clostridium botulinum of group III). A pure culture of each strain was prepared in a specific sterile medium at concentrations of 10⁴-10⁷ MPN/mL (most probable number), depending on the bacterial species. Bacterial suspensions were then filled into sterilized capillary tubes and placed in a water or oil bath at the desired temperature for a specific period of time. Each bacterial suspension was enumerated using an MPN approach, and tests were repeated three times for each temperature/time couple. The inactivation kinetics of the four indicator bacteria are described using the Weibull model and the classical Bigelow model of first-order kinetics. The Weibull model takes biological variation with respect to thermal inactivation into account and is basically a statistical model of the distribution of inactivation times, with the classical first-order approach being a special case of the Weibull model. Heat treatment at 70 ºC / 60 min achieves a reduction greater than 5 log10 for E. faecium and E. faecalis. However, it results only in a reduction of about 0.7 log10 for C. difficile and an increase of 0.5 log10 for C. novyi. Treatments at higher temperatures are required to reach a reduction greater than or equal to 3 log10 for C. novyi (such as 30 min / 100 ºC, 13 min / 105 ºC, 3 min / 110 ºC, and 1 min / 115 ºC), raising the question of the relevance of the 70 ºC / 60 min heat treatment for these spore-forming bacteria. To conclude, the heat treatment (70 ºC / 60 min) defined by the European regulation is sufficient to inactivate non-sporulating bacteria, but higher temperatures (> 100 ºC) are required for spore-forming bacteria to reach a 3 log10 reduction (sporicidal activity).
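
A sketch of fitting the Weibull inactivation model, log10(N/N0) = -(t/delta)^p, to a survival curve with SciPy; the time/reduction data below are invented for illustration:

```python
# Sketch: nonlinear fit of the Weibull inactivation model to invented data.
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_survival(t, delta, p):
    return -(t / delta) ** p

t = np.array([2.0, 5.0, 10.0, 20.0, 30.0, 45.0, 60.0])          # minutes at 70 C
log_red = np.array([-0.3, -0.8, -1.6, -2.9, -3.8, -4.7, -5.4])  # log10(N/N0), invented

(delta, p), _ = curve_fit(weibull_log_survival, t, log_red,
                          p0=(10.0, 1.0), bounds=(0, np.inf))
print(f"delta = {delta:.1f} min (first-decimal-reduction time), p = {p:.2f}")
# p = 1 recovers the classical Bigelow first-order model with D = delta
```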

Keywords: heat treatment, enterococci, clostridia, inactivation kinetics

Procedia PDF Downloads 113
250 Progressive Damage Analysis of Mechanically Connected Composites

Authors: Şeyma Saliha Fidan, Ozgur Serin, Ata Mugan

Abstract:

When performing verification analyses for the static and dynamic loads to which composite structures used in aviation are exposed, it is necessary to obtain the bearing strength limit value for mechanically connected composite structures. For this purpose, various tests are carried out in accordance with aviation standards. Many companies around the world perform these tests in accordance with aviation standards, but the test costs are very high. In addition, because of the need to produce coupons, the high cost of coupon materials, and the long test times, it is desirable to simulate these tests on the computer. To this end, various test coupons were produced using the reinforcement and alignment angles of the composite radomes integrated into the aircraft. Glass-fiber-reinforced and quartz prepregs were used in the production of the coupons. The tests performed according to the American Society for Testing and Materials (ASTM) D5961 Procedure C standard were simulated on the computer. The analysis model was created in three dimensions in order to model the bolt-hole contact surface realistically and obtain the exact bearing strength value. The finite element analysis was carried out with ANSYS. Since a physical fracture cannot occur in analyses carried out in a virtual environment, a hypothetical failure is realized by reducing the material properties. The material property reduction coefficient was set to 10%, which is stated in the literature to give the most realistic results. There are various theories within this method, which is called progressive failure analysis. Because the Hashin theory did not match our experimental results, the Puck progressive damage method was used in all coupon analyses. When the experimental and numerical results are compared, the initial damage points and resulting force drops, the maximum damage load values, and the bearing strength values are very close. Furthermore, low error rates and similar damage patterns were obtained in both the test and simulation models. In addition, the effects of various parameters, such as pre-stress, the use of bushings, the ratio of the distance between the bolt hole center and the plate edge to the hole diameter (E/D), the ratio of plate width to hole diameter (W/D), and hot-wet environmental conditions, were investigated with respect to the bearing strength of the composite structure.
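
The ply-discount idea behind progressive failure analysis can be sketched as follows; the equal-strain load sharing and the simple stress-ratio criterion are schematic stand-ins for the Puck criteria, and the laminate data are invented:

```python
# Sketch: step the load up, degrade a ply's stiffness to 10% once its stress
# ratio reaches 1, and let the remaining plies pick up the redistributed load.
def failure_index(stress, strength):
    return abs(stress) / strength          # schematic stand-in for the Puck criteria

def progressive_failure(plies, load, steps=100):
    for i in range(1, steps + 1):
        s = load * i / steps               # applied load at this step
        E_total = sum(p["E"] for p in plies)
        for ply in plies:
            stress = s * ply["E"] / E_total        # equal-strain stiffness share
            if not ply["failed"] and failure_index(stress, ply["X"]) >= 1.0:
                ply["E"] *= 0.10           # 10% residual property after failure
                ply["failed"] = True
                print(f"step {i}: ply (X={ply['X']:.0f}) failed at load {s:.0f}")

plies = [{"E": 70e3, "X": X, "failed": False} for X in (500.0, 600.0, 700.0, 800.0)]
progressive_failure(plies, load=9000.0)
```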

Keywords: Puck, finite element, bolted joint, composite

Procedia PDF Downloads 102
249 Tracing Sources of Sediment in an Arid River, Southern Iran

Authors: Hesam Gholami

Abstract:

Elevated suspended sediment loads in riverine systems, resulting from accelerated erosion due to human activities, are a serious threat to the sustainable management of watersheds and the ecosystem services therein worldwide. Mitigation of deleterious sediment effects as a distributed or non-point pollution source in catchments therefore requires reliable provenance information. Sediment tracing or sediment fingerprinting, a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and refined by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups: Monte Carlo simulation, Bayesian approaches and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework to estimate sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. The accuracy of GLUE predictions, generated using four different sets of statistical tests for discriminating three sub-basin spatial sources, was evaluated using 10 virtual sediment (VS) samples with known source contributions, using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%) and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), the suggested modeling approach is an accurate technique for quantifying the sources of sediments in catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.
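
The GLUE unmixing loop itself is compact; a sketch with invented tracer means for the three sub-basin sources and one target sediment, keeping behavioural solutions by goodness of fit:

```python
# Sketch: sample random source proportions, predict the tracer mixture, and keep
# the behavioural solutions. All tracer values below are invented.
import numpy as np

rng = np.random.default_rng(7)
sources = np.array([[1.2, 40.0, 310.0],    # western sub-basin tracer means
                    [0.9, 55.0, 270.0],    # central
                    [1.8, 30.0, 350.0]])   # eastern (illustrative values)
target = np.array([1.55, 35.5, 332.0])     # measured mixture (illustrative)

W = rng.dirichlet(np.ones(3), size=100_000)   # random proportions, rows sum to 1
mix = W @ sources                             # predicted tracer concentrations
gof = 1 - np.mean(np.abs(mix - target) / target, axis=1)
behavioural = W[gof >= 0.98]                  # behavioural threshold (assumed)
print("behavioural solutions:", len(behavioural))
print("mean contributions (W, C, E):", behavioural.mean(axis=0).round(2))
```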

Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran

Procedia PDF Downloads 74
248 Chromium (VI) Removal from Aqueous Solutions by Ion Exchange Processing Using Eichrom 1-X4, Lewatit Monoplus M800 and Lewatit A8071 Resins: Batch Ion Exchange Modeling

Authors: Havva Tutar Kahraman, Erol Pehlivan

Abstract:

In recent years, environmental pollution by wastewater has risen critically, driven by effluents discharged from various industries. Different types of pollutants, such as organic compounds, oxyanions, and heavy metal ions, pose this threat to humans and all other living things, and heavy metals are considered one of the main pollutant groups in wastewater. This creates a great need to apply and enhance water treatment technologies. Among the adopted treatment technologies, adsorption is one of the methods gaining more and more attention because of its easy operation, simplicity of design and versatility. Ion exchange is one of the preferred methods for the removal of heavy metal ions from aqueous solutions and has found widespread application in water remediation technologies during the past several decades. The purpose of this study is therefore the removal of hexavalent chromium, Cr(VI), from aqueous solutions. Cr(VI) is a well-known, highly toxic metal which modifies the DNA transcription process and causes important chromosomal aberrations. The treatment and removal of this heavy metal have received great attention in order to maintain allowed legal standards. The present paper investigates some aspects of the use of three anion exchange resins: Eichrom 1-X4, Lewatit Monoplus M800 and Lewatit A8071. Batch adsorption experiments were carried out to evaluate the adsorption capacity of these three commercial resins for the removal of Cr(VI) from aqueous solutions. The chromium solutions used in the experiments were synthetic solutions. The experiments on the parameters that affect the adsorption (solution pH, adsorbent concentration, contact time, and initial Cr(VI) concentration) were performed at room temperature. High adsorption rates of metal ions were observed for the three resins at the onset, and plateau values were then gradually reached within 60 min. The optimum pH for Cr(VI) adsorption was found to be 3.0 for all three resins, and adsorption decreases with increasing pH for the three anion exchangers. The suitability of the Freundlich, Langmuir and Scatchard models was investigated for the Cr(VI)-resin equilibrium. The results obtained in this study demonstrate excellent comparability between the three anion exchange resins, with Eichrom 1-X4 being the most effective and showing the highest adsorption capacity for the removal of Cr(VI) ions. The anion exchange resins investigated in this study can be used for the efficient removal of chromium from water and wastewater.
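
Fitting the Langmuir and Freundlich isotherms mentioned above is a small nonlinear regression; a sketch with invented equilibrium data (Ce in mg/L, qe in mg/g), not the measured values:

```python
# Sketch: fit the two isotherms to invented batch-equilibrium points.
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])   # equilibrium concentration
qe = np.array([8.1, 15.2, 22.6, 30.1, 35.3, 38.2])  # uptake at equilibrium

def langmuir(c, qmax, KL):
    return qmax * KL * c / (1 + KL * c)

def freundlich(c, KF, n):
    return KF * c ** (1 / n)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=(40.0, 0.1))
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=(5.0, 2.0))
print(f"Langmuir: qmax={qmax:.1f} mg/g, KL={KL:.3f} L/mg")
print(f"Freundlich: KF={KF:.2f}, n={n:.2f}")
```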

Keywords: adsorption, anion exchange resin, chromium, kinetics

Procedia PDF Downloads 260
247 Discourse Analysis: Where Cognition Meets Communication

Authors: Iryna Biskub

Abstract:

The interdisciplinary approach to modern linguistic studies is exemplified by the merging of various research methods, which sometimes causes complications related to the verification of research results. This methodological confusion can be resolved by creating new techniques of linguistic analysis that combine several scientific paradigms. Modern linguistics has developed highly productive and efficient methods for the investigation of the cognitive and communicative phenomena of which language is the central issue. In the field of discourse studies, one of the best examples of such research methods is Critical Discourse Analysis (CDA). CDA can be viewed both as a method of investigation and as a critical multidisciplinary perspective. In CDA, the position of the scholar is crucial, since it reflects his or her social and political convictions. The generally accepted approach to obtaining scientifically reliable results is to use a special, well-defined scientific method for researching a particular type of language phenomenon: cognitive methods are applied to the exploration of cognitive aspects of language, whereas communicative methods are thought to be relevant only for the investigation of the communicative nature of language. In recent decades, discourse as a sociocultural phenomenon has been the focus of careful linguistic research. The very concept of discourse represents an integral unity of the cognitive and communicative aspects of human verbal activity. Since a human being is never able to discriminate between the cognitive and communicative planes of discourse communication, it does not make much sense to apply cognitive and communicative methods of research in isolation. It is possible to modify the classical CDA procedure by mapping human cognitive procedures onto the strategic communicative planning of discourse communication. The analysis of the electronic petition 'Block Donald J Trump from UK entry. The signatories believe Donald J Trump should be banned from UK entry' (584,459 signatures) and the parliamentary debates on it has demonstrated the ability to map cognitive and communicative levels in the following way: the strategy of discourse modeling (communicative level) overlaps with the extraction of semantic macrostructures (cognitive level); the strategy of discourse management overlaps with the analysis of local meanings in discourse communication; and the strategy of cognitive monitoring of the discourse overlaps with the formation of attitudes and ideologies at the cognitive level. Thus, the experimental data have shown that it is possible to develop a new complex methodology of discourse analysis, where cognition would meet communication, both metaphorically and literally. The same approach may prove productive for the creation of computational models of human-computer interaction, where the automatic generation of a particular type of discourse could be based on the rules of strategic planning involving the cognitive models of CDA.

Keywords: cognition, communication, discourse, strategy

Procedia PDF Downloads 254
246 Assessment of Environmental Risk Factors of Railway Using Integrated ANP-DEMATEL Approach in Fuzzy Conditions

Authors: Mehrdad Abkenari, Mehmet Kunt, Mahdi Nourollahi

Abstract:

Evaluating environmental risk factors is part of the analysis of transportation effects. Various definitions of risk can be found in different scientific sources, each depending on a specific perspective or dimension. The effects of potential risks along the new proposed routes and existing infrastructures of large transportation projects like railways should be studied within comprehensive engineering frameworks. Despite the various definitions provided for 'risk', all share a uniform concept: two obvious aspects, loss and unreliability, are pointed out in all definitions of the term, while a third aspect, selection, is usually implied and concerns how one perceives the risk. Currently, conducting engineering studies on the environmental effects of railway projects has become obligatory under the Environmental Assessment Act in developing countries. Considering the longitudinal nature of these projects and the probable passage of railways through various ecosystems, scientific research on the environmental risk of such projects has become of great interest. Although many areas of expertise, such as road construction in developing countries, have not seriously committed to these studies yet, attention to these subjects in the establishment or implementation of different systems has become an inseparable part of this wave of research. The present study took environmental risks identified in previous studies as its starting point and then proposes a new hybrid approach combining the analytic network process (ANP) and DEMATEL under fuzzy conditions for the assessment of the identified risks. Since evaluating the identified risks was not straightforward, a network structure was an appropriate approach for analyzing this complex system and was accordingly employed for problem description and modeling. The researchers faced a shortage of real field data, and owing to the ambiguity of the experts' opinions and judgments, these were expressed as linguistic variables instead of numerical ones. Since fuzzy logic is suited to ambiguity and uncertainty, formulating the experts' opinions as fuzzy numbers seemed an appropriate approach. The fuzzy DEMATEL method was used to extract the relations between major and minor risk factors. Considering the internal relations of the major risk factors and their sub-factors in the fuzzy network analysis, the weights of the main risk factors and sub-factors were determined. In general, the findings of the present study, in which effective railway environmental risk indicators were identified and rated through the first use of a combined model of DEMATEL and fuzzy network analysis, indicate that environmental risks can be evaluated more accurately and can also be employed in railway projects.
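For readers unfamiliar with the mechanics, the core of a fuzzy DEMATEL pass can be sketched in a few lines: linguistic judgments are mapped to triangular fuzzy numbers, defuzzified, normalized into a direct-relation matrix D, and expanded into the total-relation matrix T = D(I - D)^-1, from which each factor's prominence (r + c) and cause/effect role (r - c) follow. The four factors, the linguistic scale, and the simple centroid defuzzification below are illustrative assumptions; the study may well use a more elaborate defuzzification such as CFCS.

```python
import numpy as np

# Hypothetical linguistic judgments from one expert on four railway
# environmental risk factors, mapped to triangular fuzzy numbers (l, m, u).
scale = {
    "No":   (0.00, 0.00, 0.25),
    "Low":  (0.00, 0.25, 0.50),
    "Med":  (0.25, 0.50, 0.75),
    "High": (0.50, 0.75, 1.00),
}
judgments = [
    ["No",  "High", "Med",  "Low"],
    ["Low", "No",   "High", "Med"],
    ["Med", "Low",  "No",   "High"],
    ["Low", "Med",  "Low",  "No"],
]

# Defuzzify each triangular number with a simple centroid (l+m+u)/3.
Z = np.array([[sum(scale[c]) / 3.0 for c in row] for row in judgments])

# DEMATEL: normalize the direct-relation matrix ...
D = Z / Z.sum(axis=1).max()
# ... and compute the total-relation matrix T = D (I - D)^-1.
T = D @ np.linalg.inv(np.eye(len(D)) - D)

r = T.sum(axis=1)            # influence given by each factor
c = T.sum(axis=0)            # influence received by each factor
prominence, relation = r + c, r - c
for i in range(len(D)):
    role = "cause" if relation[i] > 0 else "effect"
    print(f"factor {i+1}: prominence={prominence[i]:.2f}, "
          f"relation={relation[i]:+.2f} ({role})")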

Keywords: DEMATEL, ANP, fuzzy, risk

Procedia PDF Downloads 413
245 Strategies of Drug Discovery in Insects

Authors: Alaaeddeen M. Seufi

Abstract:

Much has been published on therapeutic derivatives from living organisms, including insects. In addition to traditional maggot therapy, more than 900 therapeutic products have been isolated from insects. Most people look at insects as enemies, while others believe that insects are friends. Many beneficial insects other than honey bees, silkworms and the shellac insect could ensure human-insect friendship; in addition, insects can serve as micro-factories, biosensors or bioreactors. InsectFarm is a striking example of applied research that transfers insects from the laboratory to the market, carried out by Prof. Mircea Ciuhrii and co-workers, who worked for 18 years to derive therapeutics from insects. Their research resulted in the production of more than 30 commercial medications derived from insects (e.g., Imunomax and Noblesse). Two general approaches have been followed to discover drugs from living organisms. Some laboratories preferred a biochemical approach, purifying components of the innate immune system of insects as well as insect metabolites; the purified components could then be tested in many therapeutic trials. Other researchers preferred a molecular approach based on proteomic studies, in which components of the innate immune system of insects were tested for their medical activities. Our laboratory team preferred to induce the insect immune system (using oral, topical and injection routes of administration) and then conduct a transcriptomic study to discover the induced genes and identify specific biomarkers that can help in drug discovery. Biomarkers play an important role in medicine and in drug discovery and development. Optimal biomarker development and application require a team approach because of the multifaceted nature of biomarker selection, validation, and application. Such a team uses several techniques, including pharmacoepidemiology, pharmacogenomics, functional proteomics, bioanalytical development and validation, and modeling and simulation, to improve and refine drug development. Our achievements include the discovery of four components of the innate immune system of Spodoptera littoralis and Musca domestica, designated SpliDef (a defensin), SpliLec (a lectin), SpliCec (a cecropin) and MdAtt (an attacin). SpliDef, SpliLec and MdAtt were confirmed as antimicrobial peptides, while SpliCec was additionally confirmed as an anticancer peptide. Our current research continues on antioxidants and anticoagulants from insects, and our perspective is to carry prototypes of our products to mass production and the commercial level. These achievements are the integrated contributions of everybody on our team.

Keywords: AMPs, insect, innate immunity, therapeutics

Procedia PDF Downloads 370
244 Vapour Liquid Equilibrium Measurement of CO₂ Absorption in Aqueous 2-Aminoethylpiperazine (AEP)

Authors: Anirban Dey, Sukanta Kumar Dash, Bishnupada Mandal

Abstract:

Carbon dioxide (CO2) is a major greenhouse gas responsible for global warming, and fossil fuel power plants are the main emitting sources. The capture of CO2 is therefore essential to keep emission levels within the standards. Carbon capture and storage (CCS) is considered an important option for the stabilization of atmospheric greenhouse gases and the minimization of global warming effects. There are three approaches to CCS: pre-combustion capture, where carbon is removed from the fuel prior to combustion; oxy-fuel combustion, where coal is combusted with oxygen instead of air; and post-combustion capture, where the fossil fuel is combusted to produce energy and CO2 is removed from the flue gases left after the combustion process. Post-combustion technology offers an advantage in that existing combustion technologies can still be used without major changes. A number of separation processes can be utilized as part of post-combustion capture technology, including (a) physical absorption, (b) chemical absorption, (c) membrane separation and (d) adsorption. Chemical absorption is one of the most extensively used technologies for large-scale CO2 capture systems. The industrially important solvents are primary amines like monoethanolamine (MEA) and diglycolamine (DGA), secondary amines like diethanolamine (DEA) and diisopropanolamine (DIPA), and tertiary amines like methyldiethanolamine (MDEA) and triethanolamine (TEA). Primary and secondary amines react quickly and directly with CO2 to form stable carbamates, while tertiary amines do not react directly with CO2: in aqueous solution they catalyze the hydrolysis of CO2 to form a bicarbonate ion and a protonated amine. Concentrated piperazine (PZ) has been proposed as a better solvent, as well as an activator, for CO2 capture from flue gas, with a 10% energy benefit compared to conventional amines such as MEA. However, the application of concentrated PZ is limited by its low solubility in water at low temperature and lean CO2 loading. Following the performance of PZ, its derivative 2-aminoethylpiperazine (AEP), a cyclic amine, can be explored as an activator for the absorption of CO2. Vapour liquid equilibrium (VLE) in CO2 capture systems is an important factor for the design of separation equipment and gas treating processes, and accurate equilibrium data for the solvent system over a wide range of temperatures, pressures and compositions are essential for proper thermodynamic modeling. The present work focuses on the determination of VLE data for the (AEP + H2O) system at 40 °C over a range of compositions.
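The contrast between the two amine classes can be summarized by the standard textbook reaction scheme below (R denotes a generic organic group); this is general amine-CO2 chemistry rather than a result of the present work.

```latex
% Primary/secondary amines (shown for a secondary amine R2NH):
% fast, direct carbamate formation
2\,\mathrm{R_2NH} + \mathrm{CO_2} \rightleftharpoons \mathrm{R_2NCOO^-} + \mathrm{R_2NH_2^+}
% Tertiary amines: base-catalysed hydrolysis of CO2 to bicarbonate
\mathrm{R_3N} + \mathrm{CO_2} + \mathrm{H_2O} \rightleftharpoons \mathrm{HCO_3^-} + \mathrm{R_3NH^+}
```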

Keywords: absorption, aminoethyl piperazine, carbon dioxide, vapour liquid equilibrium

Procedia PDF Downloads 267
243 Calibration of Contact Model Parameters and Analysis of Microscopic Behaviors of Cuxhaven Sand Using the Discrete Element Method

Authors: Anjali Uday, Yuting Wang, Andres Alfonso Pena Olare

Abstract:

The Discrete Element Method (DEM) is a promising approach to modeling the microscopic behavior of granular materials. The quality of the simulations, however, depends on the model parameters utilized. The present study focuses on the calibration and validation of discrete element parameters for Cuxhaven sand based on experimental data from triaxial and oedometer tests. A sensitivity analysis was conducted during the sample preparation stage and the shear stage of the triaxial tests. The influence of parameters such as rolling resistance, inter-particle friction coefficient, confining pressure and effective modulus on the void ratio of the generated sample was investigated. During the shear stage, the effect of parameters such as inter-particle friction coefficient, effective modulus, rolling resistance friction coefficient and normal-to-shear stiffness ratio was examined. The parameters were calibrated such that the simulations reproduce macro-mechanical characteristics like dilation angle, peak stress, and stiffness. The calibrated parameters were then validated by simulating an oedometer test on the sand; the oedometer test results are in good agreement with experiments, which confirms the suitability of the calibrated parameters. In the next step, the calibrated and validated model parameters were applied to forecast micromechanical behavior, including the evolution of contact force chains, buckling of columns of particles, non-coaxiality, and sample inhomogeneity, during a simple shear test. The evolution of contact force chains vividly shows the distribution and alignment of strong contact forces. The changes in coordination number are in good agreement with the volumetric strain exhibited during the simple shear test. The vertical inhomogeneity of void ratios is documented throughout the shearing phase, showing looser structures in the top and bottom layers. Buckling of columns is not observed, owing to the small rolling resistance coefficient adopted in the simulations. The non-coaxiality of principal stress and strain rate is also well captured. Thus, the micromechanical behavior is well described using the calibrated and validated material parameters.
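In practice, calibration of this kind is an inverse problem: micro-parameters are varied until the simulated macro-response matches the triaxial targets. The sketch below illustrates the loop with a simple grid search; the run_dem_triaxial function is a hypothetical surrogate standing in for a full DEM run, and the target values and parameter ranges are invented for illustration.

```python
import itertools

# Experimental macro-responses from the triaxial test used as calibration
# targets (placeholder values, not the Cuxhaven data).
target = {"peak_stress_kPa": 420.0, "dilation_angle_deg": 12.0}

def run_dem_triaxial(friction, rolling_resistance):
    """Hypothetical stand-in for a full DEM triaxial simulation.

    In practice this would launch a discrete element run (building the
    sample, applying confinement, shearing) and extract the macro
    response; a simple surrogate keeps the sketch self-contained.
    """
    peak = 300.0 + 250.0 * friction + 120.0 * rolling_resistance
    dilation = 25.0 * friction + 18.0 * rolling_resistance
    return {"peak_stress_kPa": peak, "dilation_angle_deg": dilation}

def mismatch(sim):
    # Normalized squared error summed over all calibration targets.
    return sum(((sim[k] - v) / v) ** 2 for k, v in target.items())

frictions = [0.2, 0.3, 0.4, 0.5]
rolling = [0.0, 0.1, 0.2, 0.3]

best = min(itertools.product(frictions, rolling),
           key=lambda p: mismatch(run_dem_triaxial(*p)))
print(f"calibrated friction={best[0]}, rolling resistance={best[1]}")
```

A real calibration would typically replace the grid search with a more efficient optimizer, but the structure of the loop is the same.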

Keywords: discrete element model, parameter calibration, triaxial test, oedometer test, simple shear test

Procedia PDF Downloads 120
242 Measurement of in-situ Horizontal Root Tensile Strength of Herbaceous Vegetation for Improved Evaluation of Slope Stability in the Alps

Authors: Michael T. Lobmann, Camilla Wellstein, Stefan Zerbe

Abstract:

Vegetation plays an important role in the stabilization of slopes against erosion processes, such as shallow erosion and landslides. Plant roots reinforce the soil, increase soil cohesion and often cross possible shear planes, thereby reducing the risk of slope failure. Generally, shrub and tree roots penetrate deeper into the soil vertically, while the roots of forbs and grasses are concentrated horizontally in the topsoil and organic layer. Shrubs and trees therefore have a higher potential for stabilizing slopes with deep soil layers than forbs and grasses, and research has consequently focused mainly on the vertical root effects of shrubs and trees. Nevertheless, a better understanding of the stabilizing effects of grasses and forbs is needed for better evaluation of the stability of natural and artificial slopes with herbaceous vegetation. Despite the importance of vertical root effects, field observations indicate that horizontal root effects also play an important role in slope stabilization. Not only forbs and grasses, but also some shrubs and trees, form tight horizontal networks of fine and coarse roots and rhizomes in the topsoil. These root networks increase soil cohesion and horizontal tensile strength. Available methods for physical measurement, such as shear-box tests, pull-out tests and single-root tensile strength measurements, can only provide a detailed picture of the vertical effects of roots on slope stabilization; the assessment of horizontal root effects has been largely limited to computer modeling. Here, a method for the measurement of in-situ cumulative horizontal root tensile strength is presented. A traction machine was developed that allows fixation of rectangular grass sods (max. 30x60cm) at the short ends, with a 30x30cm measurement zone in the middle. On two alpine grass slopes in South Tyrol (northern Italy), 30x60cm grass sods were cut out (max. depth 20cm). The grass sods were pulled apart, and the horizontal tensile strength across the 30cm width was measured over time. The horizontal tensile strength of the sods was compared for different soil depths, hydrological conditions, and root physiological properties. The results improve our understanding of horizontal root effects on slope stabilization and can be used for improved evaluation of grass slope stability.
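As a small illustration of how such measurements reduce to numbers, the cumulative horizontal root tensile strength can be expressed as the peak pulling force normalized by the sod width. The sketch below assumes hypothetical load-cell traces; these are not data from the South Tyrol sites.

```python
import numpy as np

# Hypothetical force traces from the traction machine for two sods sampled
# at different depths; force in N, measured across the 0.30 m wide
# measurement zone. Real traces would come from the load-cell log.
width_m = 0.30
traces = {
    "0-10 cm depth":  np.array([0, 120, 260, 410, 520, 480, 300, 150]),
    "10-20 cm depth": np.array([0, 60, 130, 190, 230, 210, 140, 80]),
}

for label, force_n in traces.items():
    # Cumulative horizontal root tensile strength is taken here as the
    # peak pulling force normalized by sod width (N/m).
    strength = force_n.max() / width_m
    print(f"{label}: peak force {force_n.max():.0f} N -> "
          f"{strength:.0f} N/m tensile strength")
```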

Keywords: grassland, horizontal root effect, landslide, mountain, pasture, shallow erosion

Procedia PDF Downloads 166
241 Effect of Different Parameters of Converging-Diverging Vortex Finders on Cyclone Separator Performance

Authors: V. Kumar, K. Jha

Abstract:

The present study explores design modifications of the vortex finder, as it has a significant effect on cyclone separator performance; it is evident that modifications of the vortex finder can improve this performance significantly. The study strives to improve the overall performance of cyclone separators by utilizing a converging-diverging (CD) vortex finder instead of the traditional uniform-diameter vortex finder. The velocity and pressure fields inside a Stairmand cyclone separator with body diameter 0.29 m and vortex finder diameter 0.1305 m are calculated. The commercial software Ansys Fluent v14.0 is used to simulate the flow field in a uniform-diameter cyclone and in six cyclones modified with CD vortex finders. A Reynolds stress model is used to simulate the effects of turbulence on the fluid and particulate phases, and a discrete phase model is used to calculate the particle trajectories. The performance of the modified vortex finders is compared with the traditional vortex finder. The effects of the lengths of the converging and diverging sections, the throat diameter and the end diameters of the convergent-divergent section are also studied to achieve enhanced performance. The pressure and velocity fields inside the vortex finder are presented by means of contour plots and velocity vectors, and changes in the flow pattern due to variation of the geometrical variables are analysed. Results indicate that a convergent-divergent vortex finder is capable of achieving a lower pressure drop than a uniform-diameter vortex finder. It is also observed that the end diameters of the CD vortex finder, the throat diameter and the length of the diverging part have a significant impact on cyclone separator performance. An increase in the lower diameter of the vortex finder by 66% results in an 11.5% decrease in the dimensionless pressure drop (Euler number) with a 5.8% decrease in separation efficiency, whereas a 50% decrease in the throat diameter gives a 5.9% increase in the Euler number with a 10.2% increase in separation efficiency, and increasing the length of the diverging part gives a 10.28% increase in the Euler number with a 5.74% increase in separation efficiency. Increasing the upper diameter of the CD vortex finder is seen to produce an adverse effect on performance, as it increases the pressure drop significantly and decreases the separation efficiency. An increase in the length of the converging section is not seen to affect the performance significantly. From the present study, it is concluded that convergent-divergent vortex finders can be used in place of uniform-diameter vortex finders to achieve better cyclone separator performance.
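For reference, the dimensionless pressure drop quoted above is conventionally defined as the cyclone Euler number, normally based on a characteristic (e.g., inlet) velocity. The paper does not state its reference velocity, so the form below is the common convention rather than the authors' exact definition:

```latex
\mathrm{Eu} = \frac{\Delta p}{\tfrac{1}{2}\,\rho\, v_{\mathrm{in}}^{2}}
```

where Δp is the pressure drop across the cyclone, ρ the gas density and v_in the inlet velocity; a lower Eu at a given efficiency means a lower energy cost per unit of gas cleaned.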

Keywords: convergent-divergent vortex finder, cyclone separator, discrete phase modeling, Reynolds stress model

Procedia PDF Downloads 172
240 Water Quality in Buyuk Menderes Graben, Turkey

Authors: Tugbanur Ozen Balaban, Gultekin Tarcan, Unsal Gemici, Mumtaz Colak, I. Hakki Karamanderesi

Abstract:

The Buyuk Menderes Graben is located in western Anatolia (Turkey). The graben has become the largest industrial and agricultural area in the region, with a total population exceeding 3,000,000. There are two big cities within the study area, from west to east: Aydın and Denizli. The study area is very rich in cold ground waters and thermal waters, and electricity production from the geothermal potential has become very popular there in recent decades. The Buyuk Menderes Graben is a tectonically active extensional region undergoing a north-south extensional tectonic regime, which commenced at the latest during the Early-Middle Miocene. The basement of the study area consists of Menderes massif rocks, made up of high- to low-grade metamorphics, which form an aquifer for both cold ground waters and thermal waters, depending on the location. Neogene terrestrial sediments, mainly composed of alluvial fan deposits in different facies, unconformably cover the basement rocks; they have very low permeability and may locally act as cap rocks for the geothermal systems. The youngest unit is the Quaternary alluvium, consisting of Holocene alluvial deposits, which forms the shallow regional aquifer in the study area. All the waters are of meteoric origin and reflect shallow or deep circulation according to their 18O, 2H and 3H contents. Meteoric waters move to deep zones through the fractured system and rise to the surface along the faults. Water samples (drilling wells, springs and surface waters) and local seawater were collected between 2010 and 2012. Geochemical modeling was used to calculate the distribution of aqueous species and exchange processes with the PHREEQCi speciation code. The geochemical analyses show that cold ground water types evolve from Ca-Mg-HCO3 to Na-Cl-SO4, and geothermal aquifer waters reflect the Na-Cl-HCO3 water type in Aydın. The water types of Denizli are Ca-Mg-HCO3 and Ca-Mg-HCO3-SO4, while the thermal waters are generally of the Na-HCO3-SO4 type. The B versus Cl ratios increase from east to west with the proportion of seawater introduced into the freshwater aquifers and geothermal reservoirs. Concentrations of some elements (As, B, Fe and Ni) are higher than the tolerance limits of the drinking water standard of Turkey (TS 266) and international drinking water standards (WHO, FAO, etc.).
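Because Cl behaves conservatively during mixing, the seawater proportion implied by a sample can be back-calculated by simple two-end-member mixing, which is the arithmetic behind the east-to-west trend noted above. The end-member and sample concentrations in the sketch below are invented placeholders, not values from this study.

```python
# Conservative two-end-member mixing based on chloride, a standard way to
# estimate the seawater fraction in a coastal aquifer sample. End-member
# concentrations below are illustrative, not the measured values.
cl_fresh = 35.0       # Cl in uncontaminated fresh groundwater (mg/L)
cl_sea = 19_000.0     # Cl in local seawater (mg/L)

def seawater_fraction(cl_sample_mg_l: float) -> float:
    """Fraction of seawater implied by a sample's Cl concentration."""
    return (cl_sample_mg_l - cl_fresh) / (cl_sea - cl_fresh)

for cl in (120.0, 850.0, 2_400.0):
    print(f"Cl = {cl:>7.1f} mg/L -> seawater fraction = "
          f"{seawater_fraction(cl):.3%}")
```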

Keywords: Buyuk Menderes, isotope chemistry, geochemical modelling, water quality

Procedia PDF Downloads 536
239 Concentrated Whey Protein Drink with Orange Flavor: Protein Modification and Formulation

Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh

Abstract:

The application of whey protein in the drink industry to enhance the nutritional value of products is important; however, the gelation of the protein during thermal treatment and shelf life limits its application. The main goal of this research is therefore the manufacture of a high-concentration whey protein orange drink with an appropriate shelf life. To this end, whey protein was hydrolyzed from 5 to 30% (in 5% intervals, i.e., at six stages), and the thermal stability of samples with a 10% protein concentration was tested under acidic conditions (T = 90 °C, pH 4.2, 5 minutes) and neutral conditions (T = 120 °C, pH 6.7, 20 minutes). Furthermore, to study the shelf life of the heat-treated samples over 4 months at 4 and 24 °C, time sweep rheological tests were performed. Under neutral conditions, the 5 to 20% hydrolyzed samples gelled during thermal treatment, whereas under acidic conditions this happened only in the 5 to 10% hydrolyzed samples. This phenomenon could be related to differences in the hydrodynamic radius and zeta potential of samples with different levels of hydrolysis under acidic and neutral conditions. To study the gelation of the heat-resistant protein solutions during shelf life, time sweep analyses were performed for 4 months at 7-day intervals. Crossover was observed for all heat-resistant neutral samples at both storage temperatures, while in the heat-resistant acidic samples with degrees of hydrolysis of 25 and 30% at 4 and 20 °C it was not seen. It can be concluded that the latter samples were stable during heat treatment and 4 months of storage, which makes them a good choice for manufacturing high-protein drinks. The Scheffe polynomial model and numerical optimization were employed for modeling and optimizing the high-protein orange drink formula. The Scheffe model significantly predicted the overall acceptance index (p-value < 0.05) of the sensory analysis. The coefficient of determination (R2) of 0.94, the adjusted coefficient of determination (R2Adj) of 0.90, the insignificance of the lack-of-fit test and an F value of 64.21 showed the accuracy of the model. Moreover, the coefficient of variation (C.V.) was 6.8%, which suggests the replicability of the experimental data. A desirability of 0.89 was achieved, indicating the high accuracy of the optimization. The optimum formulation was found to be: modified whey protein solution (65.30%), natural orange juice (33.50%), stevia sweetener (0.05%), orange peel oil (0.15%) and citric acid (1.00%). It is worth mentioning that this study provides an appropriate model for the application of whey protein in the drink industry without bitter flavor or gelation during heat treatment and shelf life.
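For context, a second-order Scheffe mixture model has no intercept and consists of linear terms x_i plus interaction terms x_i x_j fitted to the sensory response. The sketch below illustrates such a fit on an invented three-component mixture design; the actual study used the five formulation components and measured acceptance scores.

```python
import numpy as np
from itertools import combinations

# Hypothetical mixture-design data: proportions of three grouped components
# (whey protein solution, orange juice, minor ingredients) and panel scores
# for overall acceptance. Each row of proportions sums to 1.
X = np.array([
    [0.70, 0.28, 0.02],
    [0.60, 0.38, 0.02],
    [0.65, 0.33, 0.02],
    [0.55, 0.42, 0.03],
    [0.75, 0.22, 0.03],
    [0.62, 0.35, 0.03],
    [0.68, 0.30, 0.02],
])
y = np.array([7.1, 7.8, 8.4, 6.9, 6.5, 8.1, 7.9])

# Second-order Scheffe model has no intercept: terms x_i and x_i * x_j.
pairs = list(combinations(range(X.shape[1]), 2))
design = np.hstack([X] + [(X[:, [i]] * X[:, [j]]) for i, j in pairs])

coef, *_ = np.linalg.lstsq(design, y, rcond=None)

pred = design @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients:", np.round(coef, 2), " R^2 =", round(r2, 3))
```

Numerical optimization of the fitted polynomial over the mixture simplex, combined with a desirability function, then yields an optimum formulation of the kind reported above.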

Keywords: crossover, orange beverage, protein modification, optimization

Procedia PDF Downloads 62
238 Composing Method of Decision-Making Function for Construction Management Using Active 4D/5D/6D Objects

Authors: Hyeon-Seung Kim, Sang-Mi Park, Sun-Ju Han, Leen-Seok Kang

Abstract:

As BIM (Building Information Modeling) application continually expands, the visual simulation techniques used for facility design and construction process information are becoming increasingly advanced and diverse. For building structures, BIM application is design-oriented, utilizing 3D objects for conflict management, whereas for civil engineering structures the usability of nD object-oriented construction stage simulation is important in construction management. Simulations of 5D and 6D objects, in which cost and resources are linked to the process simulation of 4D objects, are commonly used, but they do not provide a decision-making function for the process management problems that occur on site, because they mostly focus on the visual representation of the current status of process information. In this study, an nD CAD system is constructed that facilitates an optimized schedule simulation minimizing process conflict, a construction duration reduction simulation according to execution progress status, an optimized process plan simulation according to yearly project cost changes, and an optimized resource simulation for field resource mobilization capability. Through this system, the usability of conventional simple simulation objects is expanded to that of active simulation objects with which decision-making is possible. Furthermore, to close the gap between field process situations and planned 4D process objects, a technique is developed to facilitate a comparative simulation through the coordinated synchronization of an actual video object, acquired by an on-site web camera, and a VR concept 4D object. This synchronization and simulation technique can also be applied to smartphone video objects captured in the field, in order to increase the usability of the 4D object. Because yearly project costs change frequently in civil engineering construction, the annual process plan should be recomposed appropriately according to project cost decreases or increases relative to the plan. In the 5D CAD system provided in this study, an active 5D object utilization concept is introduced to perform a simulation in an optimized process planning state, by finding a process optimized for the changed project cost without changing the construction duration, through a technique such as a genetic algorithm. Furthermore, in resource management, an active 6D object utilization function is introduced that can analyze and simulate an optimized process plan within the possible scope of moving resources, by considering those resources that can be moved under given field conditions, instead of using a simple resource change simulation by schedule. The introduction of active BIM functions is expected to increase the field utilization of conventional nD objects.
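The active 5D behavior described here, re-optimizing execution choices for a changed yearly budget without stretching the construction duration, maps naturally onto a small genetic algorithm. The sketch below is a minimal illustration with invented activities, method options, costs, durations and budget; it is not the authors' system.

```python
import random

random.seed(1)

# Hypothetical activities: each has alternative execution methods with
# (cost, duration) trade-offs. The active 5D object must re-plan when the
# yearly budget changes, without exceeding the planned duration.
methods = [
    [(100, 10), (120, 8), (90, 12)],
    [(200, 15), (230, 12), (180, 18)],
    [(150, 9), (170, 7), (140, 11)],
    [(80, 6), (95, 5), (70, 8)],
]
budget = 560          # revised yearly project cost
max_duration = 45     # planned duration that must not be exceeded

def evaluate(chrom):
    # Fitness: distance from the revised budget, with a heavy penalty for
    # any schedule that exceeds the planned duration. Lower is better.
    cost = sum(methods[i][g][0] for i, g in enumerate(chrom))
    dur = sum(methods[i][g][1] for i, g in enumerate(chrom))
    penalty = 1000 * max(0, dur - max_duration)
    return abs(budget - cost) + penalty

def mutate(chrom, rate=0.2):
    # Randomly re-pick a method for some activities.
    return [random.randrange(len(methods[i])) if random.random() < rate else g
            for i, g in enumerate(chrom)]

pop = [[random.randrange(len(m)) for m in methods] for _ in range(30)]
for _ in range(100):
    pop.sort(key=evaluate)
    parents = pop[:10]                      # elitist selection
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(methods))
        children.append(mutate(a[:cut] + b[cut:]))  # one-point crossover
    pop = parents + children

best = min(pop, key=evaluate)
print("best method choices:", best, "fitness:", evaluate(best))
```

A production system would of course work with activity networks and yearly cash-flow profiles rather than a serial sum of durations, but the chromosome-encode / evaluate / select-crossover-mutate loop is the same.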

Keywords: 4D, 5D, 6D, active BIM

Procedia PDF Downloads 276