Search results for: software defined networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8304


804 Different Stages for the Creation of Electric Arc Plasma through Slow Rate Current Injection to Single Exploding Wire, by Simulation and Experiment

Authors: Ali Kadivar, Kaveh Niayesh

Abstract:

This work simulates the voltage drop and resistance of the explosion of copper wires of diameters 25, 40, and 100 µm surrounded by 1 bar nitrogen and exposed to a 150 A current, before plasma formation. The absorption of electrical energy in an exploding wire is greatly diminished when the plasma is formed. This study shows the importance of considering radiation and heat conductivity for the accuracy of the circuit simulations. The radiation of the dense plasma formed on the wire surface is modeled with the Net Emission Coefficient (NEC) and is combined with heat conductivity through the PLASIMO® software. A time-transient code for analyzing wire explosions driven by a slow current rise rate is developed. It solves a circuit equation coupled with one-dimensional (1D) equations for the copper electrical conductivity as a function of its physical state and Net Emission Coefficient (NEC) radiation. At first, an initial voltage drop over the copper wire, current, and temperature distribution at the time of expansion is derived. The experiments have demonstrated that wires remain rather uniform lengthwise during the explosion and can be simulated utilizing 1D simulations. Data from the first stage are then used as the initial conditions of the second stage, in which a simplified 1D model for high-Mach-number flows is adopted to describe the expansion of the core. The current is carried by the vaporized wire material before it is dispersed in nitrogen by the shock wave. In the third stage, using a three-dimensional model of the test bench, the streamer threshold is estimated. The electrical breakdown voltage is calculated without solving a full-blown plasma model by integrating Townsend growth coefficients (TdGC) along electric field lines. The BOLSIG⁺ and LAPLACE databases are used to calculate the TdGC at different mixture ratios of nitrogen/copper vapor. The simulations show that both radiation and heat conductivity should be considered for an adequate description of wire resistance, and that gaseous discharges start at lower voltages than expected due to ultraviolet radiation and the exploding shocks, which may have ionized the nitrogen.
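
To give a feel for the first-stage circuit coupling described above, the following minimal sketch is a lumped (0-D) simplification written in Python: a copper wire of one of the studied diameters is heated ohmically by a constant 150 A current, with its resistance and voltage drop rising with temperature. The wire length, resistivity-temperature law, and time step are illustrative assumptions, and the radiation and heat-conduction terms that the paper shows to be important are deliberately omitted here.

```python
import numpy as np

# Hypothetical illustration of the lumped circuit/thermal coupling:
# a copper wire carrying a constant 150 A heats ohmically; its resistance
# and voltage drop rise with temperature until the melting point is reached.
I = 150.0                 # current [A] (from the abstract)
d = 40e-6                 # wire diameter [m] (one of the studied sizes)
L = 0.05                  # assumed wire length [m]
A = np.pi * (d / 2) ** 2  # cross-section [m^2]

rho0, alpha = 1.68e-8, 4.3e-3   # Cu resistivity at 293 K and temperature coefficient
cp, density = 385.0, 8960.0     # specific heat [J/(kg.K)], density [kg/m^3]
mass = density * A * L
T, t, dt = 293.0, 0.0, 1e-7     # initial temperature [K], time [s], step [s]

while T < 1358.0:               # stop at the melting point of copper
    R = rho0 * (1.0 + alpha * (T - 293.0)) * L / A   # wire resistance [ohm]
    P = I ** 2 * R                                   # ohmic power [W]
    T += P * dt / (mass * cp)                        # adiabatic heating step
    t += dt

print(f"time to melt ~ {t*1e6:.1f} us, R = {R*1e3:.1f} mOhm, V = {I*R:.2f} V")
```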

Keywords: exploding wire, Townsend breakdown mechanism, streamer, metal vapor, shock waves

Procedia PDF Downloads 83
803 The Correlation between the Anxiety of the Family Members of the Patients Referring to the Emergency Department and Their Views on the Communication Skills of Nurses

Authors: Mahnaz Seyedoshohadaee

Abstract:

Background and Aims: Hospitalization of a family member, especially in the emergency department, causes anxiety and psychological problems in family members and others. The way nurses interact with patients and their companions can play an important role in controlling and managing their anxiety. This study aims to determine the relationship between the anxiety of family members of patients referring to emergency departments and their views on the communication skills of nurses. Materials and Methods: The current research was a descriptive-correlational, cross-sectional study performed on 263 family members of patients referred to the emergency departments of two selected medical training centers affiliated with Iran University of Medical Sciences. The samples were selected continuously in 2018 based on the inclusion criteria. Information was collected using the Health Communication Questionnaire (HCCQ) and the Beck Anxiety Inventory (BAI). To analyze the data, Pearson's correlation coefficient, independent t-tests, analysis of variance, and the Kruskal-Wallis test were used at a significance level of 0.05. The data were analyzed using SPSS version 16 statistical software. Results: The mean score for the communication skills of emergency department nurses, from the point of view of the patients' companions, was at a low level (74.36, with a standard deviation of 3.7). 3.75% of patients' companions had anxiety at a mild level. There was no statistically significant correlation between the anxiety of the patients' companions and their views on the nurses' communication skills. The anxiety of the patients' companions had a statistically significant relationship with educational level (P=0.039), economic status (P=0.033), and family relationship with the patient (P=0.001). Also, the average anxiety score in children was significantly higher than that in patients' wives (P=0.008). The triage level of the patient also had a statistically significant relationship with the anxiety of the patients' companions (P>0.001). Conclusion: Most of the family members of the patients referred to the emergency room experienced mild anxiety. Also, from their point of view, the communication skills of emergency nurses were at a weak level. Although there was no statistically significant relationship between the family members' anxiety and their opinion of the nurses' communication skills in this study, the weak communication skills of nurses from the family members' point of view need special attention. The results of the present study can provide the necessary grounds for planning to improve the communication skills of nurses and also to control the anxiety of patient caregivers through in-service training or other incentive mechanisms.
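
For readers who want to reproduce this kind of analysis outside SPSS, the sketch below shows, on entirely synthetic data, how two of the reported tests (a Pearson correlation between BAI anxiety scores and perceived communication-skill scores, and an independent t-test between two companion groups) could be run in Python; the variable names and the data are illustrative assumptions, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 263  # sample size reported in the abstract

# Synthetic stand-ins for the study variables (illustration only).
communication = rng.normal(74.36, 3.7, n)      # HCCQ-like scores
anxiety = rng.normal(12.0, 8.0, n)             # BAI-like scores (assumed scale)
is_child = rng.integers(0, 2, n).astype(bool)  # companion is the patient's child?

# Pearson correlation between anxiety and perceived communication skills.
r, p_r = stats.pearsonr(anxiety, communication)
print(f"Pearson r = {r:.3f}, p = {p_r:.3f}")

# Independent t-test: anxiety of children vs. other companions.
t, p_t = stats.ttest_ind(anxiety[is_child], anxiety[~is_child], equal_var=False)
print(f"t = {t:.3f}, p = {p_t:.3f}")
```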

Keywords: anxiety, family, emergency department, communication skills, nurse

Procedia PDF Downloads 53
802 Challenges of Blockchain Applications in the Supply Chain Industry: A Regulatory Perspective

Authors: Pardis Moslemzadeh Tehrani

Abstract:

Due to the emergence of blockchain technology and the benefits of cryptocurrencies, intelligent or smart contracts are gaining traction. Artificial intelligence (AI) is transforming our lives, and it is being embraced by a wide range of sectors. Smart contracts, which are at the heart of blockchains, incorporate AI characteristics. Such contracts are referred to as "smart" contracts because of the underlying technology that allows contracting parties to agree on terms expressed in computer code, defining machine-readable instructions for computers to follow under specific situations. The transmission happens automatically if the conditions are met. Initially utilised for financial transactions, blockchain applications have since expanded to include the financial, insurance, and medical sectors, as well as supply networks. Raw material acquisition by suppliers, design and fabrication by manufacturers, delivery of final products to consumers, and even post-sales logistics assistance are all part of supply chains. Many issues are linked with managing supply chains from the planning and coordination stages onwards; despite their complexity, these can be addressed in a smart contract on a blockchain. Manufacturing delays and limited third-party supplies of product components have raised concerns about the integrity and accountability of supply chains for food and pharmaceutical items. Other concerns include regulatory compliance in multiple jurisdictions and transportation circumstances (for instance, many products must be kept in temperature-controlled environments to ensure their effectiveness). Products are handled by several providers before reaching customers in modern economic systems. Information is sent between suppliers, shippers, distributors, and retailers at every stage of the production and distribution process. Information travels more effectively when intermediaries are eliminated from the equation. The usage of blockchain technology could be a viable solution to these coordination issues. In blockchains, smart contracts allow for the rapid transmission of production data, logistical data, inventory levels, and sales data. This research investigates the legal and technical advantages and disadvantages of AI-blockchain technology in the supply chain business. It aims to uncover the applicable legal problems and barriers to the application of AI-blockchain technology to supply chains, particularly in the food industry. It also discusses the essential legal and technological issues and impediments to supply chain implementation for stakeholders, as well as methods for overcoming them before releasing the technology to clients. Because little research has been done on this topic, it is difficult for industrial stakeholders to grasp how blockchain technology could be used in their respective operations. As a result, the focus of this research will be on building advanced and complex contractual terms in supply chain smart contracts on blockchains to cover all unforeseen supply chain challenges.
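
To make the conditional, self-executing logic described above concrete, here is a deliberately simplified, blockchain-free Python sketch of a supply-chain "smart contract" that releases payment only when the delivery conditions encoded in it (on-time arrival and an unbroken cold chain) are met. Real smart contracts are written in contract languages such as Solidity and run on a blockchain, so every name and threshold below is an illustrative assumption rather than an actual implementation.

```python
from dataclasses import dataclass

@dataclass
class DeliveryContract:
    """Toy stand-in for a supply-chain smart contract (not a real blockchain contract)."""
    price: float
    max_transit_hours: float
    max_temperature_c: float

    def settle(self, transit_hours: float, temperature_log_c: list[float]) -> str:
        # The "contract terms" are machine-readable conditions; settlement is automatic.
        on_time = transit_hours <= self.max_transit_hours
        cold_chain_kept = max(temperature_log_c) <= self.max_temperature_c
        if on_time and cold_chain_kept:
            return f"release payment of {self.price} to supplier"
        return "withhold payment and flag shipment for review"

contract = DeliveryContract(price=10_000.0, max_transit_hours=48.0, max_temperature_c=8.0)
print(contract.settle(transit_hours=36.0, temperature_log_c=[4.2, 5.1, 6.8]))  # pays out
print(contract.settle(transit_hours=36.0, temperature_log_c=[4.2, 9.3, 6.8]))  # withheld
```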

Keywords: blockchain, supply chain, IoT, smart contract

Procedia PDF Downloads 121
801 Seroepidemiological Study of Toxoplasma gondii Infection in Women of Child-Bearing Age in Communities in Osun State, Nigeria

Authors: Olarinde Olaniran, Oluyomi A. Sowemimo

Abstract:

Toxoplasmosis is frequently misdiagnosed or underdiagnosed, and it is the third most common cause of hospitalization due to food-borne infection. Intra-uterine infection with Toxoplasma gondii due to active parasitaemia during pregnancy can cause severe and often fatal cerebral damage, abortion, and stillbirth of the fetus. The aim of the study was to investigate the prevalence of T. gondii infection in women of childbearing age in selected communities of Osun State, with a view to determining the risk factors which predispose to T. gondii infection. Five (5) ml of blood was collected by venipuncture into a plain blood collection tube by a medical laboratory scientist. Serum samples were separated by centrifuging the blood samples at 3000 rpm for 5 mins. The sera were collected into Eppendorf tubes and stored at -20°C until analysis for the presence of IgG and IgM antibodies against T. gondii with a commercially available enzyme-linked immunosorbent assay (ELISA) kit (Demeditec Diagnostics GmbH, Germany), conducted according to the manufacturer's instructions. The optical densities of the wells were measured by a photometer at a wavelength of 450 nm. Data collected were analysed using appropriate computer software. The overall seroprevalence of T. gondii among the women of child-bearing age in the seven selected communities in Osun State was 76.3%. Of the 76.3% positive for Toxoplasma gondii infection, 70.0% were positive for anti-T. gondii IgG, 32.3% were positive for IgM, and 26.7% for both IgG and IgM. The prevalence of T. gondii was lowest (58.9%) among women from Ile-Ife, a peri-urban community, and highest (100%) in women residing in Alajue, a rural community. The prevalence of infection was significantly higher (P=0.000) among Muslim women (87.5%) than among Christian women (70.8%). The highest prevalence (86.3%) was recorded in women with primary education, while the lowest (61.2%) was recorded in women with tertiary education (P=0.016). The highest prevalence (79.7%) was recorded in women residing in rural areas, and the lowest (70.1%) in women residing in peri-urban areas (P=0.025). The prevalence of T. gondii infection was highest (81.4%) in women with one miscarriage, while the prevalence was lowest in women with no miscarriages (75.9%). The age of the women (P=0.042), Islamic religion (P=0.001), the residence of the women (P=0.001), and the water source were all positively associated with T. gondii infection. The study concluded that there was a high seroprevalence of T. gondii among women of child-bearing age in the study area. Hence, there is a need for health education and for creating awareness of the disease and its transmission among women of reproductive age in general, and pregnant women in particular, to reduce the risk of T. gondii infection in pregnant women.
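
The associations reported above (for example, residence or religion versus serostatus) are the kind of categorical comparisons that a chi-square test handles. The sketch below, on made-up counts chosen only to mirror the direction of the reported prevalences, shows how such a test could be computed in Python rather than in the unspecified software used by the authors.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = residence (rural, peri-urban),
# columns = T. gondii serostatus (positive, negative). Counts are illustrative.
table = np.array([
    [110, 28],   # rural:      ~79.7% positive in the abstract
    [ 96, 41],   # peri-urban: ~70.1% positive in the abstract
])

chi2, p, dof, expected = chi2_contingency(table)
prevalence = table[:, 0] / table.sum(axis=1)
print(f"seroprevalence by residence: {prevalence.round(3)}")
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```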

Keywords: seroepidemiology, Toxoplasma gondii, women, child-bearing age, communities, Ile-Ife, Nigeria

Procedia PDF Downloads 174
800 Seismic Retrofit of Tall Building Structure with Viscous, Visco-Elastic, Visco-Plastic Damper

Authors: Nicolas Bae, Theodore L. Karavasilis

Abstract:

Increasingly, a large number of new and existing tall buildings are required to improve their resilient performance against strong winds and earthquakes to minimize direct as well as indirect damage to society. Interruption of the functions housed in tall building structures in metropolitan regions can be severely hazardous in socio-economic terms, which further increases the requirement for advanced seismic performance. To achieve these progressive requirements, the seismic reinforcement of some old, conventional buildings has become enormously costly. The methods of increasing buildings' resilience against wind or earthquake loads have also become more advanced. Up to now, vibration control devices, such as passive damper systems, are still regarded as an effective and easy-to-install option for improving the seismic resilience of buildings at affordable prices. The main purpose of this paper is to examine 1) the optimization of the shape of the visco-plastic brace damper (VPBD) system, which is a hybrid damper system, so that it can maximize its energy dissipation capacity in tall buildings against wind and earthquake loads, and 2) the verification of the seismic performance of the visco-plastic brace damper system in tall buildings, up to forty-storey steel frame buildings, by comparing the results of Non-Linear Response History Analysis (NLRHA) with and without the damper system. The most significant contribution of this research is to introduce an optimized hybrid damper system that is adequate for high-rise buildings. The efficiency of this visco-plastic brace damper system and the advantages of its use in tall buildings can be verified, since tall buildings tend to be affected by wind load in their normal state and also by earthquake load after yielding of the steel plates. The modeling of the prototype tall building was conducted using the OpenSees software. Three types of models were used to verify the performance of the damper (MRF, MRF with visco-elastic damper, and MRF with visco-plastic damper); a set of 22 seismic records was used, and the scaling procedure followed the FEMA code. It is shown that the MRF with viscous or visco-elastic dampers is markedly more effective in reducing inelastic deformation, such as roof displacement, maximum story drift, and roof velocity, compared to the MRF alone.
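
To make the with/without-damper comparison concrete, here is a heavily simplified sketch: a single-degree-of-freedom stand-in for the forty-storey OpenSees models, with hypothetical mass, stiffness, damping, and ground-motion values, that integrates the equation of motion for the bare frame and for the same frame with an added linear viscous damper and compares peak displacements. It illustrates the response-history comparison only, not the actual NLRHA models of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# SDOF idealisation (all values are illustrative assumptions).
m, k = 2.0e6, 8.0e7                   # mass [kg], stiffness [N/m]
c_frame = 2 * 0.02 * np.sqrt(k * m)   # 2% inherent damping
c_damper = 2 * 0.15 * np.sqrt(k * m)  # supplemental viscous damper (15%)

# Synthetic ground acceleration: a decaying sine pulse [m/s^2].
ag = lambda t: 3.0 * np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.2 * t)

def peak_displacement(c_total):
    def ode(t, y):
        x, v = y
        return [v, (-c_total * v - k * x) / m - ag(t)]
    sol = solve_ivp(ode, (0.0, 30.0), [0.0, 0.0], max_step=0.005)
    return np.max(np.abs(sol.y[0]))

peak_bare = peak_displacement(c_frame)
peak_damped = peak_displacement(c_frame + c_damper)
print(f"peak displacement: bare MRF {peak_bare:.3f} m, "
      f"with viscous damper {peak_damped:.3f} m")
```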

Keywords: tall steel building, seismic retrofit, viscous, viscoelastic damper, performance based design, resilience based design

Procedia PDF Downloads 187
799 Numerical Evaluation of Lateral Bearing Capacity of Piles in Cement-Treated Soils

Authors: Reza Ziaie Moayed, Saeideh Mohammadi

Abstract:

Soft soil is encountered in many civil engineering projects, such as coastal, marine, and road projects. Because of the low shear strength and stiffness of soft soils, large settlements and low bearing capacity will occur under superstructure loads. This makes civil engineering activities more difficult and costlier. In the case of soft soils, improvement is a suitable method to increase the shear strength and stiffness for engineering purposes. In recent years, the artificial cementation of soil by cement and lime has been extensively used for soft soil improvement. Cement stabilization is a well-established technique for improving soft soils. Artificial cementation increases the shear strength and hardness of natural soils. On the other hand, in soft soils, piles are commonly used to transfer loads to deeper ground. By using cement-treated soil around the piles, high bearing capacity and low settlement of the piles can be achieved. In the present study, the lateral bearing capacity of short piles in cemented soils is investigated by a numerical approach. For this purpose, the three-dimensional (3D) finite difference software FLAC 3D is used. Cement-treated soil has a strain hardening-softening behavior because of the breaking of bonds between the cementing agent and the soil particles. To simulate such behavior, a strain hardening-softening constitutive model is used for the cement-treated soft soil. Additionally, the conventional elastic-plastic Mohr-Coulomb constitutive model and a linear elastic model are used for the stress-strain behavior of the natural soils and the pile, respectively. To determine the parameters of the constitutive models, and also to verify the numerical model, the results of available triaxial laboratory tests on cement-treated soft soil and of in-situ loading of piles in cement-treated soft soil are used. Different parameters are considered in a parametric study to determine which parameters most affect the bearing capacity of piles in cement-treated soils. In the present paper, the effects of various lengths and heights of the artificially cemented area, different diameters and lengths of the pile, and the properties of the materials are studied. Also, the effect of the choice of constitutive model for cement-treated soils on the bearing capacity of the pile is investigated.

Keywords: bearing capacity, cement-treated soils, FLAC 3D, pile

Procedia PDF Downloads 123
798 Effect of Seasons and Storage Methods on Seed Quality of Slender Leaf (Crotalaria Sp.) in Western Kenya

Authors: Faith Maina

Abstract:

Slender leaf (Crotalaria brevidens and Crotalaria ochroleuca), African indigenous vegetables, are an important source of nutrients, income, and traditional medicines in Kenya. However, their production is constrained by poor quality seed, due to a lack of standardized agronomic and storage practices. Factors that affect the quality of seed in storage include the duration of storage, seed moisture, temperature, relative humidity, oxygen pressure during storage, diseases, and pests. These factors vary with the type of storage method used. The aim of the study was to investigate the effect of various storage methods on the seed quality of slender leaf and to recommend the best methods of seed storage to farmers in Western Kenya. Seeds from various morphotypes of slender leaf that had a high germination percentage (90%) were stored in pots, jars, brown paper bags, and polythene bags in Kakamega and Siaya. Other seeds were stored in a freezer at the University of Eldoret. In Kakamega County, the average room temperature was 23°C and the relative humidity was 85% during the storage period of May to July 2006. Between December and February 2006, the average room temperature was 26°C while the relative humidity was 80% in the same county. In Siaya County, the average room temperature was 25°C and the relative humidity was 80% during the storage period of May to July 2006. In the same county, the average temperature was 28°C and the relative humidity 65% during the period of December and February 2006. Storage duration was 90 days for each season. Seed viability and vigour were determined for each storage method. Data obtained from the storage experiments were subjected to ANOVA and t-tests using Statistical Analysis Software (SAS). Season of growth and storage methods significantly influenced seed quality in Kakamega and Siaya counties. Seeds from the long rains season had higher seed quality than those grown during the short rains season. Generally, seeds stored in pots, brown paper bags, jars, and the freezer had higher seed quality than those stored in polythene bags. It was concluded that, in order to obtain high-quality seeds, farmers should store slender leaf seeds in pots, brown paper bags, plastic jars, or a freezer.

Keywords: Crotalaria sp, seed, quality, storage

Procedia PDF Downloads 198
786 DeepNIC: A Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs

Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.

Abstract:

Introduction: Deep Learning (DL) is a very powerful tool for analyzing image data. But for tabular data, it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL be the absolute tool for data classification? All current solutions consist in repositioning the variables in a 2D matrix using their correlation proximity. In doing so, one obtains an image whose pixels are the variables. We implement a technology, DeepNIC, that offers the possibility of obtaining an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary and atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision trees, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate in 3 dimensions the predictive quality of a variable: performance, complexity, and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 hyperparameters used in the Neurops. By varying these 2 hyperparameters, we obtain a 2D matrix of probabilities for each NIC. We can combine these 10 NICs with the functions AND, OR, and XOR. The total number of combinations is greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels. The intensity of the pixels is proportional to the probability of the associated NIC. The color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to make the prediction of Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the GSE22513 public data (omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison on several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.
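
The core idea (turn each variable's grid of NIC probabilities into a grey-level image and classify it with a basic CNN) can be sketched as follows. The image size, the random stand-in "probabilities", and the small network are placeholders assumed for illustration only, since the paper's ROP/RFPT and NIC computations themselves are not reproduced here.

```python
import torch
import torch.nn as nn

# Placeholder: one image per tabular variable, whose pixel intensities stand in
# for NIC probabilities (the actual NICs would come from the ROP/RFPT pipeline).
n_variables, img_size = 32, 128          # assumed, far smaller than 1166x1167
images = torch.rand(n_variables, 1, img_size, img_size)   # grey levels in [0, 1]
labels = torch.randint(0, 2, (n_variables,))              # e.g. predictive / not

# A deliberately basic CNN, in the spirit of the abstract's "simple CNNs".
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                   # toy supervised training loop
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
    print(f"epoch {epoch}: loss = {loss.item():.3f}")
```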

Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification

Procedia PDF Downloads 118
796 Searching SNP Variants in MyoD-1 and MyoD-2 Genes Linked to Body Weight in Gilthead Seabream, Sparus aurata L.

Authors: G. Blanco-Lizana, C. García-Fernández, J. A. Sánchez

Abstract:

Growth is a productive trait regulated by a large and complex gene network whose members have very different effects. Some of them (candidate genes) have a larger effect and are excellent resources in which to search for polymorphisms correlated with differences in growth rates. This study was focused on the identification of single nucleotide polymorphisms (SNPs) in the MyoD-1 and MyoD-2 genes, members of the family of myogenic regulatory factors (MRFs) with a key role in the differentiation and development of muscular tissue, and on their evaluation as potential markers in genetic selection programs for growth in gilthead sea bream (Sparus aurata). Through the sequencing of 1,968 bp of the MyoD-1 gene [AF478568.1] and 1,963 bp of the MyoD-2 gene [AF478569.1] in 30 seabream (classified as unrelated by microsatellite markers), three SNPs were identified in each gene (SaMyoD-1_D2100A (D indicates a deletion), SaMyoD-1_A2143G and SaMyoD-1_A2404G, and SaMyoD-2_A785C, SaMyoD-2_C1982T and SaMyoD-2_A2031T). The relationships between SNPs and body weight were evaluated by SNP genotyping of 53 breeders from two broodstocks (A: 18♀-9♂; B: 16♀-10♂) and 389 offspring divided into two groups (slow- and fast-growth) with significant differences in growth at 18 months of development (A18Slow: N=107, A18Fast: N=103, B18Slow: N=92 and B18Fast: N=87) (Borrell et al., 2011). Haplotypes and diplotypes were reconstructed from the genotype data with the PHASE 2.1 software. Differences among the means of the different diplotypes were assessed by one-way ANOVA followed by a post-hoc Tukey test. The association analysis indicated that single SNPs did not show a significant effect on body weight. However, when the analysis was carried out considering haplotype data, it was observed that the DGG haplotype of the MyoD-1 gene and the CCA haplotype of the MyoD-2 gene were associated with lower body weight. This haplotype combination always showed the lowest mean body weight (P<0.05) in three (A18Slow, A18Fast and B18Slow) of the four groups tested. Individuals with the DGG haplotype of the MyoD-1 gene showed a 25.5% lower mean body weight, and those with the CCA haplotype of the MyoD-2 gene a 14-18% lower mean body weight. Although further studies are needed to validate the role of these three SNPs as markers for body weight, the polymorphism-trait association established in this work creates promising expectations for the use of these variants as genetic tools in future gilthead seabream breeding programs.
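
The diplotype comparison described above (one-way ANOVA followed by a post-hoc Tukey test on body weight) can be reproduced generically in Python as below; the weights and diplotype labels are synthetic stand-ins, not the study data, with the carrier group simply assumed to be lighter in line with the reported direction of the effect.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)

# Synthetic body weights (g) for three illustrative diplotype groups,
# with the DGG/DGG carriers assumed lighter, as reported for MyoD-1.
weights = {
    "DGG/DGG": rng.normal(300, 40, 60),
    "DGG/AAA": rng.normal(380, 40, 60),
    "AAA/AAA": rng.normal(400, 40, 60),
}

f, p = f_oneway(*weights.values())
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")

body_weight = np.concatenate(list(weights.values()))
groups = np.repeat(list(weights.keys()), [len(v) for v in weights.values()])
print(pairwise_tukeyhsd(body_weight, groups, alpha=0.05))
```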

Keywords: growth, MyoD-1 and MyoD-2 genes, selective breeding, SNP-haplotype

Procedia PDF Downloads 326
795 In vitro and in vivo Anticancer Activity of Nanosize Zinc Oxide Composites of Doxorubicin

Authors: Emma R. Arakelova, Stepan G. Grigoryan, Flora G. Arsenyan, Nelli S. Babayan, Ruzanna M. Grigoryan, Natalia K. Sarkisyan

Abstract:

Novel nanosize zinc oxide composites of doxorubicin, obtained by deposition of a 180 nm thick zinc oxide film on the drug surface using DC-magnetron sputtering of a zinc target, in the form of gels (PEO+Dox+ZnO and Starch+NaCMC+Dox+ZnO), were studied for drug delivery applications. The cancer specificity was revealed both in in vitro and in vivo models. The cytotoxicity of the test compounds was analyzed against human cancer (HeLa) and normal (MRC5) cell lines using the MTT colorimetric cell viability assay. IC50 values were determined and compared to reveal the cancer specificity of the test samples. The mechanism of action of the most active compound was investigated using flow cytometry analysis of the DNA content after PI (propidium iodide) staining. Data were analyzed with Tree Star FlowJo software using the Dean-Jett-Fox cell cycle analysis module. The in vivo anticancer activity experiments were carried out on mice with inoculated ascitic Ehrlich's carcinoma after intraperitoneal administration of doxorubicin and its zinc oxide compositions. It was shown that the deposition of the nanosize zinc oxide film on the drug surface leads to selective anticancer activity of the composites at the cellular level, with selectivity indices (SI) ranging from 4 (Starch+NaCMC+Dox+ZnO) to 200 (PEO(gel)+Dox+ZnO), the latter being higher than that of free Dox (SI = 56). A significant increase in in vivo antitumor activity (by a factor of 2-2.5) and a decrease in the general toxicity of the zinc oxide compositions of doxorubicin in the form of the above-mentioned gels, compared to free doxorubicin, were shown on the model of inoculated Ehrlich's ascitic carcinoma. Mechanistic studies of the anticancer activity revealed a cytostatic effect based on a high level of DNA biosynthesis inhibition at considerably low concentrations of the zinc oxide compositions of doxorubicin. The results of the in vitro and in vivo studies of the PEO+Dox+ZnO and Starch+NaCMC+Dox+ZnO composites confirm the high potential of the nanosize zinc oxide composites as a vector delivery system for future application in cancer chemotherapy.
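
The selectivity index quoted above is simply the IC50 in the normal cell line divided by the IC50 in the cancer line. The sketch below shows, with invented dose-response points, how IC50 values could be fitted from MTT viability data with a four-parameter logistic curve and combined into an SI; it is a generic illustration, not the authors' analysis or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

doses = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])  # uM, illustrative

# Invented % viability data for a cancer (HeLa-like) and a normal (MRC5-like) line.
viab_cancer = np.array([98, 92, 75, 48, 22, 10, 5], dtype=float)
viab_normal = np.array([99, 98, 96, 90, 78, 55, 30], dtype=float)

def fit_ic50(viability):
    popt, _ = curve_fit(four_pl, doses, viability,
                        p0=[5.0, 100.0, 0.3, 1.0], maxfev=10000)
    return popt[2]

ic50_cancer, ic50_normal = fit_ic50(viab_cancer), fit_ic50(viab_normal)
print(f"IC50 cancer = {ic50_cancer:.2f} uM, IC50 normal = {ic50_normal:.2f} uM")
print(f"selectivity index SI = {ic50_normal / ic50_cancer:.1f}")
```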

Keywords: anticancer activity, cancer specificity, doxorubicin, zinc oxide

Procedia PDF Downloads 407
794 Financial Analysis of Foreign Direct Investment in Mexico

Authors: Juan Peña Aguilar, Lilia Villasana, Rodrigo Valencia, Alberto Pastrana, Martin Vivanco, Juan Peña C

Abstract:

Each year a growing number of companies enter Mexico in search of domestic market share. These activities, including stores, long-distance and local telephony, raw materials and energy, and particularly the financial sector, have managed to significantly increase their weight in the flows of FDI into Mexico. However, it should be considered whether these FDI trends are positive for the Mexican economy, whether these activities increase Mexican exports in the medium term, and what their share is in GDP, gross fixed capital formation, and employment. In general, these activities have so far been unable to generate significant linkages with the rest of the economy, a process that has not been favored by neutral or horizontal competitiveness policies. Since the nineties, foreign direct investment (FDI) has shown a remarkable dynamism, both internationally and in Latin America and in Mexico. Mexico was the most important recipient of FDI in Latin America during 1990-1995, until it was displaced by Brazil; FDI increased from levels below 1% of GDP during the eighties to around 3% of GDP during the nineties. Its impact has been significant not only from a macroeconomic perspective; it has also allowed the generation of a new industrial production structure and organization, parallel to a significant modernization of a segment of the economy. The case of Mexico is also particularly interesting and relevant because, until 1993, FDI had focused on the purchase of state assets during the privatization process. This paper aims to present FDI flows in Mexico and analyze the different business strategies that have been touched and encouraged by FDI. On the one hand, it briefly discusses regulatory issues and the source and recipient sectors of FDI. Furthermore, the paper presents in more detail the impacts and changes that FDI has generated in the Mexican economy; the macroeconomic context and the later legislative changes that resulted in the current regulation of FDI in Mexico, including aspects of the North American Free Trade Agreement (NAFTA), are also examined. It is worth noting that foreign investment cannot only be considered from the perspective of the receiving economic units. Instead, these flows also reflect the strategic interests of transnational corporations (TNCs) and other companies seeking access to markets and increased competitiveness of their production networks and global distribution, among other reasons. Similarly, it is important to note that foreign investment in its various forms is critically dependent on historical and temporal aspects. Thus, the same functionality can vary significantly depending on the specific characteristics of both recipient units and sources of FDI, including macroeconomic, institutional, industrial organization, and social aspects, among others.

Keywords: foreign direct investment (FDI), competitiveness, neoliberal regime, globalization, gross domestic product (GDP), NAFTA, macroeconomic

Procedia PDF Downloads 446
793 Using Corpora in Semantic Studies of English Adjectives

Authors: Oxana Lukoshus

Abstract:

The methods of corpus linguistics, a well-established field of research, are being increasingly applied in cognitive linguistics. Corpus data are especially useful for different quantitative studies of grammatical and other aspects of language. The main objective of this paper is to demonstrate how present-day corpora can be applied in semantic studies in general and in semantic studies of adjectives in particular. Polysemantic adjectives have been the subject of numerous studies. But most of them have been carried out on dictionaries. Undoubtedly, dictionaries are viewed as one of the basic data sources, but only at the initial steps of a research project. The author usually starts with the analysis of the lexicographic data, after which s/he comes up with a hypothesis. In the research conducted, the three polysemantic synonyms true, loyal, and faithful were analyzed in terms of differences and similarities in their semantic structure. A corpus-based approach in the study of the above-mentioned adjectives involves the following. After the analysis of the dictionary data, the following corpora were consulted to study the distributional patterns of the words under study: the British National Corpus (BNC) and the Corpus of Contemporary American English (COCA). These corpora are continually updated and contain thousands of examples of the words under research, which makes them a useful and convenient data source. For the purpose of this study there were no special requirements regarding the genre, mode, or time of the texts included in the corpora. Out of the range of possibilities offered by corpus-analysis software (e.g. word lists, statistics of word frequencies, etc.), the most useful tool for the semantic analysis was extracting a list of co-occurrences for the given search words. Searching by lemmas, e.g. true, true to, and grouping the results by lemmas proved to be the most efficient corpus features for the adjectives under study. Following the search process, the corpora provided a list of co-occurrences, which were then analyzed and classified. Not every co-occurrence was relevant for the analysis. For example, phrases like 'An enormous sense of responsibility to protect the minds and hearts of the faithful from incursions by the state was perceived to be the basic duty of the church leaders' or ''True,' said Phoebe, 'but I'd probably get to be a Union Official immediately'' were left out, as in the first example the faithful is a substantivized adjective and in the second example true is used alone with no other parts of speech. The subsequent analysis of the corpus data provided the grounds for the distribution groups of the adjectives under study, which were then investigated with the help of a semantic experiment. To sum up, the corpus-based approach has proved to be a powerful, reliable and convenient tool for obtaining data for further semantic study.
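
Outside of the corpora's own web interfaces, the co-occurrence extraction step described here can be imitated on any locally available, tokenised text. The short sketch below counts collocates of a search word within a fixed window, using a toy corpus as a stand-in for BNC or COCA concordance lines; it illustrates the procedure only and does not query either corpus.

```python
from collections import Counter

# Toy corpus standing in for BNC/COCA concordance lines (illustration only).
sentences = [
    "he remained loyal to his old friends",
    "she was a faithful servant of the crown",
    "the story is true to life in every detail",
    "a loyal and faithful companion stayed true to her word",
]

def collocates(corpus, search_word, window=3):
    """Count words occurring within +/- `window` tokens of `search_word`."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == search_word:
                left = max(0, i - window)
                neighbours = tokens[left:i] + tokens[i + 1:i + 1 + window]
                counts.update(neighbours)
    return counts

for adjective in ("true", "loyal", "faithful"):
    print(adjective, collocates(sentences, adjective).most_common(5))
```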

Keywords: corpora, corpus-based approach, polysemantic adjectives, semantic studies

Procedia PDF Downloads 311
792 The Investigation of Work Stress and Burnout in Nurse Anesthetists: A Cross-Sectional Study

Authors: Yen Ling Liu, Shu-Fen Wu, Chen-Fuh Lam, I-Ling Tsai, Chia-Yu Chen

Abstract:

Purpose: Nurse anesthetists are confronting extraordinarily high job stress in their daily practice, deriving from fast-track anesthesia care, the risk of perioperative complications, routine rotating shifts, teaching programs, and interactions with the surgical team in the operating room. This study investigated the influence of work stress on the burnout and turnover intention of nurse anesthetists in a regional general hospital in Southern Taiwan. Methods: This was a descriptive correlational study carried out on 66 full-time nurse anesthetists. Data were collected from March 2017 to June 2017 by in-person interview, and a self-administered structured questionnaire was completed by the interviewees. Outcome measurements included the Practice Environment Scale of the Nursing Work Index (PES-NWI), the Maslach Burnout Inventory (MBI), and nursing staff turnover intention. Numerical data were analyzed by descriptive statistics, independent t-tests, or one-way ANOVA. Categorical data were compared using the chi-square test (x²). Datasets were analyzed with Pearson product-moment correlation and linear regression, using SPSS 20.0 software. Results: The average score for job burnout was 68.79 ± 16.67 (out of 100). The mean scores of the three major components of burnout were 26.32 for emotional exhaustion, 13.65 for depersonalization, and 24.48 for personal accomplishment. These average scores suggested that these nurse anesthetists were at high risk of burnout, which was inversely correlated with turnover intention (t = -4.048, P < 0.05). In a linear regression model, emotional exhaustion and depersonalization were the two independent factors that predicted turnover intention in the nurse anesthetists (19.1% of total variance). Conclusion/Implications for Practice: The study identifies that the high risk of job burnout in nurse anesthetists is not simply derived from physical overload, but most likely results from additional emotional and psychological stress. The occurrence of job burnout may affect the quality of nursing work and also influence family harmony, which in turn may increase the turnover rate. A multimodal approach is warranted to reduce work stress and job burnout in nurse anesthetists and to enhance their willingness to contribute to anesthesia care.
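
The regression result quoted above (emotional exhaustion and depersonalization jointly explaining about 19% of the variance in turnover intention) corresponds to an ordinary least squares model. A generic Python equivalent of that SPSS analysis, run here on synthetic scores rather than the study data, might look like this.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 66  # number of nurse anesthetists in the study

# Synthetic MBI subscale scores and a turnover-intention score (illustrative).
df = pd.DataFrame({
    "emotional_exhaustion": rng.normal(26.32, 8.0, n),
    "depersonalization": rng.normal(13.65, 5.0, n),
})
df["turnover_intention"] = (
    0.08 * df["emotional_exhaustion"]
    + 0.10 * df["depersonalization"]
    + rng.normal(0.0, 1.5, n)
)

model = smf.ols(
    "turnover_intention ~ emotional_exhaustion + depersonalization", data=df
).fit()
print(model.summary().tables[1])
print(f"R-squared = {model.rsquared:.3f}")
```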

Keywords: anesthesia nurses, burnout, job, turnover intention

Procedia PDF Downloads 291
791 Budgetary Performance Model for Managing Pavement Maintenance

Authors: Vivek Hokam, Vishrut Landge

Abstract:

An ideal maintenance program for an industrial road network is one that would maintain all sections at a sufficiently high level of functional and structural condition. However, due to various constraints such as budget, manpower, and equipment, it is not possible to carry out maintenance on all the needy industrial road sections within a given planning period. A rational and systematic priority scheme needs to be employed to select and schedule industrial road sections for maintenance. Priority analysis is a multi-criteria process that determines the best ranking list of sections for maintenance based on several factors. In priority setting, difficult decisions are required for the selection of sections for maintenance. It is more important to repair a section with poor functional condition, which includes an uncomfortable ride, or with poor structural condition, i.e. sections that are in danger of becoming structurally unsound. It would seem, therefore, that any rational priority setting approach must consider the relative importance of the functional and structural condition of the section. Maintenance priority indices and pavement performance models tend to focus mainly on pavement condition, traffic criteria, etc. There is a need to develop a model which is suited to the limited budget provisions for the maintenance of pavements. Linear programming is one of the most popular and widely used quantitative techniques. A linear programming model provides an efficient method for determining an optimal decision chosen from a large number of possible decisions. The optimum decision is one that meets a specified objective of management, subject to various constraints and restrictions. The objective is mainly the minimization of the maintenance cost of roads in an industrial area. In order to determine the objective function for the analysis of the distress model, it is necessary to put realistic data into the formulation. Each type of repair is quantified in a number of stretches by considering 1000 m as one stretch. The stretch considered under study has a length of 3750 m. The quantity has to be put into an objective function for maximizing the number of repairs in a stretch. The distresses observed in this stretch are potholes, surface cracks, rutting, and ravelling. The distress data are measured manually by observing each distress level on a stretch of 1000 m. The maintenance and rehabilitation measures that are currently followed are based on subjective judgments. Hence, there is a need to adopt a scientific approach in order to use the limited resources effectively. It is also necessary to determine pavement performance and deterioration prediction relationships more accurately, together with the economic benefits of road networks with respect to vehicle operating cost. The road network infrastructure should deliver the best results expected from the available funds. In this paper, the objective function for the distress model is determined by linear programming, and a deterioration model considering overloading is discussed.
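
To show what such a linear-programming formulation can look like in practice, the sketch below maximises the number of 1000 m stretch-repairs across the four observed distress types under a single budget constraint; the unit costs, budget, and quantity limits are hypothetical placeholders, and the integrality of the repair counts is relaxed for simplicity.

```python
from scipy.optimize import linprog

# Decision variables: number of 1000 m stretches repaired for each distress type
# in the 3750 m study stretch (continuous relaxation of an integer quantity).
distresses = ["potholes", "surface cracks", "rutting", "ravelling"]
unit_cost = [120_000, 60_000, 150_000, 40_000]  # assumed cost per stretch (currency units)
budget = 300_000                                 # assumed available maintenance budget

# linprog minimises, so maximise the repair count by minimising its negative.
c = [-1.0] * len(distresses)
A_ub = [unit_cost]                       # total cost must not exceed the budget
b_ub = [budget]
bounds = [(0, 3.75)] * len(distresses)   # at most 3.75 stretches of 1000 m each

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
for name, x in zip(distresses, res.x):
    print(f"{name}: repair {x:.2f} stretches")
print(f"total repaired stretches = {-res.fun:.2f}")
```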

Keywords: budget, maintenance, deterioration, priority

Procedia PDF Downloads 205
790 Proposal of a Rectenna Built by Using Paper as a Dielectric Substrate for Electromagnetic Energy Harvesting

Authors: Ursula D. C. Resende, Yan G. Santos, Lucas M. de O. Andrade

Abstract:

The recent and fast development of the Internet, wireless and telecommunication technologies, and low-power electronic devices has led to a significant amount of electromagnetic energy being available in the environment and to the expansion of smart applications technology. These applications have been used in Internet of Things devices and 4G and 5G solutions. The main feature of this technology is the use of wireless sensors. Although these sensors are low-power loads, their use imposes huge challenges in terms of an efficient and reliable power supply that avoids the traditional battery. Radio frequency energy harvesting technology is especially suitable for powering wireless sensors by using a rectenna, since it can be completely integrated into the structure of the distributed host sensors, reducing cost, maintenance, and environmental impact. The rectenna is a device composed of an antenna and a rectifier circuit. The antenna's function is to collect as much radio frequency radiation as possible and transfer it to the rectifier, a nonlinear circuit that converts the very low input radio frequency energy into direct current voltage. In this work, a set of rectennas, mounted on a paper substrate, which can be used for the inner coating of buildings while simultaneously harvesting electromagnetic energy from the environment, is proposed. Each individual rectenna is composed of a 2.45 GHz patch antenna and a voltage doubler rectifier circuit, built on the same paper substrate. The antenna contains a rectangular radiator element and a microstrip transmission line that was designed and optimized using the CST simulation software in order to obtain values of the S11 parameter below -10 dB at 2.45 GHz. In order to increase the amount of harvested power, eight individual rectennas, incorporating metamaterial cells, were connected in parallel, forming a system denominated the Electromagnetic Wall (EW). In order to evaluate the EW performance, it was positioned at a variable distance from an internet router and fed a 27 kΩ resistive load. The results obtained showed that if more than one rectenna is associated in parallel, a power level sufficient to feed very-low-consumption sensors can be achieved. The 0.12 m² EW proposed in this work was able to harvest 0.6 mW from the environment. It was also observed that the use of metamaterial structures provides a significant increase in the amount of electromagnetic energy harvested, which rose from 0.2 mW to 0.6 mW.
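
A quick back-of-the-envelope check of the harvesting figures is possible from the abstract alone: 0.6 mW collected over a 0.12 m² wall corresponds to about 5 mW/m², or roughly 75 µW per individual rectenna. The snippet below also adds a hedged Friis-equation estimate of the RF power incident on one antenna at 2.45 GHz, in which the router EIRP, distance, and patch gain are assumed values rather than measurements from the paper.

```python
import math

# Figures taken from the abstract.
harvested_power = 0.6e-3      # W, harvested by the whole Electromagnetic Wall
wall_area = 0.12              # m^2
n_rectennas = 8
print(f"power density ~ {harvested_power / wall_area * 1e3:.1f} mW/m^2, "
      f"~ {harvested_power / n_rectennas * 1e6:.0f} uW per rectenna")

# Hedged Friis estimate of the power incident on one patch antenna.
freq = 2.45e9                 # Hz (from the abstract)
wavelength = 3e8 / freq
eirp = 0.1                    # W, assumed router EIRP (typical Wi-Fi, not from the paper)
distance = 1.0                # m, assumed router-to-wall distance
gain_rx = 10 ** (6.0 / 10)    # assumed ~6 dBi patch antenna gain

p_received = eirp * gain_rx * (wavelength / (4 * math.pi * distance)) ** 2
print(f"Friis estimate of received RF power: {p_received * 1e6:.0f} uW")
```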

Keywords: electromagnetic energy harvesting, metamaterial, rectenna, rectifier circuit

Procedia PDF Downloads 163
789 Contentious Politics during a Period of Transition to Democracy from an Authoritarian Regime: The Spanish Cycle of Protest of November 1975-December 1978

Authors: Juan Sanmartín Bastida

Abstract:

When a country experiences a period of transition from authoritarianism to democracy, involving an earlier process of political liberalization and a later process of democratization, a cycle of protest usually breaks out, as there is a reciprocal influence between that kind of political change and the frequency and scale of social protest events. That is what happened in Spain during the first years of its transition to democracy from the Francoist authoritarian regime, roughly between November 1975 and December 1978. Thus, the object of this study is to show and explain how that cycle of protest started, developed, and finished in relation to such political change, and to offer specific information about the main features of all protest cycles: the social movements that arose during that period, the number of protest events by month, the forms of collective action that were utilized, the groups of challengers that engaged in contentious politics, the reaction of the authorities to the actions and claims of those groups, etc. The study of this cycle of protest, using the primary sources and analytical tools that characterize the model of research on protest cycles, will make a contribution to the field of contentious politics and its phenomenon of cycles of contention, and more broadly to the political and social history of contemporary Spain. The cycle of protest and the process of political liberalization of the authoritarian regime began around the same time, but the former concluded long before the process of democratization was completed in 1982. The ascending phase of the cycle, and therefore the process of liberalization, started with the death of Francisco Franco and the proclamation of Juan Carlos I as King of Spain in November 1975; the peak of the cycle was around the first months of 1977; the descending phase started after the first general election of June 1977; and the level of protest stabilized in the last months of 1978, a year that finished with a referendum in which the Spanish people approved the current democratic constitution. It was then that we can consider that the cycle of protest came to an end. The primary sources are the news reports of protest events and social movements in the three main Spanish newspapers of the time, other written or audiovisual documents, and in-depth interviews; and the analytical tools are the political opportunities that encourage social protest, the available repertoire of contention, the organizations and networks that brought together people with the same claims and allowed them to engage in contentious politics, and the interpretative frames that justify, dignify, and motivate their collective action. These are the main four factors that explain the beginning, development, and ending of the cycle of protest, and therefore the accompanying social movements and events of collective action. Among those four factors, the political opportunities (their opening, exploitation, and closure) proved to be the most decisive.

Keywords: contentious politics, cycles of protest, political opportunities, social movements, Spanish transition to democracy

Procedia PDF Downloads 135
788 Development of Optimized Eye Mascara Packages with Bioinspired Spiral Methodology

Authors: Daniela Brioschi, Rovilson Mafalda, Silvia Titotto

Abstract:

Nowadays, packages are considered a fundamental element in the commercialization of products and services. A good package is capable of helping to attract new customers and also of increasing a product's purchase intent. In this scenario, packaging design emerges as an important tool, since products and the design of their packaging are so interconnected that they are no longer seen as separate elements. Packaging design is, in fact, capable of generating desire for a product. The packaging market for cosmetics, especially the make-up market, has also been experiencing an increasing level of sophistication and requirements. Considering that packaging represents an important link of communication with the final user and plays a significant role in the sales process, it is of great importance that packages comply not only with functional requirements but also with the visual appeal. One of the possibilities for the design of packages and, in this context, packages for make-up, is bio-inspired design, or biomimicry. Bio-inspired design presents a promising paradigm for innovation in both design and sustainable design, by using biological system analogies to develop solutions. It has gained importance as a widely diffused movement in design for environmentally conscious development and is also responsible for several useful and innovative designs. As eye mascara packages are also part of the constant evolution of design for the cosmetics area, and traditional packages present the disadvantage of product drying over time, this project aims to develop a new and innovative package for this product, by using a selected bio-inspired design methodology during the development process together with suitable computational tools. In order to guide the development process of the package, the spiral methodology, conceived by the Biomimicry Institute, was chosen, as it is a reliable tool based on traditional design methodologies. The design spiral comprises identification, translation, discovery, abstraction, emulation, and evaluation steps that can work iteratively as the process develops as a spiral. As a support tool for packaging, 3D modelling is being carried out with Autodesk Inventor 2018. Although this is ongoing research, the first results showed that the spiral design methodology, together with Autodesk Inventor, constitutes a suitable instrument for the bio-inspired design process, and nature also proved itself to be an amazing and inexhaustible source of inspiration.

Keywords: bio-inspired design, design methodology, packaging, cosmetics

Procedia PDF Downloads 185
787 Application of the Building Information Modeling Planning Approach to the Factory Planning

Authors: Peggy Näser

Abstract:

Factory planning is a systematic, objective-oriented process for planning a factory, structured into a sequence of phases, each of which is dependent on the preceding phase and makes use of particular methods and tools, and extending from the setting of objectives to the start of production. The digital factory, on the other hand, is the generic term for a comprehensive network of digital models, methods, and tools – including simulation and 3D visualisation – integrated by a continuous data management system. Its aim is the holistic planning, evaluation and ongoing improvement of all the main structures, processes and resources of the real factory in conjunction with the product. Digital factory planning has already become established in factory planning. The application of Building Information Modeling has not yet been established in factory planning but has been used predominantly in the planning of public buildings. Furthermore, this concept is limited to the planning of the buildings and does not include the planning of the equipment of the factory (machines, technical equipment) and its interfaces to the building. BIM is a cooperative method of working, in which the information and data relevant to the building's lifecycle are consistently recorded, managed and exchanged in transparent communication between the involved parties on the basis of digital models of a building. Both approaches, the planning approach of Building Information Modeling and the methodical approach of the Digital Factory, are based on the use of a comprehensive data model. Therefore it is necessary to examine how the approach of Building Information Modeling can be extended in the context of factory planning in such a way that an integration of the equipment planning, as well as the building planning, can take place in a common digital model. For this, a number of different perspectives have to be investigated: the equipment perspective, including the tools used to implement a comprehensive digital planning process; the communication perspective between the planners of different fields; the legal perspective, i.e. the legal certainty in each country; and the quality perspective, in which the quality criteria are defined and the planning is evaluated. The individual perspectives are examined and illustrated in the article. An approach model for the integration of factory planning into the BIM approach, in particular for the integrated planning of equipment and buildings and the continuous digital planning, is developed. For this purpose, the individual factory planning phases are detailed with respect to the integration of the BIM approach. A comprehensive software concept for the tools is presented. In addition, the prerequisites required for this integrated planning are described. With the help of the newly developed approach, better coordination between equipment and buildings is to be achieved, the continuity of digital factory planning and the data quality are improved, and expensive errors are avoided in the implementation.

Keywords: building information modeling, digital factory, digital planning, factory planning

Procedia PDF Downloads 261
786 The Role of the Corporate Social Responsibility in Poverty Reduction

Authors: M. Verde, G. Falzarano

Abstract:

The paper examines the connection between corporate social responsibility (CSR), the capability approach, and poverty reduction; in particular, local employment development (LED) by way of CSR initiatives. The joint action of LED/CSR results in a win-win situation, not only for the enterprises but also for all the stakeholders involved; in this regard, subsidiarity and coordination between national and regional/local authorities are central to a socially-oriented market economy. In the first section, CSR is analysed on the basis of its social function in the fight against poverty, understood as 'capabilities deprivation'. In the central part, the attention is focused on the relationship between CSR and LED, and thus on the role of enterprises in fostering capabilities development (employment). In the last part, the potential solutions are presented, stressing the possible combinations. The benchmark is the enterprise as an economic and a social institution: business should not be concerned merely with profit, but should pay more attention to its sustainable impact and social contribution. In which way could this be possible? The answer is CSR. The impact of CSR on poverty reduction is still little explored. Companies help to reduce poverty through economic contribution, human rights, and social inclusion; hence, the business becomes an 'agent of development' in the fight against 'inequality'. The starting point is the pyramid of social responsibility, where ethical and philanthropic responsibilities involve programmes and actions aimed at the personal development of individuals, improving the human standard of living in all its forms, including poverty, when people do not have a choice between different 'life options', ranging from level of education to employment. At this point, CSR comes into play and works on two dimensions, poverty reduction and poverty prevention, by means of a series of initiatives: first of all, job creation and the reduction of precarious work. Empowerment of local actors, financial support, and the combination of top-down and bottom-up initiatives are some of the CSR areas of activity. Several positive effects occur on individual levels of education, access to capital, individual health status, empowerment of youth and women, and access to social networks, and it was observed that these effects depend on the type of CSR strategy. Indeed, CSR programmes should take into account fundamental criteria, such as transparency, information about benefits, a coordination unit among institutions, and clearer guidelines. In this way, the advantages to the corporate reputation and to the community translate into better job matching on the labour market, inter alia. It is important to underline that success depends on the specific measures taken in the areas in question, adapting them to local needs in light of general principles and indices; therefore, the concrete commitment of all the stakeholders involved is decisive in order to achieve the goals. The enterprise would thus represent a concrete contribution to the pursuit of sustainable development and to the dissemination of social and well-being awareness.

Keywords: capability approach, local employment development, poverty, social inclusion

Procedia PDF Downloads 133
785 Mathematical Modeling for Continuous Reactive Extrusion of Poly Lactic Acid Formation by Ring Opening Polymerization Considering Metal/Organic Catalyst and Alternative Energies

Authors: Satya P. Dubey, Hrushikesh A Abhyankar, Veronica Marchante, James L. Brighton, Björn Bergmann

Abstract:

Aims: To develop a mathematical model that simulates the ROP of PLA, taking into account the effect of alternative energy, to be implemented in a continuous reactive extrusion production process of PLA. Introduction: The production of large amounts of waste is one of the major challenges of the present time, and polymers represent 70% of global waste. PLA has emerged as a promising polymer, as it is a compostable, biodegradable thermoplastic polymer made from renewable sources. However, the main limitation for the application of PLA is the traces of toxic metal catalyst in the final product. Thus, a safe and efficient production process needs to be developed to avoid the potential hazards and toxicity. It has been found that alternative energy sources (laser, ultrasound, microwaves) could be a prominent option to facilitate the ROP of PLA via continuous reactive extrusion. This process may result in complete extraction of the metal catalysts and facilitate the use of less active organic catalysts. Methodology: Initial investigations were performed using the data available in the literature for the reaction mechanism of the ROP of PLA based on the conventional metal catalyst stannous octoate. A mathematical model has been developed by considering significant parameters such as different initial concentration ratios of catalyst, co-catalyst, and impurity. The effects of temperature variation and alternative energies have been implemented in the model. Results: The validation of the mathematical model has been performed using data from the literature as well as actual experiments. Validation of the model including alternative energies is in progress, based on experimental data from partners of the InnoREX project consortium. Conclusion: The model developed accurately reproduces the polymerisation reaction when applying alternative energy. Alternative energies have a strong positive effect, increasing the conversion and molecular weight of the PLA. This model could be a very useful tool to complement the Ludovic® software in predicting the large-scale production process when using reactive extrusion.
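
As a minimal illustration of the kind of kinetic model described above, the sketch below integrates a simple ROP scheme in which propagation is first order in both monomer and active catalyst, and an impurity irreversibly deactivates part of the catalyst. The rate constant, concentrations, and the neglect of temperature and alternative-energy effects are hypothetical simplifications for illustration, not the InnoREX model parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed values for illustration only.
kp = 0.8           # propagation rate constant [L/(mol.s)]
M0 = 7.0           # initial lactide concentration [mol/L]
cat0 = 7.0e-3      # initial active catalyst/initiator (e.g. Sn(Oct)2/ROH) [mol/L]
impurity = 1.0e-3  # catalyst-deactivating impurity [mol/L]

active_cat = max(cat0 - impurity, 0.0)   # simplistic irreversible deactivation

def rop(t, y):
    monomer = y[0]
    return [-kp * active_cat * monomer]  # d[M]/dt, pseudo-first-order in monomer

sol = solve_ivp(rop, (0.0, 1800.0), [M0], dense_output=True)
t = np.linspace(0.0, 1800.0, 7)
conversion = 1.0 - sol.sol(t)[0] / M0
Mn = conversion * (M0 / active_cat) * 144.13   # living-chain estimate; lactide molar mass [g/mol]

for ti, xi, mi in zip(t, conversion, Mn):
    print(f"t = {ti:6.0f} s  conversion = {xi:5.2f}  Mn ~ {mi/1000:6.1f} kg/mol")
```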

Keywords: polymer, poly-lactic acid (PLA), ring opening polymerization (ROP), metal-catalyst, bio-degradable, renewable source, alternative energy (AE)

Procedia PDF Downloads 358
784 Embodied Communication - Examining Multimodal Actions in a Digital Primary School Project

Authors: Anne Öman

Abstract:

Today, in Sweden and in other countries, a variety of digital artefacts, such as laptops, tablets and interactive whiteboards, are being used at all school levels. From an educational perspective, digital artefacts challenge traditional teaching because they provide a range of modes for expression and communication and are not limited to the traditional medium of paper. Digital technologies offer new opportunities for representations and physical interactions with objects, which foreground the role of the body in interaction and learning. From a multimodal perspective, the emphasis is on the use of multiple semiotic resources for meaning-making, and the study presented here examined the differential use of semiotic resources by pupils interacting in a digitally designed task in a primary school context. The instances analyzed in this paper come from a case study in which the learning task was to create an advertising film using film software. The study involves the analysis of a single case, with the emphasis on the examination of the classroom setting. The research design was based on a micro-ethnographic perspective, and the empirical material was collected through video recordings of small-group work in order to explore pupils' communication within the group activity. The designed task allowed students to build, share, collaborate upon and publish the redesigned products. The analysis illustrates the variety of communicative modes, such as body position, gestures, visualizations and speech, and the interaction between these modes and the representations made by the pupils. The findings point to the importance of embodied communication during the small-group processes from a learning perspective, as well as to a pedagogical understanding of the pupils' representations, which were similar from a cultural literacy perspective. These findings open up discussions with further implications for school practice concerning the small-group processes as well as the redesigned products. More broadly, the findings indicate how multimodal interactions shape the learning experience in meaning-making processes, taking into account that language in a globalized society is more than reading and writing skills.

Keywords: communicative learning, interactive learning environments, pedagogical issues, primary school education

Procedia PDF Downloads 406
783 Cultural Identity and Self-Censorship in Social Media: A Qualitative Case Study

Authors: Nastaran Khoshsabk

Abstract:

The evolution of communication through the Internet has shaped and reshaped the self-presentation of social media users. Online communities both connect people and give voice to the voiceless, allowing them to present themselves nationally and globally. People all around the world experience censorship in different aspects of their lives. Censorship can be externally imposed because of political situations, or it can be self-imposed. Social media users choose the content they want to share and decide on the online audiences with whom they want to share it. Most social media networks, such as Facebook, enable their users to be selective about the shared content and its availability to other people. However, sometimes, instead of targeting a specific audience, users censor themselves or decide not to share various forms of information. These decisions are of particular importance in countries such as Iran, where the Internet is not an arena of free self-presentation and people are encouraged to stay away from political participation and from acting against Islamic values. Facebook and some other social media tools are blocked in such countries. This project investigates the importance of social media in the lives of Iranians to explore how they present themselves and construct their digital selves. The notion of cultural identity is applied in this research to explore the educational and informative role of social media in the identity formation and cultural representation of Facebook users. The study explores the self-censorship of Iranian adult Facebook users through their online self-representation and communication on the Internet. The data in this qualitative multiple case study were collected through individual synchronous online interviews with the researcher's Facebook friends and through the analysis of the participants' Facebook profiles and activities over a period of six months. The data are analysed with an emphasis on the identity formation of participants through the recognition of underlying themes. The exploration of the online interviews is based on participants' personal accounts of self-censorship and cultural understanding through using social media. Participants were asked to explain their views on censorship and conservatism in using social media; they reported how they decide which content to share on Facebook and which to self-censor, and the reasons behind these decisions. The derived codes and themes were categorised with respect to censorship, its role in the representation of the idealised self, and the place of culture in self-representation. The 'actual self' proved to be hidden by individuals for different reasons, such as its influence on their social status, academic achievements and job opportunities. It is hoped that this research will have implications for education contexts in countries that experience social media filtering, by offering an increased understanding of the importance of online communities, which can provide an educational environment to talk and learn about social taboos, the construction of adult identity in virtual environments and cultural self-presentation.

Keywords: cultural identity, identity formation, online communities, self-censorship

Procedia PDF Downloads 234
782 Application of Ground-Penetrating Radar in Environmental Hazards

Authors: Kambiz Teimour Najad

Abstract:

The basic methodology of GPR involves the use of a transmitting antenna to send electromagnetic waves into the subsurface, which then bounce back to the surface and are detected by a receiving antenna. The transmitter and receiver antennas are typically placed on the ground surface and moved across the area of interest to create a profile of the subsurface. The GPR system consists of a control unit that powers the antennas and records the data, as well as a display unit that shows the results of the survey. The control unit sends a pulse of electromagnetic energy into the ground, which propagates through the soil or rock until it encounters a change in material or structure. When the electromagnetic wave encounters a buried object or structure, some of the energy is reflected back to the surface and detected by the receiving antenna. The GPR data are then processed using specialized software that analyzes the amplitude and travel time of the reflected waves. By interpreting the data, GPR can provide information on the depth, location and nature of subsurface features and structures. GPR has several advantages over other geophysical survey methods, including its ability to provide high-resolution images of the subsurface and its non-invasive nature, which minimizes disruption to the site. However, the effectiveness of GPR depends on several factors, including the type of soil or rock, the depth of the features being investigated and the frequency of the electromagnetic waves used. In environmental hazard assessments, GPR can be used to detect buried structures, such as underground storage tanks, pipelines or utilities, which may pose a risk of contamination to the surrounding soil or groundwater. GPR can also be used to assess soil stability by identifying areas of subsurface voids or sinkholes, which can lead to the collapse of the surface. Additionally, GPR can be used to map the extent and movement of groundwater contamination, which is critical in designing effective remediation strategies. In summary, the methodology of GPR in environmental hazard assessments involves the use of electromagnetic waves to create high-resolution images of the subsurface, which are then analyzed to provide information on the depth, location and nature of subsurface features and structures. This information is critical in identifying and mitigating environmental hazards, and the non-invasive nature of GPR makes it a valuable tool in this field.
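
For illustration, the sketch below shows the standard depth-from-travel-time calculation that underlies this kind of interpretation; the permittivity and travel-time values are hypothetical examples, not survey data.

```python
# Illustrative GPR reflector depth from two-way travel time (hypothetical values).
C0 = 0.2998  # speed of light in vacuum, m/ns

def depth_from_twt(two_way_time_ns: float, rel_permittivity: float) -> float:
    """Depth of a reflector given two-way travel time and relative permittivity."""
    v = C0 / rel_permittivity ** 0.5   # wave velocity in the medium, m/ns
    return v * two_way_time_ns / 2.0   # one-way distance to the reflector

# Example: a reflection at 40 ns in dry sand (eps_r ~ 4) sits at roughly 3 m depth.
print(f"{depth_from_twt(40.0, 4.0):.2f} m")
```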

Keywords: GPR, hazard, landslide, rock fall, contamination

Procedia PDF Downloads 75
781 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

A plethora of methods exists in the scientific literature that tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc., and must output a binary response variable (i.e. "GOOD" or "BAD") stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence in financial institution databases, with the majority classified as "GOOD" clients (clients that respect the loan return calendar) alongside a small percentage of "BAD" clients. But it is the "BAD" clients we are interested in, since accurately predicting their behavior is crucial in preventing unwanted losses for loan providers. We add to this context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism – LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare our approach with state-of-the-art methods, such as support vector machines, Bayesian networks and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of eight, among which we mention the well-known Australian credit and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under the curve, partial Gini index, H-measure, Brier score and Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results, which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
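
Since the actual implementation relies on C# LINQ expression trees, the following Python sketch is only an analogue of the underlying idea: operator/leaf tree nodes, evaluation against client properties, pre-order flattening, and a simple point mutation. The node names and the example formula are illustrative, not taken from the paper.

```python
# Python analogue of the expression-tree representation (the paper uses C# LINQ trees).
# Nodes are operators or leaves (constants / client-property variables);
# pre-order flattening gives the linear genome used by mutation and crossover.
import operator
import random

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

class Node:
    def __init__(self, value, children=()):
        self.value, self.children = value, list(children)

    def evaluate(self, client):
        if self.children:                       # operator node
            a, b = (c.evaluate(client) for c in self.children)
            return OPS[self.value](a, b)
        if isinstance(self.value, str):         # variable leaf -> client property
            return client[self.value]
        return self.value                       # constant leaf

def preorder(node):
    yield node
    for child in node.children:
        yield from preorder(child)

def point_mutation(tree, rng=random):
    """Swap the operator of one randomly chosen internal node."""
    internals = [n for n in preorder(tree) if n.children]
    if internals:
        rng.choice(internals).value = rng.choice(list(OPS))
    return tree

# score = age * 0.1 + loan_duration  (the variables are hypothetical client fields)
tree = Node("+", [Node("*", [Node("age"), Node(0.1)]), Node("loan_duration")])
print(tree.evaluate({"age": 40, "loan_duration": 24}))   # 28.0
point_mutation(tree)
```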

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 113
780 Evaluating Daylight Performance in an Office Environment in Malaysia, Using Venetian Blind Systems

Authors: Fatemeh Deldarabdolmaleki, Mohamad Fakri Zaky Bin Ja'afar

Abstract:

This paper presents a fenestration analysis that studies the balance between utilizing daylight and eliminating disturbing parameters in a private office room with interior venetian blinds, taking into account different slat angles. The mean luminance of the scene and the window, the luminance ratio of the work plane and the window, the work plane illuminance and the daylight glare probability (DGP) were calculated as functions of the venetian blind design properties. Recently developed software for analyzing High Dynamic Range Images (HDRI captured by a CCD camera), such as the Radiance-based evalglare and hdrscope, helps to investigate luminance-based metrics. A measurement experiment lasting a total of eight days was conducted to investigate the impact of different venetian blind angles in an office environment under daylight conditions in Serdang, Malaysia. Detailed results for the selected case study showed that artificial lighting is necessary during the morning session for Malaysian buildings with southwest-facing windows, regardless of the venetian blind's slat angle. However, in some afternoon-session conditions, such as the 10° and 40° slat angles, the work plane illuminance exceeds the maximum of 2000 lx. Generally, a rising trend in the mean window luminance level is observed during the day. All conditions have less than 10% of the pixels exceeding 2000 cd/m² before 1:00 P.M.; however, 40% of the selected hours have more than 10% of the scene pixels above 2000 cd/m² after 1:00 P.M. Surprisingly, in the no-blind condition there is no extreme case of the window/task ratio, whereas extreme cases occur for the 20°, 30°, 40° and 50° slat angles. As expected, the mean window luminance is higher than 2000 cd/m² after 2:00 P.M. for most cases, except for the 60° slat angle condition. Regarding daylight glare probability, no DGP value higher than 0.35 occurred in this experiment, due to the window's direction, the location of the building and the studied work plane. Specifically, this paper reviews the response of different blind angles to the metrics suggested by previous standards; finally, conclusions and knowledge gaps are summarized and next steps for research are suggested. Addressing these gaps is critical for the continued progress of the energy efficiency movement.
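
As a minimal illustration of this kind of luminance post-processing, the sketch below computes the mean luminance and the fraction of pixels above 2000 cd/m² from a per-pixel luminance array; the array here is synthetic, not the measured HDR data from the study.

```python
# Illustrative post-processing of a per-pixel luminance map (cd/m^2), e.g. as
# exported from an HDR image; the values below are synthetic, not measured data.
import numpy as np

def luminance_stats(lum: np.ndarray, threshold: float = 2000.0):
    """Return mean luminance and the fraction of pixels exceeding a glare threshold."""
    frac_over = float((lum > threshold).mean())
    return float(lum.mean()), frac_over

rng = np.random.default_rng(0)
lum_map = rng.lognormal(mean=6.5, sigma=0.8, size=(480, 640))  # synthetic scene
mean_lum, frac = luminance_stats(lum_map)
print(f"mean = {mean_lum:.0f} cd/m^2, {frac:.1%} of pixels above 2000 cd/m^2")
```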

Keywords: daylighting, office environment, energy simulation, venetian blind

Procedia PDF Downloads 224
779 Vegetation Assessment Under the Influence of Environmental Variables; A Case Study from the Yakhtangay Hill of Himalayan Range, Pakistan

Authors: Hameed Ullah, Shujaul Mulk Khan, Zahid Ullah, Zeeshan Ahmad Sadia Jahangir, Abdullah, Amin Ur Rahman, Muhammad Suliman, Dost Muhammad

Abstract:

The interrelationship between vegetation and abiotic variables within an ecosystem is one of the main concerns of plant scientists. This study was designed to investigate vegetation structure and species diversity, along with the environmental variables, on the Yakhtangay hill, district Shangla, in the Himalayan mountain series of Pakistan, using multivariate statistical analysis. The quadrat method was used, and a total of 171 quadrats were laid down (57 each for trees, shrubs and herbs) to analyze the phytosociological attributes of the vegetation. The vegetation of the selected area was classified into different life forms and leaf forms according to the Raunkiaer classification, while PC-ORD software version 5 was used to classify the vegetation into different plant communities by two-way indicator species analysis (TWINSPAN). CANOCO version 4.5 was used for DCA and CCA analyses to find the directions of variation in vegetation with respect to the environmental variables. A total of 114 plant species belonging to 45 families were recorded in the area. Rosaceae (12 species) was the dominant family, followed by Poaceae (10 species) and Asteraceae (7 species). Monocots were more dominant than dicots, and angiosperms were more dominant than gymnosperms. Among the life forms, hemicryptophytes and nanophanerophytes were dominant, followed by therophytes, while among the leaf forms, microphylls were dominant, followed by leptophylls. It is concluded that edaphic factors such as soil pH, soil organic matter concentration, calcium carbonate concentration, soil EC and soil TDS, and physiographic factors such as altitude and slope, significantly affect vegetation structure, species composition and species diversity (p ≤ 0.05). The vegetation of the selected area was classified into four major plant communities, and the indicator species for each community were recorded. The classification of plants into four different communities based on edaphic gradients favors the individualistic hypothesis. Indicator species analysis (ISA) shows that the indicators of the study area are mostly indicators of the Himalayan or moist temperate ecosystem; furthermore, these indicators could be considered for micro-habitat conservation and the respective ecosystem management plans.
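
As a small illustration of the kind of diversity calculation that accompanies such quadrat surveys, the sketch below computes species richness and the Shannon index from quadrat counts; the species names and counts are hypothetical, and the choice of index is an assumption rather than something reported in the study.

```python
# Illustrative species-diversity calculation from quadrat counts
# (species and counts are hypothetical, not the survey data).
from math import log

def shannon_index(counts):
    """Shannon-Wiener diversity H' = -sum(p_i * ln p_i)."""
    total = sum(counts)
    return -sum((c / total) * log(c / total) for c in counts if c > 0)

quadrat_counts = {"Rosa webbiana": 14, "Poa annua": 30, "Artemisia sp.": 6}
print(f"species richness = {len(quadrat_counts)}")
print(f"Shannon H' = {shannon_index(quadrat_counts.values()):.3f}")
```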

Keywords: species richness, edaphic gradients, canonical correspondence analysis (CCA), TWCA

Procedia PDF Downloads 149
778 Investigation of the Association of Vitamin D Receptor Gene Polymorphism in Female Genital: Tuberculosis Cases

Authors: Swati Gautam, Amita Jain, Shyampyari Jaiswar

Abstract:

Objective: To elucidate the role of the ApaI and TaqI VDR gene polymorphisms in the pathogenesis of female genital tuberculosis (FGTB). Background: Female genital TB represents about 15-20% of total extra-pulmonary TB (EPTB). Female subjects with vitamin D deficiency have been shown to be at higher risk of pulmonary TB as well as FGTB. In the same context, some functional polymorphisms in the vitamin D receptor (VDR) gene have been considered important genetic risk factors that modulate the development of FGTB. Therefore, we aimed to elucidate the role of the ApaI and TaqI VDR gene polymorphisms in the pathogenesis of FGTB. Study design: Case-control study. Sample size: cases (60) and controls (60). Study site: Department of Obstetrics & Gynecology and Department of Microbiology, K.G.M.U. Lucknow (UP). Inclusion criteria: Cases: women aged 20-35 years whose premenstrual endometrial aspirates were positive by acid-fast bacilli (AFB) staining, TB-PCR, LJ culture or liquid culture were included in the study. Controls: women aged 20-35 years with no history of ATT and with all TB tests negative were recruited as controls. Exclusion criteria: women with endometriosis, polycystic ovaries (PCOD), positivity for Chlamydia or gonorrhea, or already on anti-tubercular therapy (ATT) were excluded. Materials and Methods: Blood samples were collected from cases and controls in EDTA tubes and stored at -20ºC. Genomic DNA extraction was carried out by the salting-out method. Genotyping of the VDR gene (ApaI and TaqI) polymorphisms was performed using the amplification refractory mutation system (ARMS) PCR technique. PCR products were analyzed by electrophoresis on 2% agarose gel. Statistical analysis was done with SPSS 16.3 software, computing odds ratios (OR) with 95% CI. Results: An increased risk of female genital tuberculosis was observed for the AA genotype (OR = 1.1419-6.212, 95% CI, p < 0.036) and the A allele (OR = 1.255-3.518, 95% CI, p < 0.006) in FGTB compared to controls; moreover, the A allele was more frequent in FGTB patients. No significant difference was observed for the TaqI polymorphism of the VDR gene. Conclusion: The ApaI polymorphism is significantly associated with the etiology of FGTB and plays an important role as a genetic risk factor in women with FGTB.
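
For illustration, the sketch below computes an odds ratio with a Wald 95% confidence interval from a 2x2 table, the kind of calculation underlying the reported associations; the counts are placeholders, not the study's genotype data.

```python
# Odds ratio with a Wald 95% confidence interval for a 2x2 table
# (the counts below are placeholders, not the study's actual data).
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: exposed cases/controls, c/d: unexposed cases/controls."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of ln(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(a=25, b=12, c=35, d=48)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```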

Keywords: ARMS, ATT, EPTB, FGTB, VDR

Procedia PDF Downloads 282
777 Kinematics and Dynamics Analysis of Crank-Piston System of a High-Power, Nine-Cylinder Aircraft Engine

Authors: Michal Biały, Konrad Pietrykowski, Rafal Sochaczewski

Abstract:

This paper presents the kinematics and dynamics analysis of the crank-piston system of an aircraft engine. The object of the study was the high-power aircraft engine ASz 62-IR, produced by the Polish company WSK "PZL-KALISZ" S.A. All analyses were performed numerically using a CAD and CAE environment. A three-dimensional model of the crank-piston system was developed based on a real engine located in the Laboratory of the Centre of Innovation and Advanced Technologies of Lublin University of Technology. During the development of the model, the reverse-engineering technique of 3D scanning was used. The ASz 62-IR engine is characterized by a radial crank-piston system, in which the cylinders are arranged radially around a circle. The crank-piston system consists of a main connecting rod and eight additional connecting rods. In addition, the three-dimensional model includes the piston pins, pistons and piston rings. As a result of the specific engine design, the motion characteristics of the individual pistons differ slightly from one another, but the model assumes that they are identical during the analysis. The three-dimensional model of the engine was implemented in the MSC Adams software. The MSC Adams environment allows multibody simulation of the dynamic phenomena and determines the state parameters of the moving elements, among which the load or force distribution at each kinematic node can be distinguished. Material properties were adopted on the basis of materials commonly used for engine parts, and the mass values of individual elements were adopted from the real engine parts. The piston gas forces were derived from pressure variations recorded during engine tests on the engine test bench. The research examined the changes of the forces acting in the individual kinematic pairs of the crank-piston system. The model allows the load on the crankshaft main bearings to be determined, which makes it possible to analyse the forces at the main supports. The model allows the testing and simulation of the kinematics and dynamics of a radial aircraft engine. This is the first stage of the work, which aims at the numerical simulation of the vibration of a multi-cylinder aircraft engine. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
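
As a first-order illustration of the piston kinematics involved, the sketch below evaluates the classic single slider-crank equations; it deliberately ignores the master/articulated-rod geometry of the radial engine, and the crank radius, rod length and speed are assumed values, not the ASz 62-IR data.

```python
# Classic single slider-crank kinematics: piston position, velocity and acceleration
# versus crank angle. This is only a first-order illustration; r, l and rpm are
# made-up values and the articulated-rod geometry of the radial engine is ignored.
import numpy as np

r, l = 0.055, 0.20              # crank radius and connecting-rod length, m (illustrative)
omega = 2200 * 2 * np.pi / 60   # crank angular speed, rad/s

theta = np.linspace(0, 2 * np.pi, 361)
x = r * np.cos(theta) + np.sqrt(l**2 - (r * np.sin(theta))**2)  # piston position, m
v = np.gradient(x, theta) * omega                               # piston velocity, m/s
a = np.gradient(v, theta) * omega                               # piston acceleration, m/s^2

print(f"stroke = {x.max() - x.min():.3f} m, peak |a| = {abs(a).max():.0f} m/s^2")
```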

Keywords: aircraft engine, CAD, CAE, dynamics, kinematics, MSC Adams, numerical simulation

Procedia PDF Downloads 383
776 Automated, Objective Assessment of Pilot Performance in Simulated Environment

Authors: Maciej Zasuwa, Grzegorz Ptasinski, Antoni Kopyt

Abstract:

Nowadays, flight simulators offer tremendous possibilities for safe and cost-effective pilot training through the utilization of powerful computational tools. Because technology has outpaced methodology, the vast majority of training-related work is still done by human instructors, which makes assessment inefficient and vulnerable to instructors' subjectivity. This research presents an Objective Assessment Tool (gOAT) developed at the Warsaw University of Technology and tested on an SW-4 helicopter flight simulator. The tool uses a database of predefined manoeuvres, defined and integrated into the virtual environment. These were implemented based on the Aeronautical Design Standard Performance Specification Handling Qualities Requirements for Military Rotorcraft (ADS-33), with predefined Mission-Task-Elements (MTEs). The core element of gOAT is an enhanced algorithm that provides the instructor with a new set of information: a set of objective flight parameters fused with a report on the psychophysical state of the pilot. While the pilot performs the task, the gOAT system automatically calculates performance using the embedded algorithms, the data registered by the simulator software (position, orientation, velocity, etc.), as well as measurements of changes in the pilot's psychophysiological state (temperature, sweating, heart rate). The complete set of measurements is presented online at the instructor's station and shown in a dedicated graphical interface. The presented tool is based on open-source solutions and is flexible to edit. Additional manoeuvres can easily be added using a guide developed by the authors, and MTEs can be changed by the instructor even during an exercise. The algorithm and measurements used allow not only basic stress-level measurement but also a significant reduction of the instructor's workload. The tool developed can be used for training purposes as well as for periodic checks of the aircrew. Flexibility and ease of modification allow wide-ranging further development and customization of the tool. Depending on the simulation purpose, gOAT can be adjusted to support simulators of aircraft, helicopters, or unmanned aerial vehicles (UAVs).
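
As a hedged illustration of tolerance-band scoring in the spirit of ADS-33 MTEs, the sketch below rates recorded parameters as desired, adequate, or failed; the parameters, targets and tolerances are invented for the example and are not the gOAT thresholds or algorithm.

```python
# Sketch of tolerance-band scoring in the spirit of ADS-33 MTEs: each recorded
# parameter is rated 'desired', 'adequate' or 'failed' against bands that are
# purely illustrative here (not the gOAT thresholds).
def rate(value, target, desired_tol, adequate_tol):
    err = abs(value - target)
    if err <= desired_tol:
        return "desired"
    return "adequate" if err <= adequate_tol else "failed"

# Hypothetical hover MTE: hold 20 ft altitude and 090 deg heading during the task.
samples = {"altitude_ft": 22.5, "heading_deg": 93.0}
bands = {"altitude_ft": (20.0, 3.0, 6.0), "heading_deg": (90.0, 5.0, 10.0)}

for name, value in samples.items():
    target, d_tol, a_tol = bands[name]
    print(f"{name}: {rate(value, target, d_tol, a_tol)}")
```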

Keywords: automated assessment, flight simulator, human factors, pilot training

Procedia PDF Downloads 147
775 QSAR Study on Diverse Compounds for Effects on Thermal Stability of a Monoclonal Antibody

Authors: Olubukayo-Opeyemi Oyetayo, Oscar Mendez-Lucio, Andreas Bender, Hans Kiefer

Abstract:

The thermal melting curve of a protein provides information on its conformational stability and could provide cues about its aggregation behavior. Naturally occurring osmolytes have been shown to improve the thermal stability of most proteins in a concentration-dependent manner. They are therefore commonly employed as additives in therapeutic protein purification and formulation. A number of intertwined and seemingly conflicting mechanisms have been put forward to explain the observed stabilizing effects, the most prominent being the preferential exclusion mechanism. We attempted to probe and summarize the molecular mechanisms of thermal stabilization of a monoclonal antibody (mAb) by developing quantitative structure-activity relationships using a rationally selected library of 120 osmolyte-like compounds from the polyhydric alcohol, amino acid and methylamine classes. Thermal stabilization potencies were experimentally determined by thermal shift assays based on differential scanning fluorimetry. The cross-validated QSAR model was developed by partial least squares regression using descriptors generated with the Molecular Operating Environment software. Careful evaluation of the results with the use of the variable importance in projection (VIP) parameter and the regression coefficients guided the selection of the descriptors most relevant to mAb thermal stability. For the mAb studied and at pH 7, the thermal stabilization effects of the tested compounds correlated positively with their fractional polar surface area and inversely with their fractional hydrophobic surface area. We cannot claim that the observed trends are universal for osmolyte-protein interactions because of protein-specific effects; however, this approach should guide the quick selection of (de)stabilizing compounds for a protein from a chemical library. Further work with a large variety of proteins and at different pH values would help to derive a solid explanation of the nature of favorable osmolyte-protein interactions for improved thermal stability. This approach may be beneficial in the design of novel protein stabilizers with optimal property values, especially when the influence of solution conditions, such as pH and buffer species, and the protein properties are factored in.
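
As an illustrative sketch of the modelling step, the code below fits a cross-validated partial least squares regression on synthetic descriptor data standing in for the MOE descriptors and the measured stabilization potencies; the data, component count and library size used here are assumptions, not the study's values.

```python
# Minimal PLS-regression sketch on synthetic descriptor data (a stand-in for the
# MOE descriptors and measured melting-temperature shifts; not the study's data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 30))                                   # 120 compounds x 30 descriptors
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=120)    # synthetic stabilization response

pls = PLSRegression(n_components=3)                               # assumed component count
q2 = cross_val_score(pls, X, y, cv=5, scoring="r2")               # cross-validated fit quality
print(f"cross-validated R^2 per fold: {np.round(q2, 2)}")
```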

Keywords: thermal stability, monoclonal antibodies, quantitative structure-activity relationships, osmolytes

Procedia PDF Downloads 327