Search results for: applied biomechanics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8444

1214 Artificial Neural Networks Application on Nusselt Number and Pressure Drop Prediction in Triangular Corrugated Plate Heat Exchanger

Authors: Hany Elsaid Fawaz Abdallah

Abstract:

This study presents a new artificial neural network (ANN) model to predict the Nusselt number and pressure drop for turbulent flow in a triangular corrugated plate heat exchanger for forced air and turbulent water flow. An experimental investigation was performed to create a new dataset of Nusselt number and pressure drop values over the following ranges of dimensionless parameters: plate corrugation angle from 0° to 60°, Reynolds number from 10,000 to 40,000, pitch-to-height ratio from 1 to 4, and Prandtl number from 0.7 to 200. Based on the ANN performance graph, a three-layer structure with {12-8-6} hidden neurons was chosen. The training procedure includes feed-forward propagation of the input parameters, back-propagation with bias and weight adjustment, and evaluation of the loss function on the training and validation datasets. A linear activation function was used at the output layer, while the rectified linear unit (ReLU) activation function was used in the hidden layers. To accelerate training, the loss function was minimized with the adaptive moment estimation (Adam) algorithm. "MinMax" normalization was applied to avoid long training times caused by drastic differences in the loss function gradients with respect to the weights. Since the test dataset is not used for training, a cross-validation technique was applied to the network using new data. The procedure was repeated until the loss function converged, or for at most 4,000 epochs with a batch size of 200 points. The program code was written in Python 3 using the open-source libraries scikit-learn, TensorFlow, and Keras. The model achieved mean absolute percentage errors of 9.4% for the Nusselt number and 8.2% for the pressure drop, a higher accuracy than that of the generalized correlations.
The model was further validated by comparing its predictions with the experimental results, yielding excellent agreement.
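As an illustration of the pipeline described above, the following sketch shows "MinMax" normalization of the four input parameters and a feed-forward pass through a {12-8-6} ReLU network with a linear output layer. This is not the authors' code: the weights are random placeholders (training with Adam is omitted), and only the input ranges are taken from the abstract.

```python
import numpy as np

def minmax_scale(x, lo, hi):
    # "MinMax" normalization: map each feature to [0, 1]
    return (x - lo) / (hi - lo)

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, weights, biases):
    # Hidden layers use ReLU; the output layer is linear,
    # matching the activations described in the abstract.
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)
    return a @ weights[-1] + biases[-1]

# Illustrative network with the {12-8-6} hidden structure:
# 4 inputs (angle, Re, pitch/height, Pr) -> 2 outputs (Nu, dP).
rng = np.random.default_rng(0)
sizes = [4, 12, 8, 6, 2]
weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

raw = np.array([30.0, 25000.0, 2.5, 100.0])   # angle, Re, p/h, Pr
lo = np.array([0.0, 10000.0, 1.0, 0.7])       # lower bounds from the abstract
hi = np.array([60.0, 40000.0, 4.0, 200.0])    # upper bounds from the abstract
x = minmax_scale(raw, lo, hi)
y = forward(x, weights, biases)
```

In Keras, the same structure would be three `Dense` ReLU layers followed by a linear `Dense(2)` output layer, compiled with the Adam optimizer.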

Keywords: artificial neural networks, corrugated channel, heat transfer enhancement, Nusselt number, pressure drop, generalized correlations

Procedia PDF Downloads 87
1213 Changing Behaviour in the Digital Era: A Concrete Use Case from the Domain of Health

Authors: Francesca Spagnoli, Shenja van der Graaf, Pieter Ballon

Abstract:

Humans do not behave rationally: we are emotional and easily influenced by others and by our context. The study of human behaviour has become a central endeavour within many academic disciplines, including economics, sociology, and clinical and social psychology. Understanding what motivates humans, what triggers them to perform certain activities, and what it takes to change their behaviour is central for researchers and companies, as well as for policy makers seeking to implement efficient public policies. While numerous theoretical approaches have been developed for diverse domains such as health, retail, and the environment, the methodological models guiding the evaluation of such research long ago reached their limits. Within this context, digitisation, information and communication technologies (ICT), wearables, the Internet of Things (IoT) connecting networks of devices, and new possibilities to collect and analyse massive amounts of data have made it possible to study behaviour from a realistic perspective, as never before. Digital technologies make it possible to (1) capture data in real-life settings, (2) regain control over data by capturing the context of behaviour, and (3) analyse huge sets of information through continuous measurement. Within this complex context, this paper describes a new framework for initiating behavioural change, capitalising on digital developments in applied research projects and applicable to academia, enterprises, and policy makers alike. By applying this model, behavioural research can address issues in different domains, such as mobility, environment, health, or media. The Modular Behavioural Analysis Approach (MBAA) is described here and validated for the first time through a concrete use case in the domain of health.
The results show that disclosing information about health, in connection with the use of digital health apps, can be a lever for changing behaviour, but it is only a first component and requires further follow-up actions. To this end, a clear definition of the different 'behavioural profiles' towards which the various types of intervention should be addressed is essential to effectively enable behavioural change. The refined version of the MBAA will therefore focus strongly on a methodology for shaping 'behavioural profiles' and the related interventions, as well as on evaluating side-effects on the creation of new business models and sustainability plans.

Keywords: behavioural change, framework, health, nudging, sustainability

Procedia PDF Downloads 221
1212 The Decline of National Sovereignty in Light of the International Transformations

Authors: Djehich Mohamed Yousri

Abstract:

The national sovereignty of states now faces a dangerous situation: a clear exacerbation of the restrictions that sovereignty has endured for a long time, if not since the establishment of the sovereign national state in the first place. Matters have reached the point where a group of analysts and commentators speak of the demise or disappearance of national sovereignty altogether, a judgment some consider exaggerated, although there is agreement on the seriousness of what has befallen the national sovereignty of medium and small states in particular. In fact, the phenomenon of national sovereignty has not completely ended: there is still a category of countries able to disagree with the American will without disappearing from the world map, as happened with the Soviet Union. China, some European countries, and some countries with leading regional roles are still able to deal with that administration through rational and complex calculations that keep the restrictions on their sovereignty minimal, or at least to draw a red line around their vital interests that restrictions on sovereignty cannot cross. It is certain that strengthening internal democratic development will increase a country's ability to challenge external restrictions on its sovereignty, to the extent that such development creates a society cohesive in the face of external attempts at hegemony, and to the extent that it eliminates some pretexts for interference in the internal affairs of states, such as the claim of a lack of democracy or of respect for human rights. These dynamics have led to transformations in the international arena in the wake of globalization and its effects on international life, including national sovereignty and the principle of state independence.
These transformations have been marred by several currents that affect sovereignty negatively, and it is poor countries that suffer from this at the expense of rich countries. This led us to investigate the extent to which national sovereignty is still present in the international arena, and the extent to which the principle of non-interference in the internal affairs of states, stipulated in the Charter of the United Nations, is applied or has ever existed. In the modern era, the theory of sovereignty has been subjected to substantial criticism and abandoned by many on the grounds that it is inconsistent with the current conditions of the international community. In fact, the theory of sovereignty has been misused to justify internal tyranny and international chaos; it has hindered the development of international law and the work of international organizations, and has enabled the dominance of strong states over weak ones. At present, the transformations of the international system in the economic, political, and military fields have led to the decline and erosion of the idea of the sovereignty of the national state.

Keywords: sovereignty, intervention, non-interference, globalization, humanitarian intervention

Procedia PDF Downloads 66
1211 Production Optimization under Geological Uncertainty Using Distance-Based Clustering

Authors: Byeongcheol Kang, Junyi Kim, Hyungsik Jung, Hyungjun Yang, Jaewoo An, Jonggeun Choe

Abstract:

It is important to characterize reservoir properties for better production management. Because information is limited, there are geological uncertainties in very heterogeneous or channelized reservoirs. One solution is to generate multiple equi-probable realizations using geostatistical methods. However, some models have wrong properties and need to be excluded for simulation efficiency and reliability. We propose a novel model selection scheme based on distance-based clustering for reliable application of a production optimization algorithm. Distance is defined as a degree of dissimilarity between the data. We calculate the Hausdorff distance to classify the models by their similarity; the Hausdorff distance is well suited to shape matching of reservoir models. We use multi-dimensional scaling (MDS) to project the models onto a two-dimensional space and group them by K-means clustering. Rather than simulating all models, we choose one representative model from each cluster and find the best model, whose production rates are closest to the true values. From this process, we can select good reservoir models near the best model with high confidence. We generated 100 channel reservoir models using single normal equation simulation (SNESIM). Since oil and gas preferentially flow through the sand facies, it is critical to characterize the pattern and connectivity of the channels in the reservoir. After calculating the Hausdorff distances and projecting the models by MDS, we observe that the models cluster according to their channel patterns. These channel distributions affect the operation controls of each production well, so the model selection scheme improves the management optimization process. For production optimization, we use particle swarm optimization (PSO), a useful global search algorithm. PSO is good at finding the global optimum of an objective function, but it is time-consuming because it uses many particles and iterations.
In addition, with multiple reservoir models, the simulation time for PSO soars. By using the proposed method, we can select good and reliable models that already match the production data. Considering the geological uncertainty of the reservoir, we can obtain well-optimized production controls for maximum net present value. The proposed method offers a novel way to select good cases among the various possibilities. The model selection scheme can be applied not only to production optimization but also to history matching and other ensemble-based methods for efficient simulation.
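The clustering step above rests on the Hausdorff distance between realizations. A minimal sketch, assuming each model is reduced to the coordinates of its channel-facies cells (the 2-D point sets here are purely illustrative, not SNESIM output):

```python
import numpy as np

def directed_hausdorff(A, B):
    # Max over points in A of the distance to the nearest point in B.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min(axis=1).max()

def hausdorff(A, B):
    # Symmetric Hausdorff distance: a degree of dissimilarity between
    # two point sets (e.g. channel-facies cell coordinates of two models).
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Two toy "channel" shapes, offset by one grid cell vertically.
A = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
print(hausdorff(A, B))  # -> 1.0
```

In the workflow described above, the pairwise Hausdorff distance matrix would then be fed to MDS for the 2-D projection and to K-means for the clustering.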

Keywords: distance-based clustering, geological uncertainty, particle swarm optimization (PSO), production optimization

Procedia PDF Downloads 143
1210 Mapping the Turbulence Intensity and Excess Energy Available to Small Wind Systems over 4 Major UK Cities

Authors: Francis C. Emejeamara, Alison S. Tomlin, James Gooding

Abstract:

Due to the highly turbulent nature of urban air flows, and because turbines are likely to be located within the roughness sublayer of the urban boundary layer, proposed urban wind installations face major challenges compared to rural installations. The challenge of operating within turbulent winds can, however, be counteracted by the development of suitable gust-tracking solutions. In order to assess the cost effectiveness of such controls, a detailed understanding of the urban wind resource, including its turbulent characteristics, is required. Estimating the ambient turbulence and total kinetic energy available at different control response times is essential in evaluating the potential performance of wind systems within the urban environment, should effective control solutions be employed. However, high-resolution wind measurements within the urban roughness sub-layer are uncommon, and detailed CFD modelling approaches are too computationally expensive to apply routinely on a city-wide scale. This paper therefore presents an alternative semi-empirical methodology for estimating the excess energy content (EEC) present in the complex and gusty urban wind. An analytical methodology for predicting the total wind energy available at a potential turbine site is proposed by assessing the relationship between turbulence intensities and EEC for different control response times. The semi-empirical model is then combined with an analytical methodology initially developed to predict mean wind speeds at various heights within the built environment, based on detailed mapping of its aerodynamic characteristics. Based on the current methodology, the additional estimates of turbulence intensities and EEC allow a more complete assessment of the available wind resource.
The methodology is applied to 4 UK cities with results showing the potential of mapping turbulence intensities and the total wind energy available at different heights within each city. Considering the effect of ambient turbulence and choice of wind system, the wind resource over neighbourhood regions (of 250 m uniform resolution) and building rooftops within the 4 cities were assessed with results highlighting the promise of mapping potential turbine sites within each city.
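The two quantities at the heart of the mapping can be sketched from a wind-speed time series. The turbulence intensity below is the standard definition (sigma_u / U_mean); the EEC expression is one plausible formulation based on the cubic dependence of kinetic energy flux on wind speed, not necessarily the paper's semi-empirical model:

```python
import statistics

def turbulence_intensity(u):
    # TI = sigma_u / U_mean for a wind-speed time series (m/s).
    return statistics.pstdev(u) / statistics.fmean(u)

def excess_energy_content(u):
    # Fractional kinetic-energy flux available beyond that of the
    # steady mean wind: (<u^3> - <u>^3) / <u>^3. A gust-tracking
    # controller with a fast enough response could, in principle,
    # capture part of this excess.
    m = statistics.fmean(u)
    m3 = statistics.fmean([x ** 3 for x in u])
    return (m3 - m ** 3) / m ** 3

steady = [5.0, 5.0, 5.0, 5.0]   # no turbulence: TI = 0, EEC = 0
gusty = [3.0, 7.0, 3.0, 7.0]    # same mean speed, strong gusts
```

For the steady series both quantities vanish; for the gusty series (same 5 m/s mean) TI is 0.4 and the excess energy fraction is 0.48, illustrating why the same mean wind speed can hide very different harvestable resources.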

Keywords: excess energy content, small-scale wind, turbulence intensity, urban wind energy, wind resource assessment

Procedia PDF Downloads 474
1209 Reduction of Residual Stress by Variothermal Processing and Validation via Birefringence Measurement Technique on Injection Molded Polycarbonate Samples

Authors: Christoph Lohr, Hanna Wund, Peter Elsner, Kay André Weidenmann

Abstract:

Injection molding is one of the most commonly used techniques in industrial polymer processing. In the conventional injection molding process, the liquid polymer is injected into the cavity of the mold, where it starts hardening immediately at the cooled walls. To compensate for the shrinkage, caused predominantly by this immediate cooling, holding pressure is applied. Throughout this process, residual stresses are produced by the temperature difference between the polymer melt and the injection mold and by the relocation of the polymer chains, which were oriented by the high process pressures and injection speeds. These residual stresses often weaken or change the structural behavior of the parts or lead to deformation of components. One solution for reducing residual stresses is variothermal processing: the mold is heated, e.g. near or above the glass transition temperature of the polymer, the polymer is injected, and before the mold is opened and the part ejected, the mold is cooled; for the next cycle, the mold is heated again and the procedure repeats. The rapid heating and cooling of the mold are realized indirectly by convection of a heated and cooled liquid (here: water) pumped through fluid channels underneath the mold surface. In this paper, the influence of variothermal processing on residual stresses is analyzed with samples at a larger scale (500 mm x 250 mm x 4 mm). In addition, the influence of functional elements, such as abrupt changes in wall thickness, bosses, and ribs, on the residual stress is examined. For this purpose, polycarbonate samples are produced by variothermal and isothermal processing. The melt is injected into a heated mold, whose temperature in our case varies between 70 °C and 160 °C. After the filling of the cavity, the closed mold is cooled down to between 70 °C and 100 °C. The pressure and temperature inside the mold are monitored and evaluated with cavity sensors.
The residual stresses of the produced samples are visualized by birefringence, which exploits the effect of stress on the refractive index of the polymer. The colorful spectrum can be revealed by placing the sample between a polarized light source and a second polarization filter. To show the effect of processing on the reduction of residual stress, the birefringence images of the isothermally and variothermally produced samples are compared and evaluated. In this comparison, the variothermally produced samples show fewer maxima in each color spectrum than the isothermally produced samples, which indicates that the residual stress of the variothermally produced samples is lower.

Keywords: birefringence, injection molding, polycarbonate, residual stress, variothermal processing

Procedia PDF Downloads 283
1208 The Effects of Molecular and Climatic Variability on the Occurrence of Aspergillus Species and Aflatoxin Production in Commercial Maize from Different Agro-climatic Regions in South Africa

Authors: Nji Queenta Ngum, Mwanza Mulunda

Abstract:

Introduction: Most African research reports frequent aflatoxin contamination of various foodstuffs, but researchers rarely specify which Aspergillus species are present in these commodities. Numerous studies provide evidence of the ability of fungi to grow, thrive, and interact with crop species, and show that these processes are largely affected by climatic variables. South Africa is a water-stressed country with high spatio-temporal rainfall variability; moreover, temperatures have been projected to rise at twice the global rate. This changing weather pattern may lead to crop stress, encouraging mold contamination with subsequent mycotoxin production. In this study, the biodiversity and distribution of Aspergillus species, with their corresponding toxins, were investigated in maize from six distinct maize-producing regions of South Africa with different weather patterns. Materials and Methods: Using cultural and molecular methods, a total of 1,028 maize samples from six distinct agro-climatic regions were examined for contamination by Aspergillus species, while high-performance liquid chromatography (HPLC) was applied to analyse the level of contamination by aflatoxins. Results: About 30% of the maize samples were contaminated by at least one Aspergillus species. Less than 30% (28.95%) of the 228 isolates subjected to the aflatoxigenicity test were found to possess at least one of the aflatoxin biosynthetic genes. Furthermore, almost 20% of the samples were contaminated with aflatoxins, with a mean total aflatoxin concentration of 64.17 ppb. Amongst the contaminated samples, 59.02% had mean total aflatoxin concentrations above the South African regulatory limits of 20 ppb for animal feed and 10 ppb for human consumption.
Conclusion: In this study, climatic variables (rainfall reduction) were found to significantly (p < 0.001) influence the occurrence of Aspergillus species (especially Aspergillus fumigatus) and the production of aflatoxin in South African commercial maize, depending on the maize variety, the year of cultivation, and the agro-climatic region in which the maize was cultivated. Amongst other findings, a reduction of the preceding year's average annual rainfall to about 21.27 mm, as opposed to the 37.24-44.1 mm average maximum rainfall of other regions, resulted in a significant increase in the aflatoxin contamination of maize.

Keywords: aspergillus species, aflatoxins, diversity, drought, food safety, HPLC and PCR techniques

Procedia PDF Downloads 76
1207 Virtual Screening and in Silico Toxicity Property Prediction of Compounds against Mycobacterium tuberculosis Lipoate Protein Ligase B (LipB)

Authors: Junie B. Billones, Maria Constancia O. Carrillo, Voltaire G. Organo, Stephani Joy Y. Macalino, Inno A. Emnacen, Jamie Bernadette A. Sy

Abstract:

The drug discovery and development process is generally known to be a very lengthy and labor-intensive process. Therefore, in order to be able to deliver prompt and effective responses to cure certain diseases, there is an urgent need to reduce the time and resources needed to design, develop, and optimize potential drugs. Computer-aided drug design (CADD) is able to alleviate this issue by applying computational power in order to streamline the whole drug discovery process, starting from target identification to lead optimization. This drug design approach can be predominantly applied to diseases that cause major public health concerns, such as tuberculosis. Hitherto, there has been no concrete cure for this disease, especially with the continuing emergence of drug resistant strains. In this study, CADD is employed for tuberculosis by first identifying a key enzyme in the mycobacterium’s metabolic pathway that would make a good drug target. One such potential target is the lipoate protein ligase B enzyme (LipB), which is a key enzyme in the M. tuberculosis metabolic pathway involved in the biosynthesis of the lipoic acid cofactor. Its expression is considerably up-regulated in patients with multi-drug resistant tuberculosis (MDR-TB) and it has no known back-up mechanism that can take over its function when inhibited, making it an extremely attractive target. Using cutting-edge computational methods, compounds from AnalytiCon Discovery Natural Derivatives database were screened and docked against the LipB enzyme in order to rank them based on their binding affinities. Compounds which have better binding affinities than LipB’s known inhibitor, decanoic acid, were subjected to in silico toxicity evaluation using the ADMET and TOPKAT protocols. Out of the 31,692 compounds in the database, 112 of these showed better binding energies than decanoic acid. Furthermore, 12 out of the 112 compounds showed highly promising ADMET and TOPKAT properties. 
Future studies involving in vitro or in vivo bioassays may be done to further confirm the therapeutic efficacy of these 12 compounds, which eventually may then lead to a novel class of anti-tuberculosis drugs.
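The ranking-and-filtering step of the screen described above can be sketched as follows, assuming (as is conventional for docking scores) that a lower, more negative binding energy means stronger predicted binding. The compound names and energies below are hypothetical; only the use of decanoic acid as the cut-off reference comes from the abstract:

```python
def screen(compounds, reference_energy):
    # Keep compounds whose docking binding energy is lower (more
    # negative, i.e. stronger predicted binding) than the reference
    # ligand's, and rank them best-first.
    hits = [c for c in compounds if c["energy"] < reference_energy]
    return sorted(hits, key=lambda c: c["energy"])

# Hypothetical scores; decanoic acid is the known LipB inhibitor
# used as the binding-affinity cut-off in the abstract.
decanoic_acid_energy = -5.2
library = [
    {"name": "cmpd_A", "energy": -7.1},
    {"name": "cmpd_B", "energy": -4.0},
    {"name": "cmpd_C", "energy": -6.3},
]
hits = screen(library, decanoic_acid_energy)
print([c["name"] for c in hits])  # -> ['cmpd_A', 'cmpd_C']
```

In the study's workflow, the survivors of this docking filter would then be passed through the ADMET and TOPKAT property predictions before any wet-lab follow-up.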

Keywords: pharmacophore, molecular docking, lipoate protein ligase B (LipB), ADMET, TOPKAT

Procedia PDF Downloads 423
1206 The Effect of Bisphenol A and Its Selected Analogues on Antioxidant Enzymes Activity in Human Erythrocytes

Authors: Aneta Maćczak, Bożena Bukowska, Jaromir Michałowicz

Abstract:

Bisphenols are among the most widely used chemical compounds worldwide. They are used in the manufacture of polycarbonates, epoxy resins, and thermal paper, which are applied in plastic containers, bottles, cans, newspapers, receipts, and other products. Among these compounds, bisphenol A (BPA) is produced in the highest amounts. There are concerns about the endocrine impact of BPA and its other toxic effects on the human organism, including hepatotoxicity, neurotoxicity, and carcinogenicity. Moreover, BPA is suspected to increase the incidence of obesity, diabetes, and heart disease. For this reason, the use of BPA in the production of plastic infant feeding bottles and some other consumer products has been restricted in the European Union and the United States. Nowadays, BPA analogues such as bisphenol F (BPF) and bisphenol S (BPS) have been developed as alternative compounds. The replacement of BPA with other bisphenols has increased the exposure of the human population to these substances. Toxicological studies have mainly focused on BPA; in contrast, only a small number of studies on the toxic effects of BPA analogues have been carried out, which makes it impossible to state whether these substitutes are safe for human health. Up to now, the mechanism of bisphenol action on erythrocytes has not been elucidated. That is why the aim of this study was to assess the effect of BPA and its selected analogues BPF and BPS on the activity of the antioxidant enzymes catalase (EC 1.11.1.6), glutathione peroxidase (EC 1.11.1.9), and superoxide dismutase (EC 1.15.1.1) in human erythrocytes. Red blood cells, with respect to their function (transport of oxygen) and their very well developed enzymatic and non-enzymatic antioxidative system, are a useful cellular model for assessing changes in redox balance. Erythrocytes were incubated with BPA, BPF, and BPS at concentrations ranging from 0.5 to 100 µg/ml for 24 h.
The activity of catalase was determined by the method of Aebi (1984). The activity of glutathione peroxidase was measured according to the method described by Rice-Evans et al. (1991), while the activity of superoxide dismutase was determined by the method of Misra and Fridovich (1972). The results showed that BPA and BPF caused changes in the antioxidant enzyme activities. BPA decreased the activity of the examined enzymes at a concentration of 100 µg/ml. We also noted that BPF decreased the activity of catalase (5-100 µg/ml), glutathione peroxidase (50-100 µg/ml), and superoxide dismutase (25-100 µg/ml), while BPS did not cause statistically significant changes in the investigated parameters. The obtained results suggest that BPA and BPF disrupt the redox balance of human erythrocytes, but the observed changes may occur in the human organism only during occupational or subacute exposure to these substances.

Keywords: antioxidant enzymes, bisphenol A, bisphenol a analogues, human erythrocytes

Procedia PDF Downloads 471
1205 Leukocyte Transcriptome Analysis of Patients with Obesity-Related High Output Heart Failure

Authors: Samantha A. Cintron, Janet Pierce, Mihaela E. Sardiu, Diane Mahoney, Jill Peltzer, Bhanu Gupta, Qiuhua Shen

Abstract:

High output heart failure (HOHF) is characterized by a high-output state resulting from an underlying disease process and is commonly caused by obesity. As obesity levels increase, more individuals will be at risk for obesity-related HOHF. However, the underlying pathophysiologic mechanisms of obesity-related HOHF are not well understood and need further research. The aim of the study was to describe the differences between the leukocyte transcriptomes of morbidly obese patients with HOHF and those with non-HOHF. In this cross-sectional study, the study team collected blood samples, demographics, and clinical data from six patients with morbid obesity and HOHF and six patients with morbid obesity and non-HOHF. The study team isolated the peripheral blood leukocyte RNA and applied stranded total RNA sequencing. Differential gene expression was calculated, and Ingenuity Pathway Analysis software was used to interpret the canonical pathways, functional changes, upstream regulators, and mechanistic and causal networks associated with the significantly different leukocyte transcriptomes. The study team identified 116 differentially expressed genes; 114 were upregulated and 2 were downregulated in the HOHF group (Benjamini-Hochberg adjusted p-value ≤ 0.05 and log2(fold-change) of ±1). The differentially expressed genes were involved in cell proliferation, mitochondrial function, erythropoiesis, erythrocyte stability, and apoptosis. The top upregulated canonical pathways associated with the differentially expressed genes were the autophagy, adenosine monophosphate-activated protein kinase signaling, and senescence pathways. The upstream regulator GATA Binding Protein 1 (GATA1) and a network associated with nuclear factor kappa-light-chain-enhancer of activated B cells (NF-kB) were also identified from the different leukocyte transcriptomes of morbidly obese patients with HOHF and non-HOHF.
To the authors' best knowledge, this is the first study to report differential gene expression in patients with obesity-related HOHF and to demonstrate the unique pathophysiologic mechanisms underlying the disease. Further research is needed to determine the role of cellular function and maintenance, inflammation, and iron homeostasis in obesity-related HOHF.
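The significance thresholds quoted above (Benjamini-Hochberg adjusted p ≤ 0.05 and |log2 fold-change| ≥ 1) amount to a simple filter over the gene table. A minimal sketch with hypothetical gene records; the field names are illustrative, not the pipeline's actual output format:

```python
def differentially_expressed(genes, alpha=0.05, lfc=1.0):
    # Keep genes passing both thresholds: BH-adjusted p-value <= alpha
    # and absolute log2 fold-change >= lfc (the abstract's "±1").
    return [g for g in genes
            if g["padj"] <= alpha and abs(g["log2fc"]) >= lfc]

# Hypothetical results table for four genes.
genes = [
    {"name": "g1", "padj": 0.01, "log2fc": 2.3},   # upregulated, significant
    {"name": "g2", "padj": 0.20, "log2fc": 1.8},   # fails adjusted p-value
    {"name": "g3", "padj": 0.03, "log2fc": -1.4},  # downregulated, significant
    {"name": "g4", "padj": 0.04, "log2fc": 0.3},   # below fold-change cut-off
]
deg = differentially_expressed(genes)
print([g["name"] for g in deg])  # -> ['g1', 'g3']
```

Requiring the adjusted rather than raw p-value is what controls the false discovery rate across the thousands of genes tested simultaneously.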

Keywords: cardiac output, heart failure, obesity, transcriptomics

Procedia PDF Downloads 55
1204 Soil Quality Response to Long-Term Intensive Resources Management and Soil Texture

Authors: Dalia Feiziene, Virginijus Feiza, Agne Putramentaite, Jonas Volungevicius, Kristina Amaleviciute, Sarunas Antanaitis

Abstract:

Investigations on soil conservation are among the most important topics in modern agronomy. Soil management practices have a great influence on soil physico-chemical quality and GHG emission. Research objective: to reveal the sensitivity and vitality of soils of different texture to long-term anthropogenisation on a Cambisol in Central Lithuania, and to compare them with non-anthropogenised soil resources. Methods: two long-term field experiments (loam on loam; sandy loam on loam) with different management intensities were evaluated. Disturbed and undisturbed soil samples were collected from the 5-10, 15-20, and 30-35 cm depths. Soil available P and K contents were determined by ammonium lactate extraction, total N by the dry combustion method, SOC content by the Tyurin titrimetric (classical) method, and texture by the pipette method. In undisturbed core samples, soil pore volume distribution and plant available water (PAW) content were determined. A closed chamber method was applied to quantify soil respiration (SR). Results: long-term resource management changed soil quality. In the loam-textured soil, within the 0-10, 10-20, and 30-35 cm soil layers, PAW, SOC, and mesoporosity (MsP) were significantly higher under no-tillage (NT) than under conventional tillage (CT). However, total porosity (TP) under NT was significantly higher only in the 0-10 cm layer. MsP acted as the dominant factor for N, P, and K accumulation in the corresponding layers. P content in all soil layers was higher under NT than under CT; N and K contents were significantly higher than under CT only in the 0-10 cm layer. In the sandy-loam-textured soil, a significant increase in SOC, PAW, MsP, N, P, and K under NT occurred only in the 0-10 cm layer. TP under NT was significantly lower in all layers. PAW acted as a strong dominant factor for N, P, and K accumulation: the higher the PAW, the higher the NPK contents determined. NT did not secure chemical quality in the deeper layers any better than CT.
Long-term application of mineral fertilisers significantly increased SOC and soil NPK contents, primarily in the top-soil. Enlarged fertilisation led to significantly higher leaching of nutrients to deeper soil layers (CT) and increased the hazard of top-soil pollution. Straw returning significantly increased SOC and NPK accumulation in the top-soil. The SR on sandy loam was significantly higher than on loam. Under dry weather conditions, SR on loam was higher under NT than under CT, while on sandy loam SR was higher under CT than under NT. NPK fertilisers promoted significantly higher SR in both the dry and the wet year, but suppressed SR on sandy loam during the usual year. The non-anthropogenised soil had a similar SOC and NPK distribution within the 0-35 cm layer, depending on the genesis of the soil profile horizons.

Keywords: fertilizers, long-term experiments, soil texture, soil tillage, straw

Procedia PDF Downloads 299
1203 How Is a Machine-Translated Literary Text Organized in Coherence? An Analysis Based upon Theme-Rheme Structure

Authors: Jiang Niu, Yue Jiang

Abstract:

With the ultimate goal to automatically generate translated texts with high quality, machine translation has made tremendous improvements. However, its translations of literary works are still plagued with problems in coherence, esp. the translation between distant language pairs. One of the causes of the problems is probably the lack of linguistic knowledge to be incorporated into the training of machine translation systems. In order to enable readers to better understand the problems of machine translation in coherence, to seek out the potential knowledge to be incorporated, and thus to improve the quality of machine translation products, this study applies Theme-Rheme structure to examine how a machine-translated literary text is organized and developed in terms of coherence. Theme-Rheme structure in Systemic Functional Linguistics is a useful tool for analysis of textual coherence. Theme is the departure point of a clause and Rheme is the rest of the clause. In a text, as Themes and Rhemes may be connected with each other in meaning, they form thematic and rhematic progressions throughout the text. Based on this structure, we can look into how a text is organized and developed in terms of coherence. Methodologically, we chose Chinese and English as the language pair to be studied. Specifically, we built a comparable corpus with two modes of English translations, viz. machine translation (MT) and human translation (HT) of one Chinese literary source text. The translated texts were annotated with Themes, Rhemes and their progressions throughout the texts. The annotated texts were analyzed from two respects, the different types of Themes functioning differently in achieving coherence, and the different types of thematic and rhematic progressions functioning differently in constructing texts. 
By analyzing and contrasting the two modes of translation, it is found that, compared with the HT, 1) the MT features "pseudo-coherence", with many ill-connected fragments of information joined by "and"; 2) the MT system produces a static and less interconnected text that reads like a list; these two points, in turn, make the organization and development of the MT less coherent than that of the HT; and 3) in contrast to traditional and previous studies, Rhemes do contribute to textual connection and coherence, though less than Themes do, and are thus worthy of notice in further studies. Hence, the findings suggest that Theme-Rheme structure be applied to measuring and assessing the coherence of machine translation and be incorporated into the training of machine translation systems, and that Rhemes be taken into account when studying the textual coherence of both MT and HT.
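The progression analysis described above can be sketched in code. The sketch below is a minimal, hypothetical illustration (the clause data and the substring-matching heuristic are our assumptions, not the study's actual annotation scheme): it labels each pair of consecutive clauses as a constant-Theme progression (the Theme repeats) or a simple linear progression (the previous Rheme becomes the new Theme).

```python
def classify_progressions(clauses):
    """Label the link between consecutive clauses as 'constant' (Theme
    repeats), 'linear' (previous Rheme feeds the new Theme), or 'other'.
    Substring matching is a crude proxy for semantic linkage."""
    labels = []
    for prev, curr in zip(clauses, clauses[1:]):
        if curr["theme"] == prev["theme"]:
            labels.append("constant")
        elif curr["theme"] in prev["rheme"]:
            labels.append("linear")
        else:
            labels.append("other")
    return labels

# Invented example clauses, annotated with Theme and Rheme
clauses = [
    {"theme": "the old man", "rheme": "lived by the sea"},
    {"theme": "the sea", "rheme": "was calm that night"},
    {"theme": "the sea", "rheme": "hid many fish"},
]
labels = classify_progressions(clauses)  # ['linear', 'constant']
```

Counting the resulting labels over a whole annotated text gives the distribution of progression types that the study compares between MT and HT.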

Keywords: coherence, corpus-based, literary translation, machine translation, Theme-Rheme structure

Procedia PDF Downloads 207
1202 Flash Flood in Gabes City (Tunisia): Hazard Mapping and Vulnerability Assessment

Authors: Habib Abida, Noura Dahri

Abstract:

Flash floods are among the most serious natural hazards and have disastrous environmental and human impacts. They are associated with exceptional rain events characterized by short durations, very high intensities, rapid flows and small spatial extent. Flash floods happen very suddenly and are difficult to forecast. They generally damage agricultural crops, property and infrastructure, and may even result in the loss of human lives. The city of Gabes (south-eastern Tunisia) has been exposed to numerous damaging floods because of its mild topography, clay soil, high urbanization rate and erratic rainfall distribution. The risks associated with this situation are expected to increase further in the future because of climate change, which is deemed responsible for increases in the frequency and severity of this natural hazard. A major flooding event hit the region on June 2nd, 2014, causing human deaths and major material losses; it resulted in the stagnation of storm water in the numerous low-lying zones of the study area, endangering human health and causing disastrous environmental impacts. The characterization of flood risk in the Gabes watershed is therefore considered an important step for flood management. The Analytical Hierarchy Process (AHP) method, coupled with Monte Carlo simulation and a geographic information system, was applied to delineate and characterize flood areas. A spatial database was developed from the geological map, a digital elevation model, land use, and rainfall data in order to evaluate the different factors susceptible to affect flood analysis. The results obtained were validated against remote sensing data for the zones that showed very high flood hazard during the extreme rainfall event of June 2014.
Moreover, a survey was conducted in different areas of the city in order to understand and explore the different causes of this disaster, its extent and its consequences.
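As a sketch of how AHP weighting with a Monte Carlo stability check might be set up (the pairwise judgments below are hypothetical, not the study's actual flood-factor matrix):

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's random index

def ahp_weights(A):
    """Principal-eigenvector weights and consistency ratio (CR) of a
    pairwise comparison matrix; CR < 0.1 indicates acceptable judgments."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)          # consistency index
    return w, (ci / RI[n] if RI[n] else 0.0)

# Hypothetical judgments: slope vs. land use vs. rainfall intensity
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
w, cr = ahp_weights(A)

# Monte Carlo: perturb each judgment log-normally to gauge weight stability
rng = np.random.default_rng(0)
samples = []
for _ in range(1000):
    P = A.copy()
    for i in range(3):
        for j in range(i + 1, 3):
            P[i, j] = A[i, j] * rng.lognormal(0.0, 0.1)
            P[j, i] = 1.0 / P[i, j]            # keep reciprocal symmetry
    samples.append(ahp_weights(P)[0])
lo, hi = np.percentile(samples, [5, 95], axis=0)  # 90% band per weight
```

The resulting weights would then multiply the rasterized GIS factor layers to produce the flood-hazard map; the Monte Carlo band indicates how sensitive the map is to judgment uncertainty.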

Keywords: analytical hierarchy process, flash floods, Gabes, remote sensing, Tunisia

Procedia PDF Downloads 109
1201 Engage, Connect, Empower: Agile Approach in the University Students' Education

Authors: D. Bjelica, T. Slavinski, V. Vukimrovic, D. Pavlovic, D. Bodroza, V. Dabetic

Abstract:

The traditional methods and techniques used in higher education may significantly shape university students' perception of the quality of the teaching process, and students' satisfaction with the university experience may be affected by the educational approaches chosen. Contemporary project management trends recognize the benefits of agile approaches, so modern practice highlights their usage, especially in the IT industry. A key research question concerns the possibility of applying agile methods in youth education. As agile methodology pinpoints iterative-incremental delivery of results, its employment could be remarkably fruitful in education. This paper demonstrates the application of the agile concept in university students' education through the continuous delivery of student solutions. Therefore, based on the fundamental values and principles of the Agile Manifesto, the paper analyzes students' performance and lessons learned in their encounter with the agile environment. The research is based on qualitative and quantitative analysis that includes sprints, i.e., the preparation and realization of student tasks in shorter iterations. Consequently, the performance of student teams is monitored through iterations, as is the process of adaptive planning and realization. Grounded theory methodology has been used in this research, as well as descriptive statistics and the Mann-Whitney and Kruskal-Wallis tests for group comparison. The constructs of the developed model are showcased through qualitative research, then validated through a pilot survey, and eventually tested as a concept in the final survey. The paper highlights the variability of educational curricula based on university students' feedback, which is collected at the end of every sprint and indicates inconsistency in students' satisfaction with the approaches applied in education.
The value delivered by the lecturers is also continuously monitored and prioritized according to students' requests. The minimum viable product, as the early delivery of results, is particularly emphasized in the implementation process. The paper offers both theoretical and practical implications. This research contains lessons that may be applied by educational institutions in curriculum creation processes, or by lecturers in curriculum design and teaching. On the other hand, they can be beneficial for increasing university students' satisfaction with teaching styles, gained knowledge, or even educational content.
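For the group comparisons mentioned above, the Mann-Whitney and Kruskal-Wallis tests might be run as follows (the Likert scores are invented for illustration, and SciPy is assumed as the statistics library):

```python
from scipy import stats

# Hypothetical 1-5 Likert satisfaction scores from three student cohorts
traditional = [3, 2, 3, 4, 2, 3, 3, 2]
agile = [4, 5, 4, 3, 5, 4, 4, 5]
hybrid = [3, 4, 4, 3, 4, 3, 5, 4]

# Mann-Whitney U: non-parametric two-group comparison, suited to ordinal data
u, p_mw = stats.mannwhitneyu(traditional, agile, alternative="two-sided")

# Kruskal-Wallis H: extends the comparison to three or more groups
h, p_kw = stats.kruskal(traditional, agile, hybrid)
```

A small p-value for the two-group test would indicate that satisfaction differs between the traditional and agile cohorts; the Kruskal-Wallis test screens all cohorts at once before any pairwise follow-up.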

Keywords: academic performance, agile, higher education, university students' satisfaction

Procedia PDF Downloads 129
1200 3D Simulation of Orthodontic Tooth Movement in the Presence of Horizontal Bone Loss

Authors: Azin Zargham, Gholamreza Rouhi, Allahyar Geramy

Abstract:

One of the most prevalent types of alveolar bone loss is horizontal bone loss (HBL), in which the bone height around teeth is reduced homogeneously. In the presence of HBL, the force magnitudes used during orthodontic treatment should be altered according to the degree of HBL, so that the desired tooth movement can be obtained without further bone loss. In order to investigate the appropriate orthodontic force system in the presence of HBL, a three-dimensional numerical model capable of simulating orthodontic tooth movement was developed. The main goal of this research was to evaluate the effect of different degrees of HBL on long-term orthodontic tooth movement. Moreover, the effect of different force magnitudes on orthodontic tooth movement in the presence of HBL was studied. Five three-dimensional finite element models of a maxillary lateral incisor with 0 mm, 1.5 mm, 3 mm, 4.5 mm and 6 mm of HBL were constructed. The long-term orthodontic tooth tipping movements were obtained over a 4-week period in an iterative process through external remodeling of the alveolar bone, with strains in the periodontal ligament as the mechanical stimulus for bone remodeling. To obtain long-term orthodontic tooth movement, in each iteration the strains in the periodontal ligament under a 1-N tipping force were first calculated using finite element analysis; then, bone remodeling and the subsequent tooth movement were computed in post-processing software using a custom-written program. Incisal edge, cervical, and apical area displacements in the models with different alveolar bone heights (0, 1.5, 3, 4.5 and 6 mm of bone loss) in response to a 1-N tipping force were calculated. The maximum tooth displacement was 2.65 mm at the top of the crown of the model with 6 mm of bone loss; the minimum was 0.45 mm at the cervical level of the model with normal bone support.
Tooth tipping degrees in response to different tipping force magnitudes were also calculated for models with different degrees of HBL. The degree of tipping movement increased as the force level increased, and this increase was more prominent in the models with smaller degrees of HBL. Using the finite element method and bone remodeling theories, this study indicated that in the presence of HBL, under the same load, long-term orthodontic tooth movement will increase. The simulation also revealed that even though tooth movement increases with increasing force, this increase is only prominent in the models with smaller degrees of HBL; tooth models with greater degrees of HBL are less affected by the magnitude of an orthodontic force. Based on our results, the applied force magnitude must be reduced in proportion to the degree of HBL.
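The iterative remodeling loop can be illustrated with a deliberately simplified sketch. Every coefficient here (the effective root support length, the equilibrium strain, and the remodeling gain) is an invented stand-in for the finite element and post-processing steps; the sketch only shows the feedback structure of strain-driven external remodeling.

```python
def simulate_tipping(force_n, bone_loss_mm, weeks=4, steps_per_week=7):
    """Toy strain-driven remodeling loop: each step, a crude PDL strain
    proxy is computed from the force and the remaining bone support, and
    the deviation from an equilibrium strain drives further tipping.
    All coefficients are illustrative, not taken from the study."""
    root_support = 12.0 - bone_loss_mm       # supported root length (mm)
    tipping_deg = 0.0
    for _ in range(weeks * steps_per_week):
        strain = force_n / root_support**2   # crude PDL strain proxy
        stimulus = strain - 0.005            # deviation from equilibrium
        tipping_deg += max(stimulus, 0.0) * 2.0  # remodeling-driven rotation
    return tipping_deg

baseline = simulate_tipping(1.0, 0.0)   # normal bone support, 1-N force
severe = simulate_tipping(1.0, 6.0)     # 6 mm HBL, same force
```

Even this toy model reproduces the qualitative finding that, under the same load, tipping grows as bone support shrinks; in the actual study the strain field comes from the finite element solution rather than a scalar proxy.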

Keywords: bone remodeling, finite element method, horizontal bone loss, orthodontic tooth movement

Procedia PDF Downloads 342
1199 Ethnic Xenophobia as Symbolic Politics: An Explanation of Anti-Migrant Activity from Brussels to Beirut

Authors: Annamarie Rannou, Horace Bartilow

Abstract:

Global concerns about xenophobic activity are on the rise across developed and developing countries. And yet, social science scholarship has almost exclusively examined xenophobia as a prejudice of advanced western nations. This research argues that the fields of study related to xenophobia must be re-conceptualized within a framework of ethnicity in order to level the playing field for cross-regional inquiry. This study develops a new concept of ethnic xenophobia and integrates existing explanations of anti-migrant expression into theories of ethnic threat. We argue specifically that political elites convert economic, political, and social threats at the national level into ethnic xenophobic activity in order to gain or maintain political advantage among their native selectorate. We expand on Stuart Kaufman’s theory of symbolic politics to underscore the methods of mobilization used against migrants and the power of elite discourse in moments of national crises. An original dataset is used to examine over 35,000 cases of ethnic xenophobic activity targeting refugees. Wordscores software is used to develop a unique measure of anti-migrant elite rhetoric which captures the symbolic discourse of elites in their mobilization of ethnic xenophobic activism. We use a Structural Equation Model (SEM) to test the causal pathways of the theory across seventy-two developed and developing countries from 1990 to 2016. A framework of Most Different Systems Design (MDSD) is also applied to two pairs of developed-developing country cases, including Kenya and the Netherlands and Lebanon and the United States. This study sheds tremendous light on an underrepresented area of comparative research in migration studies. It shows that the causal elements of anti-migrant activity are far more similar than existing research suggests which has major implications for policy makers, practitioners, and academics in fields of migration protection and advocacy. 
It speaks directly to the mobilization of myths surrounding refugees, in particular, and the nationalization of narratives of migration that may be neutralized by the development of deeper associational relationships between natives and migrants.

Keywords: refugees, ethnicity, symbolic politics, elites, migration, comparative politics

Procedia PDF Downloads 145
1198 Assessment of Impact of Urbanization in High Mountain Urban Watersheds

Authors: D. M. Rey, V. Delgado, J. Zambrano Nájera

Abstract:

Increases in urbanization during the twentieth century produced changes in the natural dynamics of basins, resulting in increased runoff volumes, peak flows and flow velocities, which in turn increase flood risk. Higher runoff volumes reduce the hydraulic capacity of sewerage networks and can cause their failure. This in turn generates increasingly recurrent floods, causing mobility problems and general economic detriment in cities. In Latin America, and especially Colombia, this is a major problem: by the late twentieth century more than 70% of the population lived in urban areas, after urban population growth of approximately 790% over the period 1940-1990. Besides, the steep slopes produced by Andean topography and the high precipitation typical of tropical climates increase velocities and volumes even more, bringing cities to a standstill during storms. Thus, it becomes very important to know the hydrological behavior of Andean urban watersheds. This research aims to determine the hydrological impact of urbanization in steeply sloped urban watersheds. To this end, the experimental urban watershed Palogrande-San Luis, located in the city of Manizales, Colombia, is used as the study area. Manizales is a city in central-western Colombia, located in the Colombian Central Mountain Range (part of the Andes) with abrupt topography (average altitude 2,153 m). The climate in Manizales is quite uniform, but due to its high altitude the city receives high precipitation (1,545 mm/year on average) with high humidity (83% on average). The HEC-HMS hydrologic model was applied to the watershed. The inputs to the model were derived from Geographic Information System (GIS) theme layers of the Instituto de Estudios Ambientales (IDEA, Institute of Environmental Studies) of the Universidad Nacional de Colombia, Manizales, and from aerial photography taken for the research, in conjunction with the available literature and look-up tables.
Rainfall data from a network of four rain gauges and historical stream flow data were used to calibrate and validate runoff depth in the hydrologic model. Manual calibration was performed, and the simulation results show that the selected model is able to characterize the runoff response of the watershed to urbanization-driven land use change in high mountain watersheds.
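HEC-HMS loss modelling is commonly driven by the SCS Curve Number method, which captures how urbanization raises runoff for the same storm. A minimal sketch of that computation (the storm depth and curve numbers below are illustrative values, not calibrated parameters from this watershed):

```python
def scs_runoff_depth(p_mm, cn):
    """SCS Curve Number direct-runoff depth (mm) for a storm of depth
    p_mm, as used for loss estimation in models such as HEC-HMS. The
    standard initial abstraction Ia = 0.2 * S is assumed."""
    s = 25400.0 / cn - 254.0      # potential maximum retention (mm)
    ia = 0.2 * s                  # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0                # all rainfall absorbed before runoff starts
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Urbanization raises the curve number, increasing runoff for the same storm
q_pre = scs_runoff_depth(50.0, 70)   # pre-urban vegetated cover
q_urb = scs_runoff_depth(50.0, 90)   # largely impervious urban cover
```

For a 50 mm storm the urbanized curve number yields roughly five times the runoff depth of the vegetated one, which is the mechanism behind the increased peak flows discussed above.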

Keywords: Andean watersheds modelling, high mountain urban hydrology, urban planning, hydrologic modelling

Procedia PDF Downloads 233
1197 Utilization of Standard Paediatric Observation Chart to Evaluate Infants under Six Months Presenting with Non-Specific Complaints

Authors: Michael Zhang, Nicholas Marriage, Valerie Astle, Marie-Louise Ratican, Jonathan Ash, Haddijatou Hughes

Abstract:

Objective: Young infants are often brought to the Emergency Department (ED) with a variety of complaints; some are non-specific and present a diagnostic challenge to the attending clinician. Whilst invasive investigations such as blood tests and lumbar puncture are necessary in some cases to exclude serious infections, some basic clinical tools, in addition to a thorough clinical history, can be useful to assess the risk of serious conditions in these young infants. This study aimed to examine the utilization of one such clinical tool. Methods: This retrospective observational study examined the medical records of infants under 6 months presenting to a mixed urban ED between January 2013 and December 2014. Infants deemed to have non-specific complaints or diagnoses by the emergency clinicians were selected for analysis; those with clear systemic diagnoses were excluded. Among all relevant clinical information and investigation results, utilization of the Standard Paediatric Observation Chart (SPOC) was particularly scrutinized in these medical records. This chart was developed by expert clinicians in the local health department. It categorizes important clinical signs into colour-coded zones as a visual cue for the serious implications of some abnormalities. An infant is regarded as SPOC positive when fulfilling one red zone or two yellow zone criteria, prompting the attending clinician to investigate and treat for potential serious conditions accordingly. Results: Eight hundred and thirty-five infants met the inclusion criteria for this project. Those admitted to the hospital for further management were more likely to be SPOC positive than the discharged infants (odds ratio: 12.26, 95% CI: 8.04 – 18.69). Similarly, the sepsis alert criteria on the SPOC were positive in a higher percentage of patients with serious infections (56.52%) than in those with mild conditions (15.89%) (p < 0.001).
The SPOC sepsis criteria had a sensitivity of 56.5% (95% CI: 47.0% - 65.7%) and a moderate specificity of 84.1% (95% CI: 80.8% - 87.0%) for identifying serious infections. Applied to this infant population, with a 17.4% prevalence of serious infection, the positive predictive value was only 42.8% (95% CI: 36.9% - 49.0%); however, the negative predictive value was high at 90.2% (95% CI: 88.1% - 91.9%). Conclusions: The Standard Paediatric Observation Chart has proved a useful clinical tool in practice to help identify and manage young sick infants in the ED effectively.
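The reported rates follow directly from a 2x2 screening table. The counts below (TP=82, FP=110, FN=63, TN=580) are reconstructed by us to match the reported percentages for an 835-infant cohort and are not taken directly from the paper:

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from a 2x2 table."""
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),   # true positives among the diseased
        "specificity": tn / (tn + fp),   # true negatives among the healthy
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "prevalence": (tp + fn) / total,
    }

# Counts reconstructed to match the abstract's reported rates (our assumption)
m = screening_metrics(tp=82, fp=110, fn=63, tn=580)
```

With these counts the function returns sensitivity ≈ 56.5%, specificity ≈ 84.1%, PPV ≈ 42.8% and NPV ≈ 90.2%, illustrating why a modest PPV can coexist with a high NPV at low disease prevalence.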

Keywords: clinical tool, infants, non-specific complaints, Standard Paediatric Observation Chart

Procedia PDF Downloads 252
1196 Na Doped ZnO UV Filters with Reduced Photocatalytic Activity for Sunscreen Application

Authors: Rafid Mueen, Konstantin Konstantinov, Micheal Lerch, Zhenxiang Cheng

Abstract:

In the past two decades, concern for skin protection from ultraviolet (UV) radiation has attracted considerable attention due to the increased intensity of UV rays that can reach the Earth's surface as a result of the breakdown of the ozone layer. Recently, UVA has also attracted attention, since, in comparison to UVB, it can penetrate deeply into the skin, which can result in significant health concerns. Sunscreen agents, which may be either organic or inorganic, are one of the significant tools to protect the skin from UV irradiation. Developing inorganic UV blockers is essential, as they provide efficient UV protection over a wider spectrum than organic filters. Furthermore, inorganic UV blockers offer good comfort and high safety when applied to human skin. Inorganic materials can absorb, reflect, or scatter ultraviolet radiation, depending on their particle size, unlike organic blockers, which absorb the UV irradiation. Nowadays, most inorganic UV-blocking filters are based on titanium dioxide (TiO2) and zinc oxide (ZnO). ZnO can provide protection in the UVA range. Indeed, ZnO is attractive for sunscreen formulation owing to many advantages: its modest refractive index (2.0), its absorption of a small fraction of solar radiation in the UV range at wavelengths equal to or less than 385 nm, its high probability of recombination of photogenerated carriers (electrons and holes), its large direct band gap, its high exciton binding energy, its non-hazardous nature, and its high chemical and physical stability, which make it transparent in the visible region with UV-protective activity. A significant issue for ZnO use in sunscreens is that it can generate reactive oxygen species (ROS) in the presence of UV light because of its photocatalytic activity. It is therefore essential to obtain a non-photocatalytic material through modification with other metals. Several efforts have been made to deactivate the photocatalytic activity of ZnO by using inorganic surface modifiers.
Doping ZnO with different metals is another way to modify its photocatalytic activity. Recently, successful doping of ZnO with metals such as Ce, La, Co, Mn, Al, Li, Na, K, and Cr by various procedures, such as a simple and facile one-pot water bath, co-precipitation, hydrothermal, solvothermal, combustion, and sol-gel methods, has been reported. Most of these doped materials outperform undoped ZnO in photocatalytic activity under visible light, showing that metal doping is an effective technique for modifying the photocatalytic activity of ZnO. In the current work, by contrast, we successfully reduced the photocatalytic activity of ZnO by doping it with Na, with samples fabricated via sol-gel and hydrothermal methods.

Keywords: photocatalytic, ROS, UVA, ZnO

Procedia PDF Downloads 144
1195 Integrating Virtual Reality and Building Information Model-Based Quantity Takeoffs for Supporting Construction Management

Authors: Chin-Yu Lin, Kun-Chi Wang, Shih-Hsu Wang, Wei-Chih Wang

Abstract:

A construction superintendent needs to know not only the quantities of cost items or materials completed each day, to develop a daily report or calculate the daily progress (earned value), but also the quantities of materials (e.g., reinforcing steel and concrete) to be ordered (or moved onto the jobsite) for performing the in-progress or ready-to-start construction activities (e.g., erection of reinforcing steel and concrete pouring). These daily construction management tasks require great effort to extract accurate quantities in a short time (usually right before getting off work every day). As a result, most superintendents can only provide these quantity data based either on what they see on the site (high inaccuracy) or on the extraction of quantities from two-dimensional (2D) construction drawings (high time consumption). Hence, the current practice of providing the quantities completed each day needs improvement in terms of both accuracy and efficiency. Recently, three-dimensional (3D) building information model (BIM) techniques have been widely applied to support the construction quantity takeoff (QTO) process, and virtual reality (VR) allows a building to be viewed from a first-person viewpoint. Thus, this study proposes an innovative system that integrates VR (using 'Unity') and BIM (using 'Revit') to extract quantities to support the above daily construction management tasks. The use of VR allows a system user to be present in a virtual building and more objectively assess the construction progress from the office. This VR- and BIM-based system is also supported by an integrated database (consisting of the information and data associated with the BIM model, QTO, and costs). Each day, a superintendent can walk through a BIM-based virtual building to quickly identify (via a developed VR shooting function) the building components (or objects) that are in progress or finished on the jobsite.
The superintendent then specifies a percentage (e.g., 20%, 50% or 100%) of completion for each identified building object based on observation of the jobsite. Next, the system generates the quantities completed that day by multiplying the specified percentage by the full quantities of the cost items (or materials) associated with the identified object. A building construction project located in northern Taiwan is used as a case study to test the benefits (i.e., accuracy and efficiency) of the proposed system in quantity extraction for supporting the development of daily reports and the ordering of construction materials.
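The percentage-times-full-quantity step is simple enough to sketch directly; the object names, quantity items and values below are hypothetical stand-ins for the system's BIM-linked QTO database:

```python
# Hypothetical BIM QTO records: full quantities per building object
qto = {
    "column_C12": {"rebar_kg": 350.0, "concrete_m3": 1.8},
    "slab_S03": {"rebar_kg": 940.0, "concrete_m3": 12.5},
}

def daily_completed(qto, progress):
    """Multiply each object's full quantities by the completion
    percentage the superintendent assigned in the VR walkthrough."""
    done = {}
    for obj, pct in progress.items():
        done[obj] = {item: qty * pct / 100.0
                     for item, qty in qto[obj].items()}
    return done

# Column finished, slab half poured
report = daily_completed(qto, {"column_C12": 100, "slab_S03": 50})
```

Summing the per-object results across all identified objects would give the day's earned-value quantities for the daily report, while the remainder indicates material still to be ordered.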

Keywords: building information model, construction management, quantity takeoffs, virtual reality

Procedia PDF Downloads 132
1194 Evaluation of Low Temperature as Treatment Tool for Eradication of Mediterranean Fruit Fly (Ceratitis capitata) in Artificial Diet

Authors: Farhan J. M. Al-Behadili, Vineeta Bilgi, Miyuki Taniguchi, Junxi Li, Wei Xu

Abstract:

The Mediterranean fruit fly (Ceratitis capitata) is one of the most destructive pests of fruits and vegetables. The Medfly originated in Africa, has spread to many countries, and is currently an endemic pest in Western Australia. It has been recorded from over 300 plant species including fruits, vegetables and nuts; its main hosts include blueberries, citrus, stone fruit, pome fruits, peppers, tomatoes, and figs. Global trade in fruits and other farm-fresh products suffers from the damage caused by this pest, prompting the need to develop more effective ways to control it. The available quarantine treatment technologies mainly include chemical treatment (e.g., fumigation) and non-chemical treatments (e.g., cold, heat and irradiation). In recent years, with the loss of several chemicals, it has become even more important to rely on non-chemical postharvest control technologies (i.e., heat, cold and irradiation) to control fruit flies. Cold treatment is one of the most promising directions in postharvest treatment because it is free of chemical residues, mitigates or kills the pest population, maintains the firmness of the fruit, and prolongs storage time. It can also be applied to fruits after packing and 'in transit' during lengthy sea transport for export. However, only limited systematic study of cold treatment of Medfly stages in artificial diets has been reported, and such study is critical to provide a scientific basis for comparison with previous research on plant products and for designing an effective cold treatment suitable for exported plant products. The overall purpose of this study was to evaluate and understand Medfly responses to cold treatment across its life stages. The long-term goal is to optimize current postharvest treatments and develop more environmentally friendly, cost-effective, and efficient treatments for controlling the Medfly.
Cold treatments with different exposure times were evaluated as an eradication treatment for the Mediterranean fruit fly (Ceratitis capitata) reared on a carrot diet. Mortality was the key response variable, and the effects of exposure time on the mean mortality of each Medfly life stage were studied.

Keywords: cold treatment, fruit fly, Ceratitis capitata, carrot diet, temperature effects

Procedia PDF Downloads 224
1193 Empirical Orthogonal Functions Analysis of Hydrophysical Characteristics in the Shira Lake in Southern Siberia

Authors: Olga S. Volodko, Lidiya A. Kompaniets, Ludmila V. Gavrilova

Abstract:

The method of empirical orthogonal functions is a method for analyzing data with a complex spatial-temporal structure. It decomposes the data into a finite number of modes determined by empirically finding the eigenfunctions of the data correlation matrix. The modes have different scales and can be associated with various physical processes. The empirical orthogonal function method has been widely used for the analysis of hydrophysical characteristics, for example, the analysis of sea surface temperatures in the western North Atlantic, ocean surface currents off North Carolina, the study of tropical wave disturbances, etc. In this study the method has been applied to the analysis of temperature and velocity measurements in saline Lake Shira (Southern Siberia, Russia). Shira is a shallow lake with a maximum depth of 25 m. It can be considered a closed water body because it has one small river providing inflow but no outflow. The main factor that causes the motion of fluid is variable wind forcing. In summer the lake is strongly stratified by temperature and salinity. Long-term measurements of temperature and currents were conducted at several points during the summers of 2014-2015. The temperature was measured with an accuracy of 0.1 ºC. The data were analyzed using the real-valued version of the empirical orthogonal function method. The first empirical eigenmode accounts for 70-80% of the energy and can be interpreted as a temperature distribution with a thermocline, a thermal layer where the temperature decreases rapidly from the mixed upper layer of the lake to much colder deep water. The higher-order modes can be interpreted as oscillations induced by internal waves. The current measurements were recorded using 600 kHz and 1200 kHz Acoustic Doppler Current Profilers. These data were analyzed using the complex version of the empirical orthogonal function method.
The first empirical eigenmode accounts for about 40% of the energy and corresponds to the Ekman spiral occurring in the case of a stationary homogeneous fluid. The other modes describe effects associated with the stratification of the fluid. The second and subsequent empirical eigenmodes were associated with dynamical modes; these modes were obtained for a simplified model of an inhomogeneous three-layer fluid at a water site with a flat bottom.
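The real-valued EOF decomposition used for the temperature records can be sketched as follows. The synthetic three-sensor series below stands in for the actual mooring data; the complex version mentioned above applies the same decomposition to the analytic (Hilbert-transformed) signals.

```python
import numpy as np

def eof_analysis(X):
    """Empirical orthogonal functions of a (time x space) data matrix:
    eigenvectors of the spatial covariance matrix, plus the fraction of
    total variance carried by each mode."""
    Xa = X - X.mean(axis=0)                 # remove the time mean
    C = Xa.T @ Xa / (Xa.shape[0] - 1)       # spatial covariance matrix
    vals, vecs = np.linalg.eigh(C)          # eigh returns ascending order
    order = np.argsort(vals)[::-1]          # sort modes by variance, descending
    vals, vecs = vals[order], vecs[:, order]
    pcs = Xa @ vecs                         # principal component time series
    frac = vals / vals.sum()                # variance fraction per mode
    return vecs, pcs, frac

# Synthetic example: three sensors sharing one dominant oscillation
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
signal = np.sin(t)                          # shared thermocline-like oscillation
X = np.column_stack([a * signal for a in (1.0, 0.8, 0.5)])
X += 0.1 * rng.standard_normal(X.shape)     # small measurement noise
vecs, pcs, frac = eof_analysis(X)
```

Because the three sensors share one oscillation, the leading mode dominates the variance, mirroring how the first eigenmode of the lake temperature data captures the thermocline structure.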

Keywords: Ekman spiral, empirical orthogonal functions, data analysis, stratified fluid, thermocline

Procedia PDF Downloads 136
1192 Investigating the Impact of Enterprise Resource Planning System and Supply Chain Operations on Competitive Advantage and Corporate Performance (Case Study: Mamot Company)

Authors: Mohammad Mahdi Mozaffari, Mehdi Ajalli, Delaram Jafargholi

Abstract:

The main purpose of this study is to investigate the impact of enterprise resource planning (ERP) and supply chain management (SCM) systems on the competitive advantage and performance of Mamot Company. The methods for collecting information in this study are library studies and field research. A questionnaire containing 38 questions was used to collect the data needed to determine the relationships between the research variables. The current research is applied in orientation. The statistical population of this study consists of 210 managers and experts who are familiar with the SCM and ERP systems. Simple random sampling was used, with a sample size of 136 people. The reliability of the distributed questionnaire was evaluated using Cronbach's alpha; its value exceeds 0.70, confirming reliability. Face validity was used to determine the validity of the questionnaire, which is confirmed by an impact score greater than 1.5. In the present study, univariate analysis was used for measures of central tendency, dispersion and deviation from symmetry, giving a general picture of the population. Bivariate analysis was then used to test the hypotheses, measuring the correlation coefficients between variables using structural equations in SPSS software. Finally, multivariate analysis with statistical techniques based on partial least squares (PLS) structural equations was used to determine the effects of the independent variables on the dependent variables and the structural relationships between the variables. The results of the tests of the research hypotheses indicate that: 1. Supply chain management practices have a positive impact on the competitive advantage of the Mamot industrial complex. 2.
Supply chain management practices have a positive impact on the performance of the Mamot industrial complex. 3. The enterprise resource planning system has a positive impact on the performance of the Mamot industrial complex. 4. The enterprise resource planning system has a positive impact on Mamot's competitive advantage. 5. Competitive advantage has a positive impact on the performance of the Mamot industrial complex. 6. The enterprise resource planning system has a positive impact on the supply chain management of the Mamot industrial complex. These results indicate that the enterprise resource planning and supply chain management systems affect the competitive advantage and corporate performance of Mamot Company.
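The reliability check mentioned above (Cronbach's alpha exceeding 0.70) can be computed directly from the item-score matrix; the Likert responses below are invented for illustration, not the study's survey data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of respondent totals
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical 1-5 Likert responses: 6 respondents, 4 related items
scores = [[4, 4, 5, 4],
          [2, 2, 2, 3],
          [5, 5, 4, 5],
          [3, 3, 3, 3],
          [1, 2, 1, 1],
          [4, 3, 4, 4]]
alpha = cronbach_alpha(scores)
```

Because the invented items move together across respondents, alpha comes out well above the 0.70 threshold the study reports; weakly correlated items would pull it down.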

Keywords: enterprise resource planning, supply chain management, competitive advantage, Mamot company performance

Procedia PDF Downloads 98
1191 Detection of Aflatoxin B1 Producing Aspergillus flavus Genes from Maize Feed Using Loop-Mediated Isothermal Amplification (LAMP) Technique

Authors: Sontana Mimapan, Phattarawadee Wattanasuntorn, Phanom Saijit

Abstract:

Aflatoxin contamination in maize, one of several agricultural crops grown for livestock feeding, is still a problem throughout the world, mainly under hot and humid weather conditions like those of Thailand. In this study, strains of Aspergillus flavus (A. flavus), the key fungus for aflatoxin production, especially aflatoxin B1 (AFB1), isolated from naturally infected maize were identified and characterized according to colony morphology and PCR using the ITS, beta-tubulin and calmodulin genes. The strains were analysed for the presence of four aflatoxin biosynthesis genes (Ver1, Omt1, Nor1, and aflR) in relation to their capability to produce AFB1. Aflatoxin production was then confirmed using an immunoaffinity column technique. Loop-mediated isothermal amplification (LAMP) was applied as an innovative technique for rapid detection of the target nucleic acid. The reaction conditions were optimized at 65 °C for 60 min, and calcein fluorescent reagent was added before amplification. The LAMP results showed clear differences between positive and negative reactions in end-point analysis under daylight and UV light by the naked eye. In daylight, the samples with AFB1-producing A. flavus genes developed a yellow-to-green color, but those without the genes retained the orange color. When excited with UV light, the positive samples became visible by bright green fluorescence. LAMP reactions remained positive with purified target DNA down to dilutions of 10⁻⁶. The reaction products were then confirmed and visualized with 1% agarose gel electrophoresis. Subsequently, 50 maize samples were collected from dairy farms and tested for the presence of the four aflatoxin biosynthesis genes using the LAMP technique. The results were positive in 18 samples (36%) and negative in 32 samples (64%). All of the samples were rechecked by PCR, and the results agreed with LAMP, indicating 100% specificity.
Additionally, when compared with the immunoaffinity column-based aflatoxin analysis, there was a significant correlation between the LAMP results and the aflatoxin analysis (r = 0.83, P < 0.05), which suggested that positive maize samples were likely to be high-risk feed. In conclusion, the LAMP assay developed in this study provides a simple and rapid approach for detecting AFB1-producing A. flavus genes in maize and appears to be a promising tool for predicting potential aflatoxigenic risk in livestock feeds.
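The agreement and correlation statistics reported above (specificity against PCR, Pearson r against the quantitative aflatoxin assay) can be computed with a few lines of code. The sketch below is illustrative only, not the authors' analysis script, and all data in it are hypothetical.

```python
# Illustrative sketch: agreement between a rapid assay (e.g. LAMP) and a
# reference method (e.g. PCR), plus a Pearson correlation against a
# quantitative measurement. All sample data below are hypothetical.
from statistics import mean

def agreement_stats(test, reference):
    """Sensitivity and specificity of binary `test` calls against `reference`."""
    tp = sum(t and r for t, r in zip(test, reference))
    tn = sum((not t) and (not r) for t, r in zip(test, reference))
    fp = sum(t and (not r) for t, r in zip(test, reference))
    fn = sum((not t) and r for t, r in zip(test, reference))
    return tp / (tp + fn), tn / (tn + fp)

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

lamp = [1, 1, 0, 0, 1, 0]   # hypothetical LAMP calls
pcr = [1, 1, 0, 0, 1, 0]    # hypothetical PCR calls (full agreement)
sens, spec = agreement_stats(lamp, pcr)
print(sens, spec)           # 1.0 1.0 when the two methods agree on every sample
```

With full agreement between the two methods, as reported in the abstract, both sensitivity and specificity evaluate to 1.0 (i.e. 100%).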

Keywords: Aflatoxin B1, Aspergillus flavus genes, maize, loop-mediated isothermal amplification

Procedia PDF Downloads 240
1190 A Study for Area-Level Mosquito Abundance Prediction by Using a Supervised Machine Learning Point-Level Predictor

Authors: Theoktisti Makridou, Konstantinos Tsaprailis, George Arvanitakis, Charalampos Kontoes

Abstract:

In the literature, data-driven approaches for mosquito abundance prediction rely on supervised machine learning models trained with historical in-situ measurements. The drawback of this approach is that once a model is trained on point-level (specific x, y coordinates) measurements, its predictions also refer to the point level. These point-level predictions reduce the applicability of such solutions, since many early warning and mitigation applications need predictions at the area level, such as for a municipality or village. In this study, we apply a data-driven predictive model, which relies on publicly available satellite Earth Observation and geospatial data and is trained with historical point-level in-situ measurements of mosquito abundance. We then propose a methodology to extend a point-level predictive model to a broader area-level prediction. Our methodology relies on random spatial sampling of the area of interest (similar to a Poisson hard-core process), obtaining the EO and geomorphological information for each sample, making a point-wise prediction for each sample, and aggregating the predictions to represent the average mosquito abundance of the area. We quantify the performance of the transformation from point-level to area-level predictions and analyze it in order to understand which parameters have a positive or negative impact on it. The goal of this study is to propose a methodology that predicts the mosquito abundance of a given area by relying on point-level prediction and to provide qualitative insights regarding the expected performance of the area-level prediction. We applied our methodology to historical data (of Culex pipiens) for two areas of interest (the Veneto region of Italy and Central Macedonia in Greece). In both cases, the results were consistent.
The mean mosquito abundance of a given area can be estimated with accuracy similar to that of the point-level predictor, sometimes even better. The density of the samples used to represent an area has a positive effect on the performance, in contrast to the raw number of sampling points, which is uninformative about the performance unless the size of the area is taken into account. Additionally, we found that the distance between the sampling points and the real in-situ measurements used for training did not strongly affect the performance.
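The sampling-and-aggregation step described above can be sketched in a few lines. This is a minimal illustration under assumed names: `point_predict` stands in for any trained point-level predictor and `features_at` for the EO/geomorphological feature lookup, neither of which is specified in the abstract.

```python
# A minimal sketch of area-level aggregation from a point-level predictor:
# sample points in the area with a minimum-distance (hard-core-like)
# constraint, predict at each point, and average the predictions.
import math
import random

def hardcore_samples(bbox, n, min_dist, seed=0):
    """Random points in bbox = (xmin, ymin, xmax, ymax), rejecting any point
    closer than min_dist to an already accepted one (hard-core-like sampling)."""
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:
        p = (rng.uniform(bbox[0], bbox[2]), rng.uniform(bbox[1], bbox[3]))
        if all(math.dist(p, q) >= min_dist for q in pts):
            pts.append(p)
    return pts

def area_level_prediction(bbox, n, min_dist, features_at, point_predict):
    """Average the point-level predictions over the sampled locations."""
    pts = hardcore_samples(bbox, n, min_dist)
    preds = [point_predict(features_at(x, y)) for x, y in pts]
    return sum(preds) / len(preds)
```

In practice the bounding box would be replaced by the actual area polygon (rejecting samples that fall outside it), and the abstract's finding that sample *density* drives performance corresponds to choosing `n` in proportion to the area's size.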

Keywords: mosquito abundance, supervised machine learning, Culex pipiens, spatial sampling, West Nile virus, Earth observation data

Procedia PDF Downloads 147
1189 Transdermal Delivery of Sodium Diclofenac from Palm Kernel Oil Esters Nanoemulsions

Authors: Malahat Rezaee, Mahiran Basri, Abu Bakar Salleh, Raja Noor Zaliha Raja Abdul Rahman

Abstract:

Sodium diclofenac is one of the most commonly used nonsteroidal anti-inflammatory drugs (NSAIDs). It is especially effective in controlling severe inflammation and pain, musculoskeletal disorders, arthritis, and dysmenorrhea. Formulation as nanoemulsions is one of the nanoscience approaches that has been progressively considered in pharmaceutical science for transdermal drug delivery. Nanoemulsions are a type of emulsion with particle sizes ranging from 20 nm to 200 nm. An emulsion is formed by the dispersion of one liquid, usually the oil phase, in another immiscible liquid, the water phase, and is stabilized using a surfactant. Palm kernel oil esters (PKOEs), in comparison to other oils, contain higher amounts of shorter-chain esters, which makes them suitable for use in micro- and nanoemulsion systems as a carrier for actives, with excellent wetting behavior and without an oily feeling. This research aimed to study the effect of terpene type and concentration on sodium diclofenac permeation from palm kernel oil ester nanoemulsions and on the physicochemical properties of the nanoemulsion systems. The effects of various terpenes (geraniol, menthone, menthol, cineol and nerolidol) at concentrations of 0.5, 1.0, 2.0, and 4.0% on the permeation of sodium diclofenac were evaluated using Franz diffusion cells and rat skin as the permeation membrane. The results of this part demonstrated that all terpenes had a promoting effect on sodium diclofenac penetration. However, menthol and menthone at all concentrations showed significant effects (p < 0.05) on drug permeation. The most outstanding terpene was menthol, which had the most significant effect on the skin permeability of sodium diclofenac. The effect of terpenes on the physicochemical properties of the nanoemulsion systems was investigated with respect to particle size, zeta potential, pH, viscosity and electrical conductivity.
The results showed that all terpenes had a significant effect on particle size and no significant effect on the zeta potential of the nanoemulsion systems. The effect of terpenes on pH was significant, except for menthone at concentrations of 0.5 and 1.0%, and cineol and nerolidol at a concentration of 2.0%. Terpenes also had a significant effect on the viscosity of the nanoemulsions, with the exception of menthone and cineol at a concentration of 0.5%. Conductivity measurements showed that all terpenes at all concentrations, except cineol at a concentration of 0.5%, had a significant effect on electrical conductivity.

Keywords: nanoemulsions, palm kernel oil esters, sodium diclofenac, terpenes, skin permeation

Procedia PDF Downloads 421
1188 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that does not correspond to an allele of a potential contributor and is considered an artefact presumed to arise from miscopying or slippage during the PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, which is calculated relative to the corresponding parent allele height. The analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts such as stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and significantly enhance the use of continuous peak height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology to distinguish between stutters and real alleles is essential for the accuracy of the interpretation. Sensibly, any such method has to be able to model stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in the clustering and classification of data and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is represented by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult, even though the calculations are relatively simple.
Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, which is an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of choosing the number of components. The Chinese restaurant process (CRP), the stick-breaking process and the Pólya urn scheme are frequently used as Dirichlet priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling the stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
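The Chinese restaurant process mentioned above is easy to simulate: each new item joins an existing cluster ("table") with probability proportional to its size, or opens a new one with probability proportional to the concentration parameter alpha. The sketch below shows the bare sampling scheme only; in the paper's setting each table would additionally carry its own linear regression parameters for the stutter ratio.

```python
# A minimal sketch of sampling a partition from the Chinese restaurant
# process (CRP): item i joins table k with probability proportional to the
# number of items already at table k, or a new table with probability
# proportional to alpha.
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n items from a CRP with concentration alpha.
    Returns a list of table indices, one per item."""
    rng = random.Random(seed)
    tables = []        # tables[k] = number of items already at table k
    assignment = []
    for _ in range(n):
        weights = tables + [alpha]   # existing table sizes, then alpha
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(tables):
            tables.append(1)         # open a new table
        else:
            tables[k] += 1
        assignment.append(k)
    return assignment
```

Because the first item always opens table 0 and new tables are numbered consecutively, the returned labels are contiguous; the expected number of tables grows roughly like alpha · log(n), which is why no fixed number of components needs to be specified.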

Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter

Procedia PDF Downloads 330
1187 Localization of Radioactive Sources with a Mobile Radiation Detection System Using Profit Functions

Authors: Luís Miguel Cabeça Marques, Alberto Manuel Martinho Vale, José Pedro Miragaia Trancoso Vaz, Ana Sofia Baptista Fernandes, Rui Alexandre de Barros Coito, Tiago Miguel Prates da Costa

Abstract:

The detection and localization of hidden radioactive sources are of significant importance in countering the illicit traffic of Special Nuclear Materials (SNM) and other radioactive sources and materials. Radiation portal monitors are commonly used at airports, seaports, and international land borders for inspecting cargo and vehicles. However, such equipment can be expensive and is not available at all checkpoints. Consequently, the localization of SNM and other radioactive sources often relies on handheld equipment, which can be time-consuming. The current study presents the advantages of real-time analysis of gamma-ray count rate data from a mobile radiation detection system, based on simulated data and field tests. The incorporation of profit functions and decision criteria to optimize the detection system's path significantly enhances the radiation field information and reduces the survey time during cargo inspection. For source position estimation, a maximum likelihood estimation algorithm is employed, and confidence intervals are derived using the Fisher information. The study also explores the impact of uncertainties, baselines, and thresholds on the performance of the profit function. The proposed detection system, utilizing a plastic scintillator with silicon photomultiplier sensors, offers several benefits, including cost-effectiveness, high geometric efficiency, compactness, and lightweight design. This versatility allows for seamless integration into any mobile platform, be it air, land, maritime, or hybrid, and it can also serve as a handheld device. Furthermore, the integration of the detection system into drones, particularly multirotors, together with its affordability, enables the automation of source search and a substantial reduction in survey time, particularly when deploying a fleet of drones.
While the primary focus is on inspecting maritime container cargo, the methodologies explored in this research can be applied to the inspection of other infrastructures, such as nuclear facilities or vehicles.
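The maximum likelihood estimation step can be illustrated with a toy model. This sketch is an assumption-laden stand-in for the authors' implementation: it models counts at each detector position as Poisson with mean background plus an inverse-square source term, and locates the source by a coarse grid search over candidate positions (the abstract's Fisher-information confidence intervals and profit functions are not reproduced here).

```python
# Illustrative sketch of maximum likelihood source localization from
# gamma-ray counts: counts at detector position d are modelled as Poisson
# with mean b + s / |d - x|^2, and the source position x is found by
# maximizing the Poisson log-likelihood over a grid of candidates.
import math

def log_likelihood(src, detectors, counts, strength, background):
    """Poisson log-likelihood (up to an additive constant) of the counts."""
    ll = 0.0
    for (dx, dy), n in zip(detectors, counts):
        r2 = (dx - src[0]) ** 2 + (dy - src[1]) ** 2 + 1e-6  # avoid r2 = 0
        mu = background + strength / r2
        ll += n * math.log(mu) - mu
    return ll

def mle_source(detectors, counts, strength, background, grid=50, extent=10.0):
    """Grid search for the source position maximizing the log-likelihood."""
    best, best_ll = None, -math.inf
    for i in range(grid):
        for j in range(grid):
            src = (extent * i / (grid - 1), extent * j / (grid - 1))
            ll = log_likelihood(src, detectors, counts, strength, background)
            if ll > best_ll:
                best, best_ll = src, ll
    return best
```

In a real system the source strength and background would themselves be estimated (or profiled out), and the curvature of the log-likelihood at the maximum gives the Fisher information from which confidence intervals are derived.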

Keywords: plastic scintillators, profit functions, path planning, gamma-ray detection, source localization, mobile radiation detection system, security scenario

Procedia PDF Downloads 116
1186 The Views of German Preparatory Language Programme Students about German Speaking Activity

Authors: Eda Üstünel, Seval Karacabey

Abstract:

The students enrolled in the German Preparatory Language Programme at the School of Foreign Languages, Muğla Sıtkı Koçman University, Turkey, learn German as a foreign language for two semesters in an academic year. Although the language programme is skills-based, the students lack German speaking skills due to their fear of making language mistakes while speaking German. This incompetency in German speaking skills persists in their four-year departmental study at the Faculty of Education. In order to address this problem, we designed German speaking activities as extra-curricular activities. With the help of these activities, we aim to lead Turkish students of German to speak in the target language, to improve their speaking skills in the target language, and to create a stress-free atmosphere and a meaningful learning environment for communicating in the target language. To achieve these aims, an ERASMUS+ exchange staff member (a German trainee teacher of German as a foreign language) from Schwabisch Gmünd University, Germany, conducted out-of-class German speaking activities once a week for three weeks in total. Each speaking activity lasted one and a half hours per week. Seven volunteer students of the German preparatory language programme attended the speaking activity for the three weeks. The activity took place at a cafe on the university campus, which is why we call it an out-of-class activity. The content of the speaking activity was not related to the topics studied in the units of the coursebook, which is why we call this activity extra-curricular. For data collection, three tools were used. A questionnaire, an adapted version of Sabo's questionnaire, was administered to the seven volunteers. An interview session was then held with each student on an individual basis. The interview questions were developed so as to ask students to expand on the answers given in the questionnaires.
The German trainee teacher wrote field notes, in which she described the activity in light of her thoughts about what went well and which areas needed improvement. The results of the questionnaires show that six out of seven students noted that such an activity should be conducted by a native speaker of German. Four out of seven students emphasized that they liked the way the activities were designed in a learner-centred fashion. All of the students pointed out that they felt motivated to talk to the trainee teacher in German. Six out of seven students noted that the opportunity to communicate in German with the teacher and their peers enabled them to improve their speaking skills, their use of grammatical rules and their use of vocabulary.

Keywords: learning a foreign language, speaking skills, teaching German as a foreign language, Turkish learners of German

Procedia PDF Downloads 321
1185 Predicting the Effect of Vibro Stone Column Installation on Performance of Reinforced Foundations

Authors: K. Al Ammari, B. G. Clarke

Abstract:

Soil improvement using vibro stone column techniques consists of two main parts: (1) the installed load-bearing columns of well-compacted, coarse-grained material and (2) the improvement of the surrounding soil due to vibro compaction. Extensive research work has been carried out over the last 20 years to understand the improvement in composite foundation performance due to the second part mentioned above. Nevertheless, few of these studies have tried to quantify some of the key design parameters, namely the changes in the stiffness and stress state of the treated soil, or have considered these parameters in the design and calculation process. Consequently, empirical and conservative design methods are still being used by ground improvement companies, with a significant variety of results in engineering practice. A two-dimensional finite element study was performed using PLAXIS 2D AE to develop an axisymmetric model of a single stone column reinforced foundation and to quantify the effect of the vibro installation of this column in soft saturated clay. Settlement and bearing performance were studied as an essential part of the design and calculation of the stone column foundation. Particular attention was paid to the large deformation in the soft clay around the installed column caused by the lateral expansion, so the updated-mesh advanced option was used in the analysis. In this analysis, different degrees of stone column lateral expansion were simulated and numerically analyzed, and the changes in the stress state, stiffness, settlement performance and bearing capacity were then quantified. It was found that the application of radial expansion produces a horizontal stress in the soft clay mass that gradually decreases as the distance from the stone column axis increases. The excess pore pressure due to the undrained conditions starts to dissipate immediately after the column installation is finished, allowing the horizontal stress to relax.
Changes in the coefficient of lateral earth pressure K*, which is very important in representing the stress state, and the new stiffness distribution in the reinforced clay mass were estimated. More encouragingly, the results showed that increasing the expansion during column installation has a noticeable effect on improving the bearing capacity and reducing the settlement of the reinforced ground. Therefore, a design method should include this significant effect of the applied lateral displacement during stone column installation in the simulation and numerical analysis.
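The decay of horizontal stress with distance from the column axis described above can be approximated, for orientation only, by the classical undrained cylindrical cavity expansion solution in clay. The sketch below is a back-of-envelope approximation under that classical solution, not the paper's PLAXIS analysis: within the plastic zone of radius R_p the radial stress increase decays logarithmically, and outside it as 1/r².

```python
# Back-of-envelope sketch (classical undrained cylindrical cavity expansion,
# NOT the paper's PLAXIS model): radial total stress increase at distance r
# from a cavity of radius a expanded in clay with undrained shear strength
# s_u and shear modulus G.
import math

def radial_stress_increase(r, a, s_u, G):
    """Increase in radial total stress at radius r >= a from the column axis."""
    R_p = a * math.sqrt(G / s_u)                 # plastic zone radius
    if r <= R_p:
        # plastic zone: logarithmic decay, s_u*(1 + ln(G/s_u)) at the wall
        return s_u * (1.0 + 2.0 * math.log(R_p / r))
    # elastic zone: inverse-square decay, continuous (= s_u) at r = R_p
    return s_u * (R_p / r) ** 2
```

For example, with a 0.5 m column radius, s_u = 25 kPa and G = 2.5 MPa, the plastic zone extends to 5 m, and the stress increase falls from about 140 kPa at the column face to 25 kPa at the plastic boundary, consistent with the gradual decay away from the column axis that the finite element analysis reports.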

Keywords: bearing capacity, design, installation, numerical analysis, settlement, stone column

Procedia PDF Downloads 374