Search results for: features engineering methods for forecasting
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20946

19206 Using Audio-Visual Aids and Computer-Assisted Language Instruction to Overcome Learning Difficulties of Vocabulary in Students with Special Needs

Authors: Sadeq Al Yaari, Ayman Al Yaari, Adham Al Yaari, Montaha Al Yaari, Aayah Al Yaari, Sajedah Al Yaar

Abstract:

Objectives: To assess the effect of using audio-visual aids and computer-assisted/aided language instruction (CALI) on the performance of students with special needs studying a vocabulary course. Methods: The performance of forty students with special needs (males and females, aged 8-18) who used audio-visual aids and CALI in their vocabulary course at the al-Malādh school for students with special needs was compared to that of a control group of the same number and age range. Subjects in the experimental group were taught using audio-visual aids and CALI, while those in the control group were taught using ordinary educational aids only, although both groups shared almost the same conditions (class environment, speech-language therapist (SLT), etc.). A pre- and post-test was given at the beginning and end of the semester, followed by qualitative and quantitative analysis. Results & conclusions: The pre- and post-test results indicated that the experimental group improved more than the control group (from 34.27% to 73.82% vs. from 33.57% to 34.92%, respectively). Males scored higher than females (1,515 vs. 1,438 points). These findings suggest that audio-visual aids and CALI are very important in classes for students with special needs, especially in vocabulary-building courses, because of their usefulness in improving the students' performance.

Keywords: language components, vocabulary, audio-visual aids, CALI, special needs, students, SLTs

Procedia PDF Downloads 51
19205 Hard Disk Failure Predictions in Supercomputing System Based on CNN-LSTM and Oversampling Technique

Authors: Yingkun Huang, Li Guo, Zekang Lan, Kai Tian

Abstract:

Hard disk drive (HDD) failures in an exascale supercomputing system may lead to service interruption, invalidate previous calculations, and cause permanent data loss. Therefore, initiating corrective actions before hard drive failures materialize is critical to the continued operation of jobs. In this paper, a highly accurate analysis model based on CNN-LSTM and an oversampling technique is proposed, which can correctly predict the necessity of a disk replacement even ten days in advance. Learning-based methods generally perform poorly on training datasets with long-tail distributions, and fault prediction is a classic instance because of the scarcity of failure data. To overcome this problem, a new oversampling technique was employed to augment the data, and an improved CNN-LSTM with a shortcut was built to learn more effective features. The shortcut transmits the output of an earlier CNN layer and, after weighted fusion with the output of the next layer, uses it as the input of the LSTM model. Finally, a detailed empirical comparison of six prediction methods is presented and discussed on a public evaluation dataset. The experiments indicate that the proposed method predicts disk failure with 0.91 precision, 0.91 recall, 0.91 F-measure, and 0.90 MCC for a 10-day prediction horizon. Thus, the proposed algorithm is an efficient algorithm for predicting HDD failure in supercomputing.
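To make the architecture concrete, the sketch below is a minimal PyTorch rendering of a CNN-LSTM with a weighted-fusion shortcut as described in the abstract; the layer sizes, the learned scalar fusion weight, and the SMART-attribute input are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class CNNLSTMShortcut(nn.Module):
    """CNN-LSTM where the output of the first conv layer is fused, with a
    learned weight, with the output of the second before feeding the LSTM."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.conv1 = nn.Conv1d(n_features, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(32, 32, kernel_size=3, padding=1)
        self.alpha = nn.Parameter(torch.tensor(0.5))   # fusion weight (assumed form)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)               # healthy vs. soon-to-fail

    def forward(self, x):             # x: (batch, days, features)
        x = x.transpose(1, 2)         # Conv1d expects (batch, channels, time)
        h1 = torch.relu(self.conv1(x))
        h2 = torch.relu(self.conv2(h1))
        fused = self.alpha * h1 + (1 - self.alpha) * h2   # shortcut fusion
        out, _ = self.lstm(fused.transpose(1, 2))
        return self.head(out[:, -1])  # classify from the last time step

model = CNNLSTMShortcut(n_features=12)    # e.g. 12 SMART attributes per day
logits = model(torch.randn(8, 10, 12))    # 8 drives, 10 days of history
```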

Keywords: HDD replacement, failure, CNN-LSTM, oversampling, prediction

Procedia PDF Downloads 80
19204 The Grammar of the Content Plane as a Style Marker in Forensic Authorship Attribution

Authors: Dayane de Almeida

Abstract:

This work presents a study demonstrating the usefulness of categories of analysis from Discourse Semiotics (also known as Greimassian Semiotics) in authorship cases in forensic contexts. It is necessary to know whether the categories examined in semiotic analysis (the ‘grammar’ of the content plane) can distinguish authors. Thus, a study was performed with 4 sets of texts from a corpus of ‘not on demand’ written samples (texts differing in formality degree, purpose, addressees, themes, etc.). Each author contributed 20 texts, separated into 2 groups of 10 (Author1A, Author1B, and so on). The hypothesis was that texts from a single author are semiotically more similar to each other than texts from different authors. The assumptions and issues that led to this idea are as follows. (i) The features analyzed in authorship studies mostly relate to the expression plane: they are manifested on the ‘surface’ of texts. If language is both expression and content, content would also have to be considered for more accurate results; style is present in both planes. (ii) Semiotics postulates that the content plane is structured in a ‘grammar’ that underlies expression and presents different levels of abstraction; this ‘grammar’ would be a style marker. (iii) Sociolinguistics demonstrates intra-speaker variation: an individual employs different linguistic uses in different situations. How, then, can one determine whether someone is the author of several texts, distinct in nature (as is the case in most forensic sets), when intra-speaker variation is known to depend on so many factors? (iv) The idea is that the more abstract the level in the content plane, the lower the intra-speaker variation, because there will be a greater chance of the author choosing the same thing. If two authors recurrently choose the same options, differently from one another, each one’s option has discriminatory power. (v) Size is another issue for various attribution methods: since most texts in real forensic settings are short, methods relying only on the expression plane tend to fail, whereas the analysis of the content plane as proposed by Greimassian semiotics would be less size-dependent. The semiotic analysis was performed using the software Corpus Tool, generating tags to allow the counting of data. Then, similarities and differences were quantitatively measured through the application of the Jaccard coefficient (a statistical measure that compares the similarities and differences between samples). The results confirmed the hypothesis; hence, the grammatical categories of the content plane may successfully be used in questioned-authorship scenarios.
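The similarity measure named above is simple to state in code; the following sketch computes the Jaccard coefficient over sets of content-plane tags, with the tag names invented purely for illustration.

```python
def jaccard(a: set, b: set) -> float:
    """|A intersect B| / |A union B|: 1.0 for identical sets, 0.0 for disjoint."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical tags produced for two texts by the content-plane annotation.
text1_tags = {"thematic:identity", "figurative:nature", "aspect:durative"}
text2_tags = {"thematic:identity", "figurative:city", "aspect:durative"}
print(jaccard(text1_tags, text2_tags))  # 0.5
```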

Keywords: authorship attribution, content plane, forensic linguistics, greimassian semiotics, intraspeaker variation, style

Procedia PDF Downloads 242
19203 Characterization of Surface Microstructures on Bio-Based PLA Fabricated with Nano-Imprint Lithography

Authors: D. Bikiaris, M. Nerantzaki, I. Koliakou, A. Francone, N. Kehagias

Abstract:

In the present study, the formation of structures in poly(lactic acid) (PLA) has been investigated with respect to producing areas of regular, superficial features with dimensions comparable to those of cells or biological macromolecules. Nanoimprint lithography, a method of pattern replication in polymers, has been used for the production of features ranging from tens of micrometers, covering areas up to 1 cm², down to hundreds of nanometers. Both micro- and nano-structures were faithfully replicated. Potentially, PLA has wide uses within biomedical fields, from implantable medical devices, including screws and pins, to membrane applications, such as wound covers, and even as an injectable polymer for, for example, lipoatrophy. The possibility of fabricating structured PLA surfaces, with structures of the dimensions associated with cells or biological macro- molecules, is of interest in fields such as cellular engineering. Imprint-based technologies have demonstrated the ability to selectively imprint polymer films over large areas resulting in 3D imprints over flat, curved or pre-patterned surfaces. Here, we compare nano-patterned with nano-patterned by nanoimprint lithography (NIL) PLA film. A silicon nanostructured stamp (provided by Nanotypos company) having positive and negative protrusions was used to pattern PLA films by means of thermal NIL. The polymer film was heated from 40°C to 60°C above its Tg and embossed with a pressure of 60 bars for 3 min. The stamp and substrate were demolded at room temperature. Scanning electron microscope (SEM) images showed good replication fidelity of the replicated Si stamp. Contact-angle measurements suggested that positive microstructuring of the polymer (where features protrude from the polymer surface) produced a more hydrophilic surface than negative micro-structuring. The ability to structure the surface of the poly(lactic acid), allied to the polymer’s post-processing transparency and proven biocompatibility. Films produced in this were also shown to enhance the aligned attachment behavior and proliferation of Wharton’s Jelly Mesenchymal Stem cells, leading to the observed growth contact guidance. The bacterial attachment patterns of some bacteria, highlighted that the nano-patterned PLA structure can reduce the propensity for the bacteria to attach to the surface, with a greater bactericidal being demonstrated activity against the Staphylococcus aureus cells. These biocompatible, micro- and nanopatterned PLA surfaces could be useful for polymer– cell interaction experiments at dimensions at, or below, that of individual cells. Indeed, post-fabrication modification of the microstructured PLA surface, with materials such as collagen (which can further reduce the hydrophobicity of the surface), will extend the range of applications, possibly through the use of PLA’s inherent biodegradability. Further study is being undertaken to examine whether these structures promote cell growth on the polymer surface.

Keywords: poly(lactic acid), nano-imprint lithography, anti-bacterial properties, PLA

Procedia PDF Downloads 330
19202 A Multi-Release Software Reliability Growth Model Incorporating Imperfect Debugging and Change-Point under a Simulated Testing Environment and Software Release Time

Authors: Sujit Kumar Pradhan, Anil Kumar, Vijay Kumar

Abstract:

The testing process during software development is a crucial step, as it makes the software more efficient and dependable. To estimate software reliability through the mean value function, many software reliability growth models (SRGMs) have been developed under the assumption that the operating and testing environments are the same. In practice, this is not true, because when the software works in its natural field environment, its reliability differs. This article discusses an SRGM comprising a change-point and imperfect debugging in a simulated testing environment, and later extends it in a multi-release direction. Initially, software is released to the market with few features; according to market demand, the software company upgrades the current version by adding new features as time passes. We therefore propose a generalized multi-release SRGM in which the change-point and imperfect-debugging concepts are addressed in a simulated testing environment. The failure-increasing-rate concept is adopted to determine the change point for each software release. Based on nine goodness-of-fit criteria, the proposed model is validated on two real datasets, and the results demonstrate that it fits the datasets better. We also discuss the optimal release time of the software through a cost model, assuming that the testing and debugging costs are time-dependent.
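For readers unfamiliar with the framework, the abstract does not reproduce the authors' equations; as an illustrative baseline only, a standard single-change-point NHPP mean value function (the Goel-Okumoto form) reads:

```latex
% a = expected total fault content, b_1, b_2 = fault-detection rates
% before and after the change point \tau (illustrative baseline only).
m(t) =
\begin{cases}
  a \left( 1 - e^{-b_1 t} \right), & 0 \le t \le \tau, \\
  a \left( 1 - e^{-b_1 \tau - b_2 (t - \tau)} \right), & t > \tau.
\end{cases}
```

Imperfect debugging is typically introduced by letting the fault content grow over time, replacing the constant a with a function a(t); the authors' generalized multi-release model builds further structure (a change point determined for each release) on top of such a form.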

Keywords: software reliability growth models, non-homogeneous Poisson process, multi-release software, mean value function, change-point, environmental factors

Procedia PDF Downloads 74
19201 Daunting or Desirable? Examining the Perception of Mindfulness and Current Mindful Practices of Predominantly Christian University Students

Authors: Elizabeth Valenti

Abstract:

Objective: To date, there is an absence of literature examining perceptions of mindfulness and mindful practices among college students, particularly Christian students. The purpose of this mixed-methods, exploratory study was to gain a better understanding of students’ perception of mindfulness and to assess current mindful practices. Methods: Data were examined from freshman undergraduate college students (N=107) enrolled in an introductory psychology course at a private, non-profit Christian university. Students completed a researcher-developed questionnaire containing both Likert-scale and open-ended questions to assess knowledge about and perceptions of mindfulness, as well as current mindful practices. Results: The thematic analysis revealed that approximately half of the students had a limited understanding of mindfulness, with several reporting disadvantages. Most students listed prayer as a consistent practice, with a much smaller percentage consistently engaging in other mindful activities. Discussion: Implications for mindfulness education and the promotion of evidence-based methods, particularly in Christian communities, are discussed.

Keywords: mindfulness, mindful practices, perception, Christian, university students, mental health

Procedia PDF Downloads 128
19200 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a strong ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA and are stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming, and rely on a large number of parameters that introduce variability and affect the estimation of the microbiome elements. Training deep neural networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which creates a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach, metagenome2vec, that classifies patients into disease groups directly from raw metagenomic reads. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple-instance-learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied directly on data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
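As a sketch of steps (i)-(ii), reads can be cut into overlapping k-mers ("words") and embedded with word2vec; the choices below (k=6, 64-dimensional vectors, averaging k-mer vectors into a read embedding) are illustrative assumptions, not the parameters used by metagenome2vec.

```python
import numpy as np
from gensim.models import Word2Vec

def kmers(read: str, k: int = 6) -> list[str]:
    """Cut a read into its overlapping k-mers."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

reads = ["ATGCGTACGTTAGC", "GGCATCGTACGATT"]    # toy stand-ins for fastq reads
corpus = [kmers(r) for r in reads]              # each read plays the role of a sentence
model = Word2Vec(corpus, vector_size=64, window=5, min_count=1, sg=1)

# One simple read embedding: the average of the read's k-mer vectors.
read_vec = np.mean([model.wv[km] for km in corpus[0]], axis=0)
print(read_vec.shape)  # (64,)
```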

Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine

Procedia PDF Downloads 125
19199 A Virtual Set-Up to Evaluate Augmented Reality Effect on Simulated Driving

Authors: Alicia Yanadira Nava Fuentes, Ilse Cervantes Camacho, Amadeo José Argüelles Cruz, Ana María Balboa Verduzco

Abstract:

Augmented reality promises to be part of future driving: its immersive technology can show directions and maps, identifying important places with graphic elements when the driver requires the information. On the other hand, driving is considered a multitasking activity and, for some people, a complex activity in which situations commonly occur that require the driver's immediate attention to make decisions that help avoid accidents. The main aim of the project is therefore the instrumentation of a platform with biometric sensors that allows evaluating driving performance under the influence of augmented reality devices and detecting the driver's level of attention, since it is important to know the effect these devices produce. In this study, the physiological sensors EPOC X (EEG), ECG06 PRO, and EMG Myoware are integrated into the driving test platform together with a Logitech G29 steering wheel and the simulation software City Car Driving, in which the level of traffic and the number of pedestrians in the simulation can be controlled; the driver interacts in real mode, and an MSP430 microcontroller acquires the data for storage. The sensors produce continuous analog signals that need conditioning: a signal amplifier is incorporated because the acquired signals have a sensitivity of 1.25 mm/mV, and filtering eliminates unwanted frequency bands so that the signals are interpretable and noise-free before being converted from analog to digital for analysis of the drivers' physiological signals; these values are stored in a database. Based on this compilation, we work on the extraction of signal features and implement K-NN (k-nearest neighbor) and decision-tree classification methods (both supervised learning) that enable the study of the data, the identification of patterns, and the determination, by classification, of the different effects of augmented reality on drivers. The expected results of this project are a test platform instrumented with biometric sensors for data acquisition during driving and a database with the variables required to determine the effect caused by augmented reality on people in simulated driving.
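A minimal sketch of the classification step, assuming features have already been extracted from the physiological signals, is shown below with scikit-learn's K-NN; the feature set and the two-class labelling (driving with vs. without augmented reality) are invented for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))      # e.g. EEG band power, heart rate, EMG RMS, blink rate
y = rng.integers(0, 2, size=200)   # 0 = without AR device, 1 = with AR device

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"held-out accuracy: {knn.score(X_te, y_te):.2f}")
```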

Keywords: augmented reality, driving, physiological signals, test platform

Procedia PDF Downloads 142
19198 An Approach for Association Rules Ranking

Authors: Rihab Idoudi, Karim Saheb Ettabaa, Basel Solaiman, Kamel Hamrouni

Abstract:

Medical association rule induction is used to discover useful correlations between pertinent concepts in large medical databases. Nevertheless, AR algorithms produce a huge number of rules and do not guarantee the usefulness and interestingness of the generated knowledge. To overcome this drawback, we propose an ontology-based interestingness measure for ranking ARs. According to domain experts, the goal of using ARs is to discover implicit relationships between items of different categories, such as ‘clinical features and disorders’ or ‘clinical features and radiological observations’. That is to say, itemsets composed of ‘similar’ items are uninteresting. Therefore, the dissimilarity between a rule’s items can be used to judge the interestingness of association rules: the more different the items, the more interesting the rule. In this paper, we design a distinct approach for ranking semantically interesting association rules involving the use of an ontology knowledge mining approach. The basic idea is to organize the ontology’s concepts into a hierarchical structure of conceptual clusters of targeted subjects, where each cluster encapsulates ‘similar’ concepts suggesting a specific category of the domain knowledge. The interestingness of an association rule is then defined as the dissimilarity between the corresponding clusters: the farther apart the clusters of the items in the AR, the more interesting the rule. We apply the method in our domain of interest, the mammographic domain, using an existing mammographic ontology called Mammo, with the goal of deriving interesting rules from past experiences and discovering implicit relationships between the concepts modeling the domain.
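The core idea, that a rule is more interesting the farther apart the clusters of its items sit in the concept hierarchy, can be sketched in a few lines; the toy hierarchy and item names below are invented and are not taken from the Mammo ontology.

```python
parent = {                      # child -> parent edges of a toy concept hierarchy
    "mass": "clinical_feature", "pain": "clinical_feature",
    "calcification": "radiological_obs", "density": "radiological_obs",
    "clinical_feature": "root", "radiological_obs": "root",
}

def path_to_root(node: str) -> list[str]:
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def cluster_distance(a: str, b: str) -> int:
    """Number of edges between a and b through their lowest common ancestor."""
    pa, pb = path_to_root(a), path_to_root(b)
    common = next(n for n in pa if n in pb)
    return pa.index(common) + pb.index(common)

# {pain} => {calcification} spans two clusters: high distance, interesting rule.
print(cluster_distance("pain", "calcification"))  # 4
# {pain} => {mass} stays inside one cluster: low distance, uninteresting rule.
print(cluster_distance("pain", "mass"))           # 2
```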

Keywords: association rule, conceptual clusters, interestingness measures, ontology knowledge mining, ranking

Procedia PDF Downloads 322
19197 Characterization of Fe Doped ZnO Synthesised by Sol-Gel and Combustion Routes

Authors: M. Ravindiran, P. Shankar

Abstract:

This paper compares two synthesis methods, sol-gel and combustion, for preparing Fe-doped ZnO nanomaterial. Characterization results for structural, optical, and magnetic properties were analyzed for the materials derived from the sol-gel and combustion syntheses. Magnetic studies of the prepared compounds reveal that the combustion-derived material has a good magnetization of 50 emu/g with a better hysteresis loop.

Keywords: DMS, combustion, ferromagnetic, synthesis methods

Procedia PDF Downloads 426
19196 Combination of Geological, Geophysical and Reservoir Engineering Analyses in Field Development: A Case Study

Authors: Atif Zafar, Fan Haijun

Abstract:

A sequence of different reservoir engineering methods and tools for reservoir characterization and field development is presented in this paper, using real data from the Jin Gas Field in the L-Basin of Pakistan. The basic concept behind this work is to highlight the importance of well test analysis in a broader sense (i.e., reservoir characterization and field development), rather than merely determining permeability and skin parameters. In reservoir characterization we normally rely on well test analysis to some extent, but in field development planning it has become a forgotten tool, specifically for locating new development wells. This paper describes the successful implementation of well test analysis in the Jin Gas Field, where the main uncertainties were identified during the initial stage of field development, when the location of a new development well had been marked only on the basis of geological and geophysical (G&G) data. The seismic interpretation could not detect one of the boundaries (a fault, sub-seismic fault, or heterogeneity) near the main and only producing well of the Jin Gas Field, whereas the results of the model from the well test analysis played a crucial role in proposing the location of the second well of the newly discovered field. The results from the different well test analysis methods are also integrated with, and supported by, other reservoir engineering tools, i.e., the material balance method and the volumetric method. In this way, a comprehensive workflow and algorithm are obtained for integrating well test analyses with geological and geophysical analyses for reservoir characterization and field development. On the strong basis of this workflow, it was established that the proposed location of the new development well was not justified and that the well should lie elsewhere, away from the south direction.

Keywords: field development plan, reservoir characterization, reservoir engineering, well test analysis

Procedia PDF Downloads 364
19195 Comparison of Some Robust Regression Methods with the OLS Method, with an Application

Authors: Sizar Abed Mohammed, Zahraa Ghazi Sadeeq

Abstract:

The classical least squares (OLS) method estimates linear regression parameters and, when its assumptions hold, yields estimators with good properties such as unbiasedness, minimum variance, and consistency. When the data are contaminated with outliers, however, alternative statistical techniques, namely robust (or resistant) methods, have been developed to estimate the parameters. In this paper, three robust methods are studied: the maximum-likelihood-type estimator (M-estimator), the modified maximum-likelihood-type estimator (MM-estimator), and the least trimmed squares estimator (LTS-estimator); their results are compared with the OLS method. These methods are applied to real data taken from the Duhok company for manufacturing furniture, and the results are compared using three criteria: mean squared error (MSE), mean absolute percentage error (MAPE), and mean sum of absolute error (MSAE). The important conclusions of this study are as follows. In the furniture-line data, the atypical values detected by the four methods were similar and very close to the rest of the data, indicating that the distribution of the errors is close to normal; in the doors-line data, however, OLS detected fewer atypical values than the robust methods, meaning that the error distribution departs markedly from normality. Another important conclusion is that, for the doors line, the parameter estimates obtained by OLS are very far from those obtained by the robust methods: the LTS-estimator gave better results under the MSE criterion, the M-estimator gave better results under the MAPE criterion, and the MM-estimator was better under the MSAE criterion. The programs S-Plus (version 8.0, Professional 2007), Minitab (version 13.2), and SPSS (version 17) were used to analyze the data.
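To see the contrast the abstract describes, the sketch below fits OLS and a Huber M-estimator (via statsmodels) to synthetic data contaminated with outliers; LTS- and MM-estimation are not shown here, as they are more readily available elsewhere (e.g., in R's robustbase).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, 100)   # true intercept 2.0, slope 0.5
y[:5] += 15                                    # contaminate five points

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
huber = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()  # M-estimation via IRLS

print("OLS   params:", ols.params)    # pulled toward the outliers
print("Huber params:", huber.params)  # stays near (2.0, 0.5)
```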

Keywords: robust regression, LTS-estimator, M-estimator, MSE

Procedia PDF Downloads 232
19194 Clinical Features of Acute Aortic Dissection Patients Initially Diagnosed with ST-Segment Elevation Myocardial Infarction

Authors: Min Jee Lee, Young Sun Park, Shin Ahn, Chang Hwan Sohn, Dong Woo Seo, Jae Ho Lee, Yoon Seon Lee, Kyung Soo Lim, Won Young Kim

Abstract:

Background: Acute myocardial infarction (AMI) concomitant with acute aortic syndrome (AAS) is rare, but prompt recognition of concomitant AAS is crucial, especially in patients with ST-segment elevation myocardial infarction (STEMI), because misdiagnosis with early thrombolytic or anticoagulant treatment may result in catastrophic consequences. Objectives: This study investigated the clinical features of STEMI patients with concomitant AAS that may provide diagnostic clues. Methods: Between 1 January 2010 and 31 December 2014, 22 patients with an initial diagnosis of both acute coronary syndrome (AMI or unstable angina) and AAS (aortic dissection, intramural hematoma, or ruptured thoracic aneurysm) in our emergency department were reviewed. We excluded 10 patients who were transferred from other hospitals and 4 patients with non-STEMI, leaving a total of 8 patients with STEMI concomitant with AAS for analysis. Results: The mean age of the study patients was 57.5±16.31 years; five patients had Stanford type A and three had type B aortic dissection. Six patients had ST-segment elevation in anterior leads and two in inferior leads. Most patients had acute-onset, severe chest pain, but none had the tearing chest pain typical of dissection. Serum troponin I was elevated in three patients, but D-dimer was elevated in all. Aortic regurgitation or a regional wall motion abnormality was found in four patients. However, a widened mediastinum was seen in all study patients. Conclusion: When patients with STEMI have an elevated D-dimer and a widened mediastinum, concomitant AAS should be suspected.

Keywords: aortic dissection, myocardial infarction, ST-segment, d-dimer

Procedia PDF Downloads 398
19193 Regularized Euler Equations for Incompressible Two-Phase Flow Simulations

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique for incompressible two-phase flow simulations. The technique is known as the observable method, after the notion of observability: any feature smaller than the actual resolution (physical or numerical), e.g., the wire size in hot-wire anemometry or the grid size in numerical simulations, cannot be captured or observed. Unlike most regularization techniques, which act on the numerical discretization, the observable method is employed at the PDE level during the derivation of the equations. Difficulties in the simulation and analysis of realistic fluid flow often result from discontinuities (or near-discontinuities) in the calculated fluid properties or state. Accurately capturing these discontinuities is especially crucial when simulating flows involving shocks, turbulence, or sharp interfaces. Over the past several years, the properties of this new regularization technique have been investigated and show the capability of simultaneously regularizing shocks and turbulence. The observable method has been applied to direct numerical simulations of shocks and turbulence, where the discontinuities are successfully regularized and flow features are well captured. In the current paper, the observable method is extended to two-phase interfacial flows. Multiphase flows share a key feature with shocks and turbulence: nonlinear irregularity caused by the nonlinear terms in the governing equations, namely the Euler equations. In direct numerical simulations of two-phase flows, the interfaces are usually treated as a smooth transition of properties from one fluid phase to the other. However, in high-Reynolds-number or low-viscosity flows, the nonlinear terms generate smaller scales that sharpen the interface, causing discontinuities. Many numerical methods for two-phase flows fail in the high-Reynolds-number case, while others depend on the numerical diffusion introduced by the spatial discretization. The observable method regularizes this nonlinear mechanism by filtering the convective terms, and this process is inviscid. The filtering effect is controlled by an observable scale, usually about one grid length. A single rising bubble and the Rayleigh-Taylor instability are studied, in particular, to examine the performance of the observable method. A pseudo-spectral method, which introduces no numerical diffusion, is used for spatial discretization, and a total variation diminishing (TVD) Runge-Kutta method is applied for time integration. The observable incompressible Euler equations are solved for these two problems. In the rising bubble problem, the terminal velocity and shape of the bubble are examined and compared with experiments and other numerical results. In the Rayleigh-Taylor instability, the shape of the interface is studied for different observable scales, and the spike and bubble velocities, as well as positions (under a proper observable scale), are compared with other simulation results. The results indicate that this regularization technique can potentially regularize the sharp interface in two-phase flow simulations.
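The abstract does not reproduce the governing equations; as a hedged illustration, a common Leray-type inviscid regularization, in which only the convective velocity is filtered at the observable scale α (about one grid length), takes the form:

```latex
% Illustrative Leray-type form with a Helmholtz filter at scale \alpha;
% the authors' observable equations may differ in detail.
\frac{\partial \mathbf{u}}{\partial t}
  + \bar{\mathbf{u}} \cdot \nabla \mathbf{u} + \nabla p = 0,
\qquad
\nabla \cdot \bar{\mathbf{u}} = 0,
\qquad
\bar{\mathbf{u}} = \left( 1 - \alpha^{2} \nabla^{2} \right)^{-1} \mathbf{u}.
```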

Keywords: Euler equations, incompressible flow simulation, inviscid regularization technique, two-phase flow

Procedia PDF Downloads 502
19192 Change Detection Analysis on Support Vector Machine Classifier of Land Use and Land Cover Changes: Case Study on Yangon

Authors: Khin Mar Yee, Mu Mu Than, Kyi Lint, Aye Aye Oo, Chan Mya Hmway, Khin Zar Chi Winn

Abstract:

The dynamic Land Use and Land Cover (LULC) changes in Yangon have generally brought improvements in human welfare and economic development over the last twenty years. Mapping LULC is crucially important for the sustainable development of the environment. However, it is difficult to obtain exact data on how environmental factors influence the LULC situation at various scales, because the natural environment is composed of non-homogeneous surface features, so the satellite data also contain mixed pixels. The main objective of this study is the calculation of accuracy based on change detection of LULC changes by Support Vector Machines (SVMs). For this research work, the main data were satellite images from 1996, 2006, and 2015. Change detection statistics were computed to compile a detailed tabulation of changes between two classification images, and the SVM process was applied with a soft approach at the allocation stage as well as at the testing stage to achieve higher accuracy. The results of this paper show that vegetation and cultivated area decreased (by an average total of 29% from 1996 to 2015) because of conversion to built-up area, which more than doubled (an average total of 30% from 1996 to 2015). The error matrix and confidence limits validated the results of the LULC mapping.
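A compact sketch of the per-pixel classification and change-tabulation steps is given below with scikit-learn; the number of spectral bands, the three classes, and all data are invented stand-ins for the study's imagery.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 4))      # 4 band values per training pixel
y_train = rng.integers(0, 3, size=300)   # 0=vegetation, 1=built-up, 2=water

svm = SVC(kernel="rbf", probability=True).fit(X_train, y_train)  # soft outputs

pixels_1996 = rng.normal(size=(1000, 4))
pixels_2015 = rng.normal(size=(1000, 4))
c96, c15 = svm.predict(pixels_1996), svm.predict(pixels_2015)

# Change-detection statistics: a from-to tabulation between the two dates.
change_matrix = np.zeros((3, 3), dtype=int)
np.add.at(change_matrix, (c96, c15), 1)
print(change_matrix)  # rows: 1996 class, columns: 2015 class
```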

Keywords: land use and land cover change, change detection, image processing, support vector machines

Procedia PDF Downloads 139
19191 Effect of Graded Levels of Detoxified Jatropha curcas on the Performance Characteristics of Cockerel Birds

Authors: W. S. Lawal, T. Akande

Abstract:

Four different methods were employed to detoxify Jatropha curcas: a physical method (soaking and sun-drying), a chemical method (using methylated spirit, hexane, and methanol), a biological method (using Aspergillus niger with sun-drying for 7 days, followed by Bacillus licheniformis), and a combined method (the combination of the chemical and biological methods). Phorbol ester analysis was carried out after detoxification, and the combined method was found to be the most effective (P<0.05). Detoxified Jatropha from each of these methods was sun-dried and ground for easy inclusion into poultry feed. Detoxified Jatropha was included at 0% (the control), 0.5%, 1%, 2%, 3%, 4%, and 5%; for the combined method the inclusion was increased up to 7%, because the birds were able to tolerate it. 405 day-old broiler chicks were used to test the effect of detoxified Jatropha curcas on performance, with 5 birds per treatment and 3 replicates; the experiment lasted 8 weeks. The highest mortality was obtained with the physical method; birds on the chemical method tolerated up to 3% Jatropha curcas; the biological method was better, as the birds were comfortable at 5%; but the best was the combined method, in which the birds did very well at 7%, with the lowest mortality and the highest weight gain (P<0.05), and this method is recommended.

Keywords: phorbol ester, inclusion level, tolerance level, Jatropha curcas

Procedia PDF Downloads 404
19190 Using Infrared Thermography, Photogrammetry and a Remotely Piloted Aircraft System to Create 3D Thermal Models

Authors: C. C. Kruger, P. Van Tonder

Abstract:

Concrete deteriorates over time, and the deterioration can be escalated by multiple factors. When deterioration lies beneath the concrete’s surface, it can go undetected, all the more so at high elevations. Establishing the severity of such defects can prove difficult, so the need for efficient, safe, and economical methods of finding them becomes ever more important. Current thermographic methods for finding defects require equipment such as scaffolding to reach higher elevations, which can be time-consuming and costly, and the risks involved when personnel scaffold or abseil to such heights are high. Accordingly, combining a thermal camera with a remotely piloted aircraft system could provide better diagnostic methods. The data can then be constructed into a 3D thermal model for easy representation of the results.

Keywords: concrete, infrared thermography, 3D thermal models, diagnostic

Procedia PDF Downloads 173
19189 Empirical Investigation of Bullwhip Effect with Sensitivity Analysis in Supply Chain

Authors: Shoaib Yousaf

Abstract:

The main purpose of this research is the empirical investigation of the bullwhip effect, under sensitivity analysis, in a two-tier supply chain. Simulation modeling has been applied as the research methodology to carry out the sensitivity analysis of the bullwhip effect in the rice industry of Pakistan. The research comprises two case studies chosen as a sample. The results confirm that a reduction in production delay reduces the bullwhip effect, which conforms to the time-compression paradigm and the significance of reducing production delay to lessen demand amplification. The results also indicate that increasing the time to adjust inventory decreases the bullwhip effect. Furthermore, since decreasing the value of alpha increases the damping effect of the exponential smoother, it is not surprising that it also reduces the bullwhip effect. Moreover, reducing the time to adjust work in progress also reduces the bullwhip effect. This research will help practitioners and operations managers reduce the major costs of their products in three ways: they can (i) reduce inventory levels, (ii) better utilize capacity, and (iii) improve forecasting techniques. However, this study is based on a two-tier supply chain, while real supply chains have many tiers; hence, future work should extend the analysis across more than two tiers.
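The alpha finding can be reproduced with a toy model: a retailer forecasts demand by exponential smoothing and places order-up-to orders on the producer, and the bullwhip ratio Var(orders)/Var(demand) falls as alpha decreases. The lead time and demand process below are illustrative, not the study's rice-industry data.

```python
import numpy as np

def bullwhip_ratio(alpha: float, lead: int = 2, n: int = 5000, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    demand = 100 + rng.normal(0, 10, n)
    forecast, orders = demand[0], []
    for d in demand:
        prev = forecast
        forecast = alpha * d + (1 - alpha) * forecast   # exponential smoothing
        orders.append(d + lead * (forecast - prev))     # order-up-to adjustment
    return np.var(orders) / np.var(demand)

for a in (0.8, 0.4, 0.1):
    print(f"alpha={a}: bullwhip ratio = {bullwhip_ratio(a):.2f}")
```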

Keywords: bullwhip effect, rice industry, supply chain dynamics, simulation, sensitivity analysis

Procedia PDF Downloads 144
19188 Monitoring Air Pollution Effects on Children for Supporting Public Health Policy: Preliminary Results of MAPEC_LIFE Project

Authors: Elisabetta Ceretti, Silvia Bonizzoni, Alberto Bonetti, Milena Villarini, Marco Verani, Maria Antonella De Donno, Sara Bonetta, Umberto Gelatti

Abstract:

Introduction: Air pollution is a global problem. In 2013, the International Agency for Research on Cancer (IARC) classified air pollution and particulate matter as carcinogenic to humans. Studying the health effects of air pollution in children is very important because they are a high-risk group, and early exposure during childhood can increase the risk of developing chronic diseases in adulthood. MAPEC_LIFE (Monitoring Air Pollution Effects on Children for supporting public health policy) is a project funded by the EU Life+ Programme which intends to evaluate the associations between air pollution and early biological effects in children and to propose a model for estimating the global risk of early biological effects due to air pollutants and other factors in children. Methods: The study was carried out on 6-8-year-old children living in five Italian towns in two different seasons. Two biomarkers of early biological effects, primary DNA damage detected with the comet assay and the frequency of micronuclei, were investigated in the buccal cells of the children. Details of the children's diseases, socio-economic status, exposure to other pollutants, and lifestyle were collected using a questionnaire administered to the children's parents. Child exposure to urban air pollution was assessed by analysing PM0.5 samples collected in the school areas for PAH and nitro-PAH concentrations, lung toxicity, and in vitro genotoxicity on bacterial and human cells. Data on the chemical features of the urban air during the study period were obtained from the Regional Agency for Environmental Protection. The project also created the opportunity to approach the issue of air pollution with the children, trying to raise their awareness of air quality, its health effects, and healthy behaviors by means of an educational intervention in the schools. Results: 1,315 children were recruited and participated in the first sampling campaign in the five towns; the second campaign, on the same children, is still ongoing. The preliminary results of the tests on the buccal mucosa cells of the children will be presented at the conference, as well as preliminary data on the chemical composition and the toxicity and genotoxicity features of the PM0.5 samples. The educational package was tested on 250 primary-school children and proved very useful, improving the children's knowledge of air pollution and its effects and stimulating their interest. Conclusions: The associations between levels of air pollutants, air mutagenicity, and biomarkers of early effects will be investigated. A tentative model to calculate the global absolute risk of early biological effects from air pollution and other variables together will be proposed and may be useful for supporting policy-making and community interventions to protect children from the possible health effects of air pollutants.

Keywords: air pollution exposure, biomarkers of early effects, children, public health policy

Procedia PDF Downloads 330
19187 Multiscale Modelling of Textile Reinforced Concrete: A Literature Review

Authors: Anicet Dansou

Abstract:

Textile reinforced concrete (TRC) is increasingly used nowadays in various fields, in particular civil engineering, where it is mainly used for the reinforcement of damaged reinforced concrete structures. TRC is a composite material composed of multi- or uni-axial textile reinforcements coupled with a fine-grained cementitious matrix. The TRC composite is an alternative to the traditional fiber reinforced polymer (FRP) composite: it has good mechanical performance and better temperature stability, and it also better meets the criteria of sustainable development. TRCs are highly anisotropic composite materials with nonlinear hardening behavior, and their macroscopic behavior depends on multi-scale mechanisms. The characterization of these materials through numerical simulation has been the subject of many studies. Since TRCs are multiscale materials by definition, numerical multi-scale approaches have emerged as among the most suitable methods for their simulation. These approaches aim to incorporate information about microscale constitutive behavior, mesoscale behavior, and macro-scale structural response within a unified model that enables rapid simulation of structures; the computational costs are hence significantly reduced compared to standard simulation at the fine scale. The fine-scale information can be introduced implicitly into the macro-scale model: approaches of this type are called non-classical. A representative volume element is defined, and the fine-scale information is homogenized over it; analytical and computational homogenization and nested mesh methods belong to these approaches. In classical approaches, on the other hand, the fine-scale information is introduced explicitly into the macro-scale model; such approaches include adaptive mesh refinement strategies, sub-modelling, domain decomposition, and multigrid methods. This research presents the main principles of numerical multiscale approaches. Advantages and limitations are identified according to several criteria: the assumptions made (fidelity), the number of input parameters required, the calculation costs (efficiency), etc. A bibliographic study of recent results and advances, and of the scientific obstacles to be overcome in order to achieve an effective simulation of textile reinforced concrete in civil engineering, is presented. A comparative study is further carried out between several methods for the simulation of TRCs used for the structural reinforcement of reinforced concrete structures.

Keywords: composites structures, multiscale methods, numerical modeling, textile reinforced concrete

Procedia PDF Downloads 108
19186 The Analysis of Acute Pancreatitis Patients in a University Hospital

Authors: Adnan Sahin, Ufuk Uylas, Ercument Pasaoglu, Tarik Caga, Enver Ihtiyar, Serdar Erkasap, Ersin Ates, Fatih Yasar

Abstract:

Background: In this study, the demographic features, etiological factors, and management of acute pancreatitis were evaluated. Methods: 106 patients hospitalized for acute pancreatitis between 1 January 2015 and 31 December 2015 in the Department of General Surgery of ESOGUMF were retrospectively examined. Data on gender, signs and symptoms, etiological factors, WBC, AST, ALT, amylase, USG and CT findings, treatment options, ERCP, complications, and mortality rate were analysed. Results: The mean age of the patients was 58.8 years (53 men and 53 women). The causes of acute pancreatitis were as follows: gallbladder stones in 89 patients, hyperlipidemia in 5, and idiopathic in 16. Severe pancreatitis developed in 16 patients in the biliary pancreatitis group, and ERCP was performed. Cholecystectomy was performed on all patients in the biliary pancreatitis group after the acute pancreatitis subsided. The mean hospital stay was 9.33 (range 2-37) days. Discussion and conclusion: Severe acute pancreatitis is a disease with considerable mortality, and the most common etiological cause of acute pancreatitis is of biliary origin. The first-line treatment of acute pancreatitis is medical. Cholecystectomy should be planned for all patients with biliary acute pancreatitis after the attack subsides. ERCP is a useful treatment modality in the case of clinical worsening or suspicion of acute cholangitis. ERCP was used in 16 patients in our series; these patients had good morbidity outcomes, and their mean hospital stay was shorter than that of the others. We suggest that ERCP should be planned selectively and conservatively.

Keywords: acute pancreatitis, ERCP, morbidity, treatment

Procedia PDF Downloads 346
19185 Teaching English to Engineers: Between English Language Teaching and Psychology

Authors: Irina-Ana Drobot

Abstract:

Teaching English to engineers is part of English for Specific Purposes (ESP), a domain that attracts students' attention especially under the current conditions of finding jobs and establishing partnerships outside Romania. The paper analyses the existing textbooks together with the teaching strategies they adopt. Teaching English to engineering students can intersect with domains such as psychology and cultural studies in order to teach them efficiently. Textbooks for ESP students, ranging from those at the Faculty of Economics to those at the Faculty of Engineering, have shifted away from specialized vocabulary, grammar drills, and reading comprehension questions, and toward communicative methods and the practical use of language. At present, in Romania, grammar is neglected in favour of communicative methods. The current interest in translation studies may indicate a return to this type of method, since only translation specialists can distinguish among specialized terms and determine which are most suitable in a translation. Engineers are currently encouraged to learn English in order to do their own translations in their own field. This paper analyses the extent to which it is useful to teach engineering students to do translations in their field, using cognitive psychology applied to language teaching and including issues such as motivation and social psychology. Teaching general English to engineering students can result in a lack of interest, but they can be motivated by practical aspects that will help them in their field. This is why this paper takes an interdisciplinary approach to teaching English to engineers.

Keywords: cognition, ESP, motivation, psychology

Procedia PDF Downloads 264
19184 Stressful Events and Serious Mood Disorders

Authors: Horesh Reinman Netta

Abstract:

Objectives: To examine the relationship between stressful life events and recurrent major depressive disorders. Methods: Three groups of 50 subjects were assessed: one group had a recurrent major depressive disorder with melancholic features, the second met the criteria for borderline personality disorder, and the third consisted of healthy controls. The Structured Clinical Interview for DSM-IV Axis I Disorders and the Structured Clinical Interview for DSM-IV Axis II Disorders were used for diagnosis. The Israel Psychiatric Epidemiology Research Interview (IPERI) Life Event Scale and the Coddington Life Events Schedule (CLES) were used to measure life events, which were confirmed with a semi-structured interview. The Beck Depression Inventory and the Satisfaction with Life scales were also administered. Results: The total number of loss-related events in childhood and in the year preceding the first episode was significantly higher in the affective disorder group than in the two control groups. The total number of life events (LE) and of uncontrolled and independent events was also higher in the depressed patients in the year preceding the first episode. No category of stressful life events (SLE) differentiated the three groups during any period following the first depressive episode. Conclusions: SLE play an important role in the onset of affective disorders. Specific kinds of SLE occurring in childhood and in the year preceding a first episode appear to have particular significance, while SLE may have a lesser role in the maintenance of the illness.

Keywords: mood disorders, recurrent depression, stress, life events

Procedia PDF Downloads 108
19183 Analysis of Spatial Form and Gene of Historical and Cultural Settlements in Mountainous Areas: Illustrated by the Example of Anju Ancient Town

Authors: Sun Gang

Abstract:

A variety of functional spaces are distributed along the vast mountain waterfront. Their functional positioning presents a spontaneous form of settlement space, and their construction features show a passive impact on the natural environment. As a precious heritage that carries human civilization and promotes historical culture, the traditional settlement space in mountainous areas is also the local expression of the landscape-pattern gene. Under the impact of rapid urban construction and the transformation of social consumption demand, the original texture, scale, and ecology of traditional mountain settlement spaces, especially historical and cultural settlement spaces, have been affected, and the decline of their characteristics hinders development. This paper selects Anju Ancient Town, the fourth-largest ancient city in China, located in a city of mountains and waters, as its research object. Combining spatial analysis with other methods, it studies the characteristics and causes of the town's spatial morphology, analyzes the internal logic of its formation and development, and builds a gene-analysis map, exploring the possibilities for settlement inheritance and development and providing a reference for the construction, protection, and inheritance of traditional mountain settlements.

Keywords: mountain traditional settlement, historical and cultural settlement space, spatial form, spatial gene

Procedia PDF Downloads 90
19182 From Electroencephalogram to Epileptic Seizure Detection by Using Artificial Neural Networks

Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone

Abstract:

Seizures are the main factor affecting the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made by continuous electroencephalogram (EEG) monitoring. Seizure identification on EEG signals is done manually by epileptologists, and this process is usually very long and error-prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an artificial neural network classifier, trained by applying the multilayer perceptron algorithm and by using a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding-window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% in tests on the data of a single patient retrieved from a publicly available EEG dataset.
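A minimal sketch of the pipeline, sliding-window feature extraction over one EEG channel followed by a multilayer perceptron, is shown below; the window length, the three features, and the network size are illustrative assumptions, not Training Builder's actual configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(signal: np.ndarray, width: int = 256, step: int = 128) -> np.ndarray:
    """Slide a window over the signal and compute simple per-window features."""
    feats = []
    for start in range(0, len(signal) - width, step):
        w = signal[start:start + width]
        feats.append([w.mean(), w.std(), np.abs(np.diff(w)).mean()])
    return np.array(feats)

rng = np.random.default_rng(0)
eeg = rng.normal(size=60 * 256)          # 60 s of one channel sampled at 256 Hz
X = window_features(eeg)
y = rng.integers(0, 2, size=len(X))      # toy labels: 1 = seizure window

mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(X, y)
print(f"training accuracy: {mlp.score(X, y):.2f}")
```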

Keywords: artificial neural network, data mining, electroencephalogram, epilepsy, feature extraction, seizure detection, signal processing

Procedia PDF Downloads 188
19181 Site Analysis’ Importance as a Valid Factor in Building Design

Authors: Mekwa Eme, Anya Chukwuma

Abstract:

The act of evaluating a particular site, physically and socially, in order to create a design solution that addresses the physical and interior environment of the location is known as architectural site analysis. This essay describes site analysis as a valid design factor. As the introduction and supporting research show, site evaluation and analysis are crucial to good design in terms of topography, orientation, site size, accessibility, rainfall, wind direction, and times of sunrise and sunset. Methodology: Both quantitative and qualitative analyses are used in this paper, with primary and secondary data gathered via the case-study approach, published literature, journals, the internet, a local poll, oral interviews, inquiries, and in-person interviews. The purpose is to clarify the benefits of site analysis for the design process and its implications for the working or building stage. Results: Each site's criteria are unique in terms of factors such as soil, plants, trees, accessibility, topography, and security. This makes it easier for the architect and environmentalist to decide on the concept, shape, and supporting structures of the design. Site analysis is crucial because, before any design work is done, the nature of the target location must be determined through site visits and research, covering topics such as location, contours, site features, and accessibility. It is therefore a key component of architectural education, enabling students and working architects to understand the nature of the site they will work on. The building's orientation, the site's circulation, and the site's sustainability may all be determined through thorough research of the site's features.

Keywords: analysis, climate, statistics, design

Procedia PDF Downloads 249
19180 Compressive Strength Evaluation of Underwater Concrete Structures Integrating the Combination of Rebound Hardness and Ultrasonic Pulse Velocity Methods with Artificial Neural Networks

Authors: Seunghee Park, Junkyeong Kim, Eun-Seok Shin, Sang-Hun Han

Abstract:

In this study, two kinds of nondestructive evaluation (NDE) techniques, the rebound hardness and ultrasonic pulse velocity methods, are investigated for the effective maintenance of underwater concrete structures. A new methodology to estimate underwater concrete strengths more effectively, named artificial neural network (ANN)-based concrete strength estimation with the combination of rebound hardness and ultrasonic pulse velocity methods, is proposed and verified through a series of experimental works.
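The combination idea is easy to picture as a small regression network mapping the two NDE readings to strength; the synthetic calibration data and network shape below are assumptions for the sketch, not the study's measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
rebound = rng.uniform(20, 50, 200)             # Schmidt hammer rebound number
upv = rng.uniform(3.5, 5.0, 200)               # ultrasonic pulse velocity, km/s
strength = 0.8 * rebound + 12 * upv + rng.normal(0, 2, 200)  # toy ground truth, MPa

X = np.column_stack([rebound, upv])
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, strength)
print(ann.predict([[35.0, 4.2]]))              # estimated strength for one specimen
```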

Keywords: underwater concrete, rebound hardness, Schmidt hammer, ultrasonic pulse velocity, ultrasonic sensor, artificial neural networks, ANN

Procedia PDF Downloads 532
19179 Seismic Hazard Prediction Using Seismic Bumps: Artificial Neural Network Technique

Authors: Belkacem Selma, Boumediene Selma, Tourkia Guerzou, Abbes Labdelli

Abstract:

Natural disasters have occurred and will continue to cause human and material damage; therefore, truly "preventing" natural disasters will never be possible. However, their prediction is becoming possible with the advancement of technology: even if natural disasters are effectively inevitable, their consequences may be partly controlled. The rapid growth and progress of artificial intelligence (AI) has had a major impact on the prediction of natural disasters and on the risk assessment that is necessary for effective disaster reduction. Predicting earthquakes to prevent the loss of human lives and property damage is an important goal, which is why it is crucial to develop techniques for predicting this natural disaster. The present study analyzes the ability of artificial neural networks (ANNs) to predict earthquakes that occur in a given area. The data used describe the problem of forecasting high-energy (above 10^4 J) seismic bumps in a coal mine, using two longwalls as an example. For this purpose, seismic bump data obtained from mines were analyzed. The results show that the ANN was able to predict the earthquake parameters with high accuracy: the classification accuracy of the neural networks is more than 94%, and the models developed are efficient, robust, and depend only weakly on the initial database.

Keywords: earthquake prediction, ANN, seismic bumps

Procedia PDF Downloads 127
19177 A Study of Binding Methods and Techniques in the Safavid Era, Emphasizing Iranian Shahnamehs (16-18th Century AD/10-12th Century AH)

Authors: Ashrafosadat Mousavi Laer, Elaheh Moravej

Abstract:

The art of binding was simple and elementary at the beginning of Islam; it thrived gradually and continued its development as an independent art. Identification of the binding techniques and the materials used in covers, together with investigation of the ornamentation, gives us indexes for better identification of the different doctrines and methods of the time. Catalogers of manuscripts usually pay attention to the material, color, artistic elegance, damage, and exquisiteness of the cover, and the criterion for classification of the covers is their artistic nature and material. The 15th century AD (9th century AH) was the period of the binding art's development, in which the most beautiful covers were produced by the so-called ‘burning’ method. In the 16th century AD (10th century AH), in the Safavid era, the art changed completely, and a fundamental evolution occurred in the technique and method of binding. The greatest change was the extensive use of stamps, made mostly of steel and copper, which were pressed against the leather; the resulting covers were called ‘beat’ covers. In this paper, the writing and bookbinding of about 32 Shahnamehs of the Safavid era, available in Iranian libraries and museums, are studied. An analytical-statistical study shows that four methods were used: beat, burning, mosaic, and oily. 69 percent of the covers of these copies are cardboard with a leather coating (goatskin) and were produced by the burning and beat methods. The reasons are that these two methods were common in the Safavid era and could be performed only on leather, and the most desirable and commonly used leather of the time was goatskin, which was the best option for the durability of the cover decoration and for preserving the book. In addition, it offered a suitable medium for the binding artist's creativity and innovation.

Keywords: Shahnameh, Safavid era, bookbinding, beat cover, burning cover

Procedia PDF Downloads 238
19177 3D Steady and Transient Centrifugal Pump Flow within Ansys CFX and OpenFOAM

Authors: Clement Leroy, Guillaume Boitel

Abstract:

This paper presents a comparative benchmarking review of steady and transient three-dimensional (3D) flow computations in a centrifugal pump using commercial (Ansys CFX) and open-source (OpenFOAM) computational fluid dynamics (CFD) software. In a centrifugal rotordynamic pump, the fluid enters the impeller along the rotation axis and is accelerated to increase the pressure, flowing radially outward into another stage, a vaned diffuser, or a volute casing, from which it finally exits into a downstream pipe. Simulations are carried out at the best efficiency point (BEP) and at part load, for single-phase flow, with several turbulence models. The results are compared with the overall performance report from experimental data. The use of CFD technology in industry is still limited by high computational costs, and even more by the high cost of commercial CFD software and high-performance computing (HPC) licenses. The main objectives of the present study are to define an OpenFOAM methodology for high-quality 3D steady and transient turbomachinery CFD simulation and to conduct a thorough time-accurate performance analysis. In addition, a detailed comparison between the computational methods and features of the latest Ansys release (18) and OpenFOAM is carried out to assess the accuracy and industrial applicability of those solvers. Finally, an automated connected workflow (IoT) for turbine blade applications is presented.

Keywords: benchmarking, CFX, internet of things, openFOAM, time-accurate, turbomachinery

Procedia PDF Downloads 205