Search results for: post-editing machine translation output
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5194

1054 A Pipeline for Detecting Copy Number Variation from Whole Exome Sequencing Using Comprehensive Tools

Authors: Cheng-Yang Lee, Petrus Tang, Tzu-Hao Chang

Abstract:

Copy number variations (CNVs) play an important role in many human diseases, such as autism, schizophrenia and a number of cancers. Many disease-related variants are found in genome coding regions, and whole exome sequencing (WES) is a cost-effective and powerful technology for detecting variants that are enriched in exons, with potential applications in clinical settings. Although several algorithms have been developed to detect CNVs from WES data and have been compared against other algorithms to find the most suitable methods on their authors' own samples, there has been no consistent dataset across most algorithms with which to evaluate CNV detection ability. Moreover, most algorithms use a command-line interface, which may greatly limit the analysis capability of many laboratories. We create a series of simulated WES datasets from UCSC hg19 chromosome 22 and then evaluate the CNV detection ability of 19 algorithms from the OMICtools database using these simulated datasets. We compute the sensitivity, specificity and accuracy of each algorithm for validation of the exome-derived CNVs. After comparing the 19 algorithms from the OMICtools database, we construct a platform that installs all of the algorithms in a virtual machine (e.g., VirtualBox), which can be set up conveniently on local computers, and then create a simple script that is easy to use for detecting CNVs with the algorithms selected by users. We also build a table that details, for all of the algorithms, aspects such as input requirements and CNV detection ability, providing users a specification for choosing the optimum algorithm.
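For reference, the validation metrics used in the comparison can be computed from the confusion counts of each algorithm's calls against the simulated ground truth. A minimal sketch follows; the counts are illustrative, not the study's actual data.

```python
# Hedged sketch: sensitivity, specificity and accuracy of a CNV caller
# evaluated against simulated ground-truth regions (illustrative counts only).

def cnv_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute validation metrics from confusion counts."""
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0   # recall of true CNV regions
    specificity = tn / (tn + fp) if (tn + fp) else 0.0   # correctly rejected non-CNV regions
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"sensitivity": sensitivity, "specificity": specificity, "accuracy": accuracy}

# Example: hypothetical counts for one algorithm on the simulated chromosome 22 dataset
print(cnv_metrics(tp=85, fp=10, tn=900, fn=15))
```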

Keywords: whole exome sequencing, copy number variations, omictools, pipeline

Procedia PDF Downloads 316
1053 Flow: A Fourth Musical Element

Authors: James R. Wilson

Abstract:

Music is typically defined as having the attributes of melody, harmony, and rhythm. In this paper, a fourth element is proposed: "flow". "Flow" is a new dimension in music that has always been present but only recently identified and measured. The Adagio "Flow Machine" enables us to envision this component and even suggests a new approach to music theory and analysis. The Adagio was created specifically to measure the underlying "flow" in music. The Adagio is an entirely new way to experience and visualize the music, to assist in performing music (both as a conductor and/or performer), and to provide a whole new methodology for music analysis and theory. The Adagio utilizes musical "hit points", such as a transition from one musical section to another (for example, in a composition using the sonata form, the transition from the exposition to the development section), to help define the composition's flow rate. Once the flow rate is established, the Adagio can be used to determine whether the composer/performer/conductor has correctly maintained the proper rate of flow throughout the performance. An example is provided using Mozart's Piano Concerto No. 21. Working with the Adagio yielded an unexpected windfall: it was determined via an empirical study conducted at Nova University's Biofeedback Lab that watching the Adagio helped volunteers participating in a controlled experiment recover from stressors significantly faster than the control group. The Adagio can be thought of as a new arrow in the musicologist's quiver. It provides a new, unique way of viewing the psychological impact and aesthetic effectiveness of music composition. Additionally, with the current worldwide access to multimedia via the internet, flow analysis can be performed and shared with others with little time and/or expense.

Keywords: musicology, music analysis, music flow, music therapy

Procedia PDF Downloads 174
1052 The Effect of Tool Path Strategy on Surface and Dimension in High Speed Milling

Authors: A. Razavykia, A. Esmaeilzadeh, S. Iranmanesh

Abstract:

Many orthopedic implants, such as proximal humerus cases, require low surface roughness and almost immediate/short-lead-time surgery, so a rapid response from the manufacturer is crucial. The tool path strategy of the milling process has a direct influence on the surface roughness and lead time of a medical implant. High-speed milling is a promising process for improving machined surface quality, but conventional or super-abrasive grinding is still required, which imposes drawbacks such as additional cost and time. Currently, many CAD/CAM software packages offer different tool path strategies for milling free-form surfaces. Nevertheless, users must identify how to choose among the strategies according to cutting tool geometry, geometric complexity, and their effects on the machined surface. This study investigates the effect of different tool path strategies for milling a proximal humerus head during the finishing operation on stainless steel 316L. Experiments were performed using a MAHO MH700 S vertical milling machine and four machining strategies, namely spiral outward, spiral inward, radial, and zig-zag. In all cases, the obtained surfaces were analyzed in terms of roughness and dimensional accuracy and compared with those obtained by simulation. The findings provide evidence that surface roughness, dimensional accuracy, and machining time are all affected by the chosen tool path strategy.

Keywords: CAD/CAM software, milling, orthopedic implants, tool path strategy

Procedia PDF Downloads 210
1051 Commuters Trip Purpose Decision Tree Based Model of Makurdi Metropolis, Nigeria and Strategic Digital City Project

Authors: Emmanuel Okechukwu Nwafor, Folake Olubunmi Akintayo, Denis Alcides Rezende

Abstract:

Decision tree models are versatile and interpretable machine learning algorithms widely used for both classification and regression tasks, which can be related to cities, whether physical or digital. The aim of this research is to assess how well decision tree algorithms can predict trip purposes in Makurdi, Nigeria, while also exploring their connection to the strategic digital city initiative. The research methodology involves formalizing household demographic and trip information datasets obtained from an extensive survey process. Modelling and prediction were carried out in the Python programming language, and evaluation metrics such as R-squared and mean absolute error (MAE) were used to assess the decision tree algorithm's performance. The results indicate that the model performed well, with accuracies of 84% and 68% and low MAE values of 0.188 and 0.314 on training and validation data, respectively. This suggests the model can be relied upon for future prediction. The conclusion reiterates that this model will assist decision-makers, including urban planners, transportation engineers, government officials, and commuters, in making informed decisions on transportation planning and management within the framework of a strategic digital city. Its application will enhance the efficiency, sustainability, and overall quality of transportation services in Makurdi, Nigeria.
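The modelling step described above can be reproduced in outline with scikit-learn. In the sketch below, the file name, feature names, target encoding and train/validation split are assumptions for illustration, not the study's actual dataset.

```python
# Hedged sketch of a decision-tree trip-purpose classifier (illustrative features only).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, mean_absolute_error

# Hypothetical household-survey dataframe: demographic attributes plus an encoded trip purpose.
df = pd.read_csv("makurdi_trips.csv")                                 # assumed file name
X = df[["age", "income", "household_size", "car_ownership"]]         # assumed predictors
y = df["trip_purpose_code"]                                           # assumed integer-encoded trip purpose

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=42)
model = DecisionTreeClassifier(max_depth=6, random_state=42).fit(X_train, y_train)

for name, Xs, ys in [("train", X_train, y_train), ("validation", X_val, y_val)]:
    pred = model.predict(Xs)
    print(name, "accuracy:", accuracy_score(ys, pred),
          "MAE (on encoded labels):", mean_absolute_error(ys, pred))
```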

Keywords: decision tree algorithm, trip purpose, intelligent transport, strategic digital city, travel pattern, sustainable transport

Procedia PDF Downloads 11
1050 Integrating Dependent Material Planning Cycle into Building Information Management: A Building Information Management-Based Material Management Automation Framework

Authors: Faris Elghaish, Sepehr Abrishami, Mark Gaterell, Richard Wise

Abstract:

Collaboration and integration between all building information management (BIM) processes and tasks are necessary to ensure that all project objectives can be delivered. A literature review was used to explore state-of-the-art BIM technologies for managing construction materials, as well as the challenges faced by the construction process when using traditional methods. This paper therefore aims to articulate a framework that integrates traditional material planning methods, such as ABC analysis (the Pareto principle), to analyse and categorise project materials, together with independent material planning methods such as Economic Order Quantity (EOQ) and Fixed Order Point (FOP), into the BIM 4D and 5D capabilities, in order to articulate a dependent material planning cycle within BIM that relies on the constructability method. Moreover, we build a model to connect the material planning outputs with the BIM 4D and 5D data to ensure that all project information is accurately presented through integrated and complementary BIM reporting formats. Furthermore, this paper presents a method to integrate the risk management output with the material management process to ensure that all critical materials are monitored and managed throughout all project stages. The paper includes browsers that are proposed to be embedded in any 4D BIM platform in order to predict the EOQ as well as the FOP and to alert the user during the construction stage. This enables the planner to check the status of materials on site as well as to receive an alert when a new order should be requested. This leads to managing all project information in a single context and avoids missing information at the early design stage. Subsequently, the planner will be capable of building a more reliable 4D schedule by allocating the categorised materials with the required EOQ and checking the optimum locations for inventory and the temporary construction facilities.
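The independent planning quantities referred to above follow standard inventory formulas. A minimal sketch of how an embedded 4D BIM browser might compute them is given below; the demand, cost and lead-time figures are illustrative assumptions, not project data.

```python
# Hedged sketch: Economic Order Quantity (EOQ) and Fixed Order Point (FOP / reorder point)
# for one material category, using the classical inventory formulas.
from math import sqrt

def eoq(annual_demand: float, order_cost: float, holding_cost: float) -> float:
    """Classical EOQ = sqrt(2 * D * S / H)."""
    return sqrt(2 * annual_demand * order_cost / holding_cost)

def reorder_point(daily_demand: float, lead_time_days: float, safety_stock: float = 0.0) -> float:
    """FOP = demand during lead time plus any safety stock."""
    return daily_demand * lead_time_days + safety_stock

# Illustrative figures for an 'A'-class material from the ABC analysis
D, S, H = 12_000, 150.0, 2.5        # units/year, cost per order, holding cost per unit per year
print("EOQ:", round(eoq(D, S, H)), "units")
print("FOP:", reorder_point(daily_demand=D / 365, lead_time_days=14, safety_stock=50), "units")
```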

Keywords: building information management, BIM, economic order quantity, EOQ, fixed order point, FOP, BIM 4D, BIM 5D

Procedia PDF Downloads 171
1049 Technological Developments to Reduce Wind Blade Turbine Levelized Cost of Energy

Authors: Pedro Miguel Cardoso Carneiro, Ricardo André Nunes Borges, João Pedro Soares Loureiro, Hermínio Maio Graça Fernandes

Abstract:

Wind energy has been growing exponentially over recent years and will allow countries to progress towards the decarbonization objective. In parallel, maintenance activities have also been increasing as a consequence of the ageing and deterioration of wind farms. The time available for wind blade maintenance is given by the weather window, which is based on weather conditions: most wind blade repair and maintenance activities require a narrow window of temperature and humidity. Due to this limitation, current weather windows mean that only approximately 35% of days per year are used for maintenance, which takes place mostly during summertime. This limitation creates large economic losses in the energy production of the wind towers, since they can be inoperative, or have their energy output reduced, for days or weeks due to existing damage. Another important aspect is that maintenance costs are higher due to the high standby time and the seasonality imposed on the technicians. To reduce the relevant maintenance costs of blades and the energy losses, several technological developments were carried out to significantly improve this reality. The focus of this activity was to develop a series of key developments so that, in the near future, suspended access equipment can operate in harsh conditions: wind, rain, and cold or hot environments. To this end, we have identified key areas that need to be revised and require new solutions: a habitat system, a multi-configurable roof and floor, roof and floor interfaces to the blade, and secondary attachment solutions to the blade and to the tower. In this paper we describe the advances produced during a national R&D project carried out in partnership with an end-user (Onrope) and a test center (ISQ).

Keywords: wind turbine maintenance, cost reduction, technological innovations, wind turbine blade

Procedia PDF Downloads 90
1048 Characterization and Nanostructure Formation of Banana Peels Nanosorbent with Its Application

Authors: Opeyemi Atiba-Oyewo, Maurice S. Onyango, Christian Wolkersdorfer

Abstract:

The characterization and nanostructure formation of banana peels as a sorbent material are described in this paper. The transformation of this agricultural waste via mechanical milling, to enhance properties such as microstructure and surface area for water pollution control and other applications, was studied. Mechanical milling was carried out in a planetary continuous milling machine with ethanol as a milling solvent, and samples were taken at time intervals between 10 h and 30 h to examine the structural changes. The samples were characterised by X-ray diffraction (XRD), scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), transmission electron microscopy (TEM) and Brunauer-Emmett-Teller (BET) analysis. Results revealed three typical structures with different deformation mechanisms and grain sizes within the range of 71-12 nm, with the particles and fibres showing a nanostructure. The particle size decreased from 65 µm to 15 nm as the milling progressed over a period of 30 h. The morphological properties of the materials indicated that the particle shapes become regular and uniform as the milling progresses. Furthermore, particle fracturing resulted in a surface area increase from 1.0694 to 4.5547 m²/g. The functional groups responsible for the banana peels' capacity to coordinate and remove metal ions, such as the carboxylic and amine groups, were identified at absorption bands of 1730 and 889 cm⁻¹, respectively. However, the choice of this sorbent material for sorption or any other application will depend on the composition of the pollutant to be removed.

Keywords: characterization, nanostructure, nanosorbent, eco-friendly, banana peels, mechanical milling, water quality

Procedia PDF Downloads 281
1047 Mining User-Generated Contents to Detect Service Failures with Topic Model

Authors: Kyung Bae Park, Sung Ho Ha

Abstract:

Online user-generated content (UGC) significantly changes the way customers behave (e.g., shop, travel), and the pressing need to handle the overwhelming amount of varied UGC is one of the paramount issues for management. However, current approaches (e.g., sentiment analysis) are often ineffective at leveraging textual information to detect the problems or issues from which a given business suffers. In this paper, we apply text mining with Latent Dirichlet Allocation (LDA) to a popular online review site dedicated to user complaints. We find that LDA efficiently detects customer complaints, and further inspection with a visualization technique is effective for categorizing the problems or issues. As such, management can identify the issues at stake and prioritize them accordingly in a timely manner given a limited amount of resources. The findings provide managerial insights into how analytics on social media can help firms maintain and improve their reputation management. Our interdisciplinary approach also highlights several insights gained by applying machine learning techniques in the marketing research domain. On a broader technical note, this paper illustrates the details of how to implement LDA in R from beginning (data collection in R) to end (LDA analysis in R), since such instruction is still largely undocumented. In this regard, it will help lower the barrier for interdisciplinary researchers to conduct related research.
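The paper documents an end-to-end R workflow. Purely as an analogous illustration (not the authors' toolchain), the core LDA step can be sketched in Python with scikit-learn; the review texts and topic count below are placeholders.

```python
# Hedged Python analogue of the LDA topic-modelling step (the study itself uses R).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = ["the room was dirty and staff unhelpful",        # placeholder complaint texts
           "flight delayed twice and no refund offered",
           "billing error charged me three times"]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(reviews)                      # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)   # assumed topic count

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}:", ", ".join(top))                     # inspect topics to label service failures
```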

Keywords: latent dirichlet allocation, R program, text mining, topic model, user generated contents, visualization

Procedia PDF Downloads 185
1046 Sustainable Crop Mechanization among Small Scale Rural Farmers in Nigeria: The Hurdles

Authors: Charles Iledun Oyewole

Abstract:

The daunting challenges that the 'man with the hoe' will face in the coming decades are complex and interwoven. With the global population already above 7 billion people, it has been estimated that food (crop) production must more than double by 2050 to meet the world's food requirements. Nigeria's population is also expected to reach over 240 million people by 2050 at the current annual population growth rate of 2.61 per cent. The country's farming population is estimated at over 65 per cent, yet the country still depends on food importation to complement production. The small-scale farmer, who depends on simple hand tools (hoes and cutlasses), remains the centre of agricultural production, accounting for 90 per cent of total agricultural output and 80 per cent of market flow. While the hoe may have been a tool for sustainable development at one time in human history, this role has been smothered by population growth, which has brought too many mouths to be fed (over 170 million) as well as many industries to supply with raw materials. It may then be argued that the hoe is, unfortunately, not a tool for the coming challenges and that agricultural mechanization should be the focus. However, agriculture as an enterprise is a 'complete wheel' which does not work when broken, particularly with respect to mechanization. Generally, mechanization will prompt increased production where land is readily available; increased production will require post-harvest handling mechanisms, crop processing and subsequent storage. An important aspect of this is readily available and favourable markets for such produce, fuelled by good agricultural policies. A break in this wheel will lead to the process of mechanization crashing back to subsistence production, and probably a reversal to the hoe. The focus of any agricultural policy should be to chart a course for sustainable mechanization that is environmentally friendly and that may ameliorate Nigeria's food and raw material gaps. This is the focal point of this article.

Keywords: crop production, farmer, hoes, mechanization, policy framework, population growth, rural areas

Procedia PDF Downloads 214
1045 Thermo-Mechanical Properties of PBI Fiber Reinforced HDPE Composites: Effect of Fiber Length and Composition

Authors: Shan Faiz, Arfat Anis, Saeed M. Al-Zarani

Abstract:

High-density polyethylene (HDPE) and polybenzimidazole (PBI) fiber composites were prepared by melt blending in a twin screw extruder (TSE). The thermo-mechanical properties of PBI fiber reinforced HDPE composite samples (1%, 4% and 8% fiber content) with fiber lengths of 3 mm and 6 mm were investigated using differential scanning calorimetry (DSC), a universal testing machine (UTM), a rheometer and scanning electron microscopy (SEM). The effect of fiber content and fiber length on the thermo-mechanical properties of the HDPE-PBI composites was studied. The DSC analysis showed a decrease in the crystallinity of the HDPE-PBI composites with increasing fiber loading; the maximum decrease observed was 12% at 8% fiber loading. The thermal stability was found to increase with the addition of fiber, with T50% notably increased to 40 °C for both grades of HDPE at 8% fiber content. The mechanical properties were not much affected by the increase in fiber content: the optimum tensile strength was achieved at 4% fiber content, with a slight increase of 9% observed, and no noticeable change was observed in flexural strength. In the rheology study, the complex viscosities of the HDPE-PBI composites were higher than that of the HDPE matrix and increased substantially even at the minimum PBI fiber loading of 1%. We found that the addition of PBI fiber resulted in a modest improvement in the thermal stability and mechanical properties of the prepared composites.

Keywords: PBI fiber, high density polyethylene, composites, melt blending

Procedia PDF Downloads 362
1044 Computational Fluid Dynamics Simulation of Reservoir for Dwell Time Prediction

Authors: Nitin Dewangan, Nitin Kattula, Megha Anawat

Abstract:

The hydraulic reservoir is a key component in mobile construction vehicles; most off-road earth-moving construction machinery requires large side-mounted hydraulic reservoirs. Reservoir construction is highly non-uniform, as designers use such designs to exploit the space available under the vehicle. Apart from virtual simulation, there is no way to determine how well the oil utilizes the reservoir space or whether the design is valid. Computational fluid dynamics (CFD) helps to predict reservoir space utilization through vortex mapping, path line plots and dwell time prediction, to make sure the design is valid and efficient for the vehicle. The dwell time acceptance criterion for an effective reservoir design is 15 seconds. This paper describes a hydraulic reservoir simulation carried out with the CFD tool AcuSolve using an automated mesh strategy. Free surface flow and a moving reference mesh are used to define the oil flow level inside the reservoir. The first baseline design was not able to meet the acceptance criterion (its dwell time was below 15 seconds) because the oil entry and exit ports were very close together. CFD was used to redefine the port locations so that the oil dwell time in the reservoir increases, and the analysis also suggested a baffle design for effective space utilization. The final design proposed through the CFD analysis was used for physical validation on the machine.
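As a back-of-the-envelope companion to the 15-second acceptance criterion, the ideal dwell (residence) time can be approximated as usable oil volume divided by flow rate. The figures in the sketch below are illustrative assumptions; the actual CFD prediction additionally accounts for vortices and short-circuiting between ports.

```python
# Hedged sketch: first-order dwell-time estimate for a hydraulic reservoir.
def dwell_time_s(oil_volume_l: float, flow_rate_l_per_min: float) -> float:
    """Ideal residence time in seconds, assuming the full oil volume is swept by the flow."""
    return oil_volume_l / flow_rate_l_per_min * 60.0

# Illustrative reservoir: 90 L of usable oil, pump drawing 250 L/min
t = dwell_time_s(oil_volume_l=90, flow_rate_l_per_min=250)
print(f"ideal dwell time: {t:.1f} s -> {'meets' if t >= 15 else 'fails'} the 15 s criterion")
```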

Keywords: reservoir, turbulence model, transient model, level set, free-surface flow, moving frame of reference

Procedia PDF Downloads 148
1043 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder

Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh

Abstract:

In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for identifying activities in the human brain remains a major challenge because of the random nature of the signals. The feature extraction method is a key issue in solving this problem. Finding features that give distinctive pictures for different activities yet remain similar for the same activity is very difficult, especially as the number of activities grows. Classifier accuracy depends on the quality of this feature set. Further, a larger number of features results in high computational complexity, while too few features compromise performance. In this paper, a novel approach to selecting an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a large number of features are extracted from the EEG signals and fed to a deep autoencoder neural network. The autoencoder encodes the input features into a small set of codes. To avoid the vanishing gradient problem and the need for dataset normalization, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between the encoder input and the decoder output. To reduce the feature set to a smaller one, 4 hidden layers are considered in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined into the responses of the hidden layer. The results reveal that higher accuracy can be achieved using the optimal reduced features. The proposed HO-DAE is also compared with a regular autoencoder to test the performance of both. The performance of the proposed method is validated and compared with two other methods recently reported in the literature, which reveals that the proposed method is far better than the other two in terms of classification accuracy.
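A minimal sketch of the autoencoder reduction stage is given below in Keras. The feature dimension, code size, layer sizes and plain Adam/MSE training are assumptions for illustration; the paper instead tunes the network with a meta-heuristic search that minimizes the MSE between encoder input and decoder output.

```python
# Hedged sketch: a deep autoencoder that compresses an EEG feature vector into a small code.
from tensorflow import keras
from tensorflow.keras import layers

n_features, code_size = 512, 32          # assumed sizes of the extracted feature vector and code

inputs = keras.Input(shape=(n_features,))
x = layers.Dense(256, activation="relu")(inputs)
x = layers.Dense(128, activation="relu")(x)
code = layers.Dense(code_size, activation="relu", name="code")(x)   # reduced feature set
x = layers.Dense(128, activation="relu")(code)
x = layers.Dense(256, activation="relu")(x)
outputs = layers.Dense(n_features)(x)                               # linear reconstruction

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")    # MSE between encoder input and decoder output
# autoencoder.fit(X, X, epochs=50, batch_size=32)    # X: (n_samples, n_features) EEG feature matrix
encoder = keras.Model(inputs, code)                  # encoder(X) yields the reduced features
```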

Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization

Procedia PDF Downloads 111
1042 Development of a Software System for Management and Genetic Analysis of Biological Samples for Forensic Laboratories

Authors: Mariana Lima, Rodrigo Silva, Victor Stange, Teodiano Bastos

Abstract:

Due to the high reliability achieved by DNA tests since the 1980s, this kind of test has allowed the identification of a growing number of criminal cases, including old unsolved cases that now have a chance of being solved with this technology. Currently, the use of genetic profiling databases is a typical method of increasing the scope of genetic comparison. Forensic laboratories must process, analyze, and generate genetic profiles from a growing number of samples, which requires time and great storage capacity. Therefore, it is essential to develop methodologies, supported by software tools, that can organize the workflow and minimize the time spent on both biological sample processing and the analysis of genetic profiles. Thus, the present work aims at the development of a software system for forensic genetics laboratories that allows sample, criminal case and local database management, minimizes the time spent in the workflow, and helps to compare genetic profiles. For the development of this software system, all data related to the storage and processing of samples, the workflows, and the requirements incorporated into the system have been considered. The system uses HTML, CSS, and JavaScript as web technologies, with the Node.js platform as the server, which offers great efficiency in data input and output. In addition, the data are stored in a relational database (MySQL), which is free, allowing better acceptance by users. The software system developed here brings more agility to the workflow and analysis of samples, contributing to the rapid insertion of genetic profiles into the national database and to increased crime resolution. The next step of this research is its validation, in order to operate in accordance with current Brazilian national legislation.

Keywords: database, forensic genetics, genetic analysis, sample management, software solution

Procedia PDF Downloads 367
1041 A Real-Time Snore Detector Using Neural Networks and Selected Sound Features

Authors: Stelios A. Mitilineos, Nicolas-Alexander Tatlas, Georgia Korompili, Lampros Kokkalas, Stelios M. Potirakis

Abstract:

Obstructive Sleep Apnea Hypopnea Syndrome (OSAHS) is a widespread chronic disease that mostly remains undetected, mainly because it is diagnosed via polysomnography, which is a time- and resource-intensive procedure. Screening the disease's symptoms at home could be used as an alternative approach to alert individuals who potentially suffer from OSAHS without compromising their everyday routine. Since snoring is usually linked to OSAHS, developing a snore detector is appealing as an enabling technology for screening OSAHS at home using ubiquitous equipment like commodity microphones (included in, e.g., smartphones). In this context, this study developed a snore detection tool, and we herein present the approach and the selection of specific sound features that discriminate snoring from environmental sounds, as well as the performance of the proposed tool. Furthermore, a Real-Time Snore Detector (RTSD) is built upon the snore detection tool and employed on whole-night sleep sound recordings, resulting in a large dataset of snoring sound excerpts that are made freely available to the public. The RTSD may be used either as a stand-alone tool that offers insight into an individual's sleep quality or as an independent component of OSAHS screening applications in future developments.
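As a rough illustration of such a detection pipeline, MFCCs stand in below for the paper's selected sound features and a small multilayer perceptron for its neural network; file names and labels are placeholders, not the published dataset.

```python
# Hedged sketch: snore vs. non-snore classification of sound excerpts.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_features(path: str) -> np.ndarray:
    """Mean MFCC vector of one audio excerpt (stand-in for the paper's selected features)."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Placeholder training data: lists of labelled excerpt paths
snore_files, noise_files = ["snore_001.wav"], ["ambient_001.wav"]
X = np.array([mfcc_features(p) for p in snore_files + noise_files])
y = np.array([1] * len(snore_files) + [0] * len(noise_files))

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(X, y)
print(clf.predict([mfcc_features("night_segment.wav")]))   # 1 = snore detected in the excerpt
```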

Keywords: obstructive sleep apnea hypopnea syndrome, apnea screening, snoring detection, machine learning, neural networks

Procedia PDF Downloads 204
1040 A Ku/K Band Power Amplifier for Wireless Communication and Radar Systems

Authors: Meng-Jie Hsiao, Cam Nguyen

Abstract:

Wide-band devices in the Ku band (12-18 GHz) and K band (18-27 GHz) have received significant attention for high-data-rate communications and high-resolution sensing. In particular, devices operating around 24 GHz are attractive due to the 24-GHz unlicensed applications. One of the most important components in RF systems is the power amplifier (PA). Various PAs have been developed in the Ku and K bands in GaAs, InP, and silicon (Si) processes. Although PAs in GaAs or InP processes can have better power handling and efficiency than those realized on Si, it is very hard to integrate an entire system on the same substrate in GaAs or InP; Si, on the other hand, facilitates single-chip systems. Hence, good PAs on Si substrates are desirable; in particular, a Si-based PA with good linearity is necessary for next-generation communication protocols implemented on Si. We report a 16.5-25.5 GHz Si-based PA with a flat saturated power of 19.5 ± 1.5 dBm, an output 1-dB compression power (OP1dB) of 16.5 ± 1.5 dBm, and 15-23% power-added efficiency (PAE). The PA consists of a drive amplifier, two main amplifiers, and lumped-element Wilkinson power dividers and combiners, designed and fabricated in the TowerJazz 0.18 µm SiGe BiCMOS process, which has a unity power gain frequency (fMAX) of more than 250 GHz. The PA is realized as a cascode amplifier implementing both heterojunction bipolar transistor (HBT) and n-channel metal-oxide-semiconductor field-effect transistor (NMOS) devices for gain, frequency response, and linearity considerations. In particular, a body-floating technique is utilized for the NMOS devices to improve the voltage swing and eliminate parasitic capacitances. The developed PA has a measured flat gain of 20 ± 1.5 dB across 16.5-25.5 GHz. At 24 GHz, the saturated power, OP1dB, and maximum PAE are 20.8 dBm, 18.1 dBm, and 23%, respectively. Its high performance makes it attractive for use in Ku/K-band, especially 24 GHz, communication and radar systems. This work was made possible by NPRP grant # 6-241-2-102 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.

Keywords: power amplifiers, amplifiers, communication systems, radar systems

Procedia PDF Downloads 105
1039 Sensitivity and Uncertainty Analysis of One Dimensional Shape Memory Alloy Constitutive Models

Authors: A. B. M. Rezaul Islam, Ernur Karadogan

Abstract:

Shape memory alloys (SMAs) are known for their shape memory effect and pseudoelastic behavior. Their thermomechanical behavior has been modeled by numerous researchers from both microscopic thermodynamic and macroscopic phenomenological points of view. The Tanaka, Liang-Rogers and Ivshin-Pence models are some of the most popular macroscopic phenomenological SMA constitutive models. They describe SMA behavior in terms of stress, strain and temperature. These models involve material parameters that carry associated uncertainty. At different operating temperatures, this uncertainty propagates to the output when the material is subjected to loading followed by unloading, and such propagation in real-life applications can result in performance discrepancies or failure at extreme conditions. To address this, we used a probabilistic approach to perform sensitivity and uncertainty analyses of the Tanaka, Liang-Rogers, and Ivshin-Pence models. The Sobol and extended Fourier Amplitude Sensitivity Testing (eFAST) methods were used to perform the sensitivity analysis for simulated isothermal loading/unloading at various operating temperatures. The results show that the models vary with changes in operating temperature and loading condition. The average and stress-dependent sensitivity indices identify the most significant parameters at several temperatures. This work presents the sensitivity and uncertainty analysis results and compares them across different temperatures and loading conditions for all three models. The analysis presented will aid in designing engineering applications by eliminating the probability of model failure due to uncertainty in the input parameters. Thus, a proper understanding of the sensitive parameters and of uncertainty propagation at several operating temperatures and loading conditions is recommended for the Tanaka, Liang-Rogers, and Ivshin-Pence models.
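A minimal sketch of the Sobol part of such an analysis using the SALib package is shown below. The parameter names, bounds and the toy one-dimensional response function are illustrative assumptions, not the actual Tanaka/Liang-Rogers/Ivshin-Pence implementations.

```python
# Hedged sketch: Sobol sensitivity indices for a toy SMA-like scalar response (SALib).
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {                       # illustrative parameters and bounds, not the models' actual values
    "num_vars": 3,
    "names": ["Ms", "Af", "CM"],  # martensite start, austenite finish, stress-influence coefficient
    "bounds": [[250.0, 290.0], [300.0, 340.0], [5.0, 10.0]],
}

def toy_response(x: np.ndarray) -> float:
    """Placeholder scalar output standing in for a constitutive-model prediction."""
    Ms, Af, CM = x
    return (Af - Ms) / CM

X = saltelli.sample(problem, 1024)                  # Saltelli sampling for Sobol analysis
Y = np.array([toy_response(x) for x in X])
Si = sobol.analyze(problem, Y)
print("first-order indices:", Si["S1"])             # contribution of each parameter alone
print("total-order indices:", Si["ST"])             # contribution including interactions
```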

Keywords: constitutive models, FAST sensitivity analysis, sensitivity analysis, sobol, shape memory alloy, uncertainty analysis

Procedia PDF Downloads 139
1038 A Hybrid Genetic Algorithm and Neural Network for Wind Profile Estimation

Authors: M. Saiful Islam, M. Mohandes, S. Rehman, S. Badran

Abstract:

The increasing need for wind power requires precise knowledge of wind resources, and methodical investigation of potential locations is required for wind power deployment. High penetration of wind energy into the grid is leading to multi-megawatt installations with huge investment costs, which makes it essential to determine appropriate places for wind farm operation. For accurate assessment, detailed examination of the wind speed profile, relative humidity, temperature and other geological or atmospheric parameters is required. Among all the uncertainty factors influencing wind power estimation, vertical extrapolation of wind speed is perhaps the most difficult and critical one. Different approaches have been used for the extrapolation of wind speed to hub height, mainly based on the log law, the power law and various modifications of the two. This paper proposes an Artificial Neural Network (ANN) and Genetic Algorithm (GA) based hybrid model, namely GA-NN, for vertical extrapolation of wind speed. This model is very simple in the sense that it does not require any parametric estimates such as the wind shear coefficient, roughness length or atmospheric stability, and it is also reliable compared to other methods. The model uses available measured wind speeds at 10 m, 20 m and 30 m heights to estimate wind speeds up to 100 m. Good agreement is found between measured and estimated wind speeds at 30 m and 40 m, with approximately 3% mean absolute percentage error. Comparisons with an ANN and the power law further prove the feasibility of the proposed method.
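For context, the conventional baseline that the hybrid model is compared against is the power law. A short sketch of power-law extrapolation from a 10 m measurement follows, with an assumed shear exponent; the GA-NN model itself learns the mapping from the 10-30 m measurements without such a parameter.

```python
# Hedged sketch: power-law vertical extrapolation of wind speed (the baseline the GA-NN is compared to).
def power_law(v_ref: float, z_ref: float, z: float, alpha: float = 0.14) -> float:
    """v(z) = v_ref * (z / z_ref) ** alpha; alpha = 0.14 is a commonly assumed shear exponent."""
    return v_ref * (z / z_ref) ** alpha

v10 = 5.2                                 # measured wind speed at 10 m (illustrative)
for z in (30, 40, 100):
    print(f"estimated speed at {z} m: {power_law(v10, 10, z):.2f} m/s")
```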

Keywords: wind profile, vertical extrapolation of wind, genetic algorithm, artificial neural network, hybrid machine learning

Procedia PDF Downloads 485
1037 MIM and Experimental Studies of the Thermal Drift in an Ultra-High Precision Instrument for Dimensional Metrology

Authors: Kamélia Bouderbala, Hichem Nouira, Etienne Videcoq, Manuel Girault, Daniel Petit

Abstract:

Thermal drift caused by the power dissipated by the mechanical guiding systems constitutes the main limit to enhancing the accuracy of an ultra-high precision cylindricity measuring machine. For this reason, a high-precision compact prototype has been designed to simulate the behaviour of the instrument. It ensures in situ calibration of four capacitive displacement probes by comparison with four laser interferometers. The set-up includes three heating wires for simulating the power dissipated by the mechanical guiding systems, four additional heating wires located between each laser interferometer head and its respective holder, 19 platinum resistance thermometers (Pt100) to observe the temperature evolution inside the set-up, and four Pt100 sensors to monitor the ambient temperature. A Reduced Model (RM) based on the Modal Identification Method (MIM) was developed and optimized by comparison with the experimental results. Thereafter, time-dependent tests were performed under several conditions to measure the temperature variation at 19 fixed positions in the system and compare it to the calculated RM results. The RM results show good agreement with experiment and reproduce the temperature variations well, revealing the importance of the proposed RM for evaluating the thermal behaviour of the system.

Keywords: modal identification method (MIM), thermal behavior and drift, dimensional metrology, measurement

Procedia PDF Downloads 392
1036 Dutch Disease and Industrial Development: An Investigation of the Determinants of Manufacturing Sector Performance in Nigeria

Authors: Kayode Ilesanmi Ebenezer Bowale, Dominic Azuh, Busayo Aderounmu, Alfred Ilesanmi

Abstract:

There has been a debate among scholars and policymakers about the effects of oil exploration and production on industrial development. In Nigeria, many reforms have resulted in an increase in crude oil production in the recent past, and there is controversy over the importance of oil production for the development of the manufacturing sector: some scholars claim that oil has been a blessing to the development of the manufacturing sector, while others regard it as a curse. The objective of the study is to determine whether empirical analysis supports the presence of Dutch Disease and de-industrialisation in the Nigerian manufacturing sector between 2019 and 2022. The study employed data on manufactured exports, manufacturing employment, agricultural employment, and service employment sourced from the World Development Indicators, the Nigeria Bureau of Statistics, and the Central Bank of Nigeria Statistical Bulletin, in line with the theory of Dutch Disease. Unit root tests were used to establish the level of stationarity, the Engle-Granger cointegration test to check the long-run relationship, and the Autoregressive Distributed Lag (ARDL) bounds test was also applied. A Vector Error Correction Model was used to determine the speed of adjustment of manufactured exports and the resource movement effect. The results showed that the Nigerian manufacturing industry suffered from both direct and indirect de-industrialisation over the period. The findings also revealed resource movement, as labour moved away from the manufacturing sector to both the oil sector and the services sector. The study concluded that Dutch Disease was present in the manufacturing industry, and the problem of de-industrialisation led to the crowding out of manufacturing output. The study recommends that efforts be made to diversify the Nigerian economy and that a conducive business environment be provided to encourage more involvement of the private sector in the agriculture and manufacturing sectors of the economy.
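A minimal sketch of the unit-root and Engle-Granger steps with statsmodels is shown below; the file name and series names are placeholders, and the ARDL bounds test and VECM estimation are not reproduced here.

```python
# Hedged sketch: stationarity and Engle-Granger cointegration checks (statsmodels).
import pandas as pd
from statsmodels.tsa.stattools import adfuller, coint

df = pd.read_csv("nigeria_sector_data.csv", index_col="year")    # assumed file and columns
manuf_exports = df["manufactured_exports"]
oil_output = df["crude_oil_production"]

adf_stat, adf_p, *_ = adfuller(manuf_exports)                     # unit root test on the level series
print("ADF p-value, manufactured exports:", adf_p)                # > 0.05 suggests non-stationarity

coint_stat, coint_p, _ = coint(manuf_exports, oil_output)         # Engle-Granger two-step test
print("Engle-Granger p-value:", coint_p)                          # < 0.05 suggests a long-run relationship
```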

Keywords: Dutch disease, resource movement, manufacturing sector performance, Nigeria

Procedia PDF Downloads 75
1035 Artificial Intelligence Assisted Sentiment Analysis of Hotel Reviews Using Topic Modeling

Authors: Sushma Ghogale

Abstract:

With the surge of user-generated content, feedback, and reviews on the internet, it has become both possible and important to know consumers' opinions about products and services. These data are important for both potential customers and the businesses providing the services. Data from social media are attracting significant attention and have become the most prominent channel for expressing unregulated opinions. Prospective customers look for reviews from experienced customers before deciding to buy a product or service, and several websites provide a platform for users to post their feedback for the provider and for potential customers. However, the biggest challenge in analyzing such data lies in extracting latent features and providing term-level analysis. This paper proposes an approach that uses topic modeling to classify reviews into topics and sentiment analysis to mine the opinions. The approach can analyse and classify latent topics mentioned by reviewers on business sites, review sites, or social media, using topic modeling to identify the importance of each topic; this is followed by sentiment analysis to assess the satisfaction level for each topic. The approach provides a classification of hotel reviews using multiple machine learning techniques and compares different classifiers to mine the opinions in user reviews through sentiment analysis. The experiment concludes that the Multinomial Naïve Bayes classifier produces higher accuracy than the other classifiers.
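The classification comparison can be sketched with scikit-learn as follows. The review texts and sentiment labels are placeholders, and only the Multinomial Naïve Bayes branch of the comparison is shown.

```python
# Hedged sketch: bag-of-words sentiment classification of hotel reviews with Multinomial Naive Bayes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

reviews = ["great location and friendly staff",                  # placeholder labelled reviews
           "room was filthy and the wifi never worked",
           "breakfast was excellent, will come again",
           "terrible service, waited an hour to check in"]
labels = [1, 0, 1, 0]                                             # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
print(cross_val_score(clf, reviews, labels, cv=2).mean())         # accuracy to compare against other classifiers
```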

Keywords: latent Dirichlet allocation, topic modeling, text classification, sentiment analysis

Procedia PDF Downloads 94
1034 Bridge Health Monitoring: A Review

Authors: Mohammad Bakhshandeh

Abstract:

Structural Health Monitoring (SHM) is a crucial and necessary practice that plays a vital role in ensuring the safety and integrity of critical structures, and in particular, bridges. The continuous monitoring of bridges for signs of damage or degradation through Bridge Health Monitoring (BHM) enables early detection of potential problems, allowing for prompt corrective action to be taken before significant damage occurs. Although all monitoring techniques aim to provide accurate and decisive information regarding the remaining useful life, safety, integrity, and serviceability of bridges, understanding the development and propagation of damage is vital for maintaining uninterrupted bridge operation. Over the years, extensive research has been conducted on BHM methods, and experts in the field have increasingly adopted new methodologies. In this article, we provide a comprehensive exploration of the various BHM approaches, including sensor-based, non-destructive testing (NDT), model-based, and artificial intelligence (AI)-based methods. We also discuss the challenges associated with BHM, including sensor placement and data acquisition, data analysis and interpretation, cost and complexity, and environmental effects, through an extensive review of relevant literature and research studies. Additionally, we examine potential solutions to these challenges and propose future research ideas to address critical gaps in BHM.

Keywords: structural health monitoring (SHM), bridge health monitoring (BHM), sensor-based methods, machine-learning algorithms, model-based techniques, sensor placement, data acquisition, data analysis

Procedia PDF Downloads 87
1033 Measuring Greenhouse Gas Exchange from Paddy Field Using Eddy Covariance Method in Mekong Delta, Vietnam

Authors: Vu H. N. Khue, Marian Pavelka, Georg Jocher, Jiří Dušek, Le T. Son, Bui T. An, Ho Q. Bang, Pham Q. Huong

Abstract:

Agriculture is an important economic sector of Vietnam, the most widespread activity being wet rice cultivation. These activities are also known as a main contributor to the national greenhouse gas emissions. In order to understand more about greenhouse gas exchange in these activities and to investigate the factors influencing carbon cycling and sequestration in this type of ecosystem, the first eddy covariance station was installed in 2019 in a paddy field in Long An province, Mekong Delta. The station was equipped with state-of-the-art equipment for CO₂ and CH₄ gas exchange and micrometeorological measurements. In this study, data from the station were processed following the ICOS (Integrated Carbon Observation System) recommendations for CO₂, while CH₄ was manually processed and gap-filled using a random forest model from methane-gapfill-ml, a machine learning package, as there is no standard method for CH₄ flux gap-filling yet. Finally, the carbon equivalent (Cₑ) balance based on CO₂ and CH₄ fluxes was estimated. The results show that in 2020, even though a new water management practice, alternate wetting and drying, was applied to reduce methane emissions, the paddy field released 928 g Cₑ.m⁻².yr⁻¹; in 2021, this was reduced to 707 g Cₑ.m⁻².yr⁻¹. At the provincial level, rice cultivation activities in Long An, with a total area of 498,293 ha, released 4.6 million tons of Cₑ in 2020 and 3.5 million tons of Cₑ in 2021.
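The CH₄ gap-filling step can be illustrated with a generic random-forest regressor driven by micrometeorological variables. The file name and driver names below are assumptions (and assumed gap-free), and the actual workflow uses the methane-gapfill-ml package rather than this sketch.

```python
# Hedged sketch: gap-filling missing CH4 flux values with a random forest (generic illustration).
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("longan_paddy_fluxes.csv")                            # assumed half-hourly file
drivers = ["air_temp", "soil_temp", "water_level", "net_radiation"]    # assumed predictor variables

observed = df.dropna(subset=["ch4_flux"])                              # rows with measured CH4 flux
gaps = df[df["ch4_flux"].isna()]                                       # rows to be filled

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(observed[drivers], observed["ch4_flux"])
df.loc[df["ch4_flux"].isna(), "ch4_flux"] = rf.predict(gaps[drivers])  # filled series for annual sums
```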

Keywords: eddy covariance, greenhouse gas, methane, rice cultivation, Mekong Delta

Procedia PDF Downloads 139
1032 The Effect of Alkaline Treatment on Tensile Strength and Morphological Properties of Kenaf Fibres for Yarn Production

Authors: A. Khalina, K. Shaharuddin, M. S. Wahab, M. P. Saiman, H. A. Aisyah

Abstract:

This paper investigates the effect of alkali treatment on the mechanical properties of kenaf (Hibiscus cannabinus) fibre for the development of yarn. Two different fibre sources were used for the yarn production. Kenaf fibres were treated with sodium hydroxide (NaOH) at concentrations of 3, 6, 9, and 12% prior to the fibre opening process and tested for their tensile strength and Young's modulus. The selected fibres were then introduced to a fibre opener at three different opening process parameters, namely the speed of the roller feeder, the small drum, and the big drum. The diameter, surface morphology, and durability of the fibres with respect to the machine were characterized. The results show that the NaOH concentration has a strong effect on the fibre mechanical properties. The tensile strength and modulus of the treated fibres of both types improved significantly compared with the untreated fibres, especially at the optimum level of 6% NaOH; it is also worth highlighting that 6% NaOH is the optimum concentration for the alkaline treatment. The untreated fibres and the fibres treated with 6% NaOH were then introduced to the fibre opener, and it was found that the treated fibres produced a larger fibre diameter with better surface morphology compared to the untreated fibres. A higher speed parameter during opening was found to produce a higher yield of opened kenaf fibres.

Keywords: alkaline treatment, kenaf fibre, tensile strength, yarn production

Procedia PDF Downloads 241
1031 Rank-Based Chain-Mode Ensemble for Binary Classification

Authors: Chongya Song, Kang Yen, Alexander Pons, Jin Liu

Abstract:

In the field of machine learning, ensembles have been employed as a common methodology to improve performance over multiple base classifiers. However, true predictions are often canceled out by false ones during consensus due to a phenomenon called the "curse of correlation", which manifests as strong interference among the predictions produced by the base classifiers. In addition, existing practices are still not able to effectively mitigate the problem of imbalanced classification. Based on the analysis of our experimental results, we conclude that these two problems are caused by inherent deficiencies in the consensus approach. Therefore, we create an enhanced ensemble algorithm which adopts a designed rank-based chain-mode consensus to overcome the two problems. In order to evaluate the proposed ensemble algorithm, we employ the well-known benchmark dataset NSL-KDD (the improved version of the KDDCup99 dataset produced by the University of New Brunswick) to compare the proposed algorithm with 8 common ensemble algorithms. In particular, each compared ensemble classifier uses the same 22 base classifiers, so that the differences in terms of the improvements in accuracy and reliability over the base classifiers can be truly revealed. As a result, the proposed rank-based chain-mode consensus proves to be a more effective ensemble solution than the traditional consensus approach, outperforming the 8 ensemble algorithms by 20% on almost all compared metrics, which include accuracy, precision, recall, F1-score and area under the receiver operating characteristic curve.
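The authors' chain-mode consensus is specific to this paper and is not reproduced here. Purely as a generic illustration of rank-based aggregation, the sketch below converts base-classifier scores to ranks before combining them, which dampens the influence of strongly correlated, overconfident predictions.

```python
# Hedged sketch: generic rank-based score aggregation for binary classification
# (an illustration of rank-level consensus, not the paper's chain-mode algorithm).
import numpy as np
from scipy.stats import rankdata

def rank_consensus(prob_matrix: np.ndarray) -> np.ndarray:
    """prob_matrix: (n_classifiers, n_samples) predicted probabilities of the positive class.
    Each classifier's scores are replaced by their ranks, then the ranks are averaged."""
    ranks = np.vstack([rankdata(p) for p in prob_matrix])      # per-classifier rank of each sample
    avg_rank = ranks.mean(axis=0)
    return avg_rank / prob_matrix.shape[1]                     # normalized consensus score in (0, 1]

# Illustrative scores from three base classifiers on four samples
probs = np.array([[0.9, 0.2, 0.6, 0.4],
                  [0.8, 0.3, 0.7, 0.1],
                  [0.6, 0.4, 0.9, 0.2]])
print(rank_consensus(probs))           # threshold (e.g., at 0.5) to obtain final binary labels
```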

Keywords: consensus, curse of correlation, imbalance classification, rank-based chain-mode ensemble

Procedia PDF Downloads 133
1030 Using Seismic and GPS Data for Hazard Estimation in Some Active Regions in Egypt

Authors: Abdel-Monem Sayed Mohamed

Abstract:

Egypt's rapidly growing development is accompanied by increasing standards of living, particularly in its urban areas. However, there is limited experience in quantifying the sources of risk in Egypt and in designing efficient strategies to mitigate the serious impacts of earthquakes. From the historical point of view and from recent instrumental records, there are several seismically active regions in Egypt where significant earthquakes have occurred in different places. Given the special tectonic features of Egypt, the Aswan, Greater Cairo, Red Sea and Sinai Peninsula regions are territories of high seismic risk, which have to be monitored by up-to-date technologies. The investigation and interpretation of the seismic events led to an evaluation of the seismic hazard for disaster prevention and for the safety of densely populated regions and vital national projects such as the High Dam. In addition to the monitoring of recent crustal movements, the powerful satellite geodesy technique GPS is used, with geodetic networks covering these seismically active regions. The results from the datasets are compared and combined in order to determine the main characteristics of the deformation and to estimate the hazard for the specified regions. The final output compiled from the seismological and geodetic analyses throws light upon the geodynamic regime of these seismically active regions and places Aswan and Greater Cairo in the lowest class according to horizontal crustal strain classifications. This work will serve as a basis for the development of so-called catastrophe models and can be further used for catastrophe risk management. The work also attempts to evaluate the risk of large catastrophic losses within the important regions, including the High Dam, strategic buildings and archaeological sites. Studies of possible earthquake and loss scenarios are a critical issue for decision making in insurance as part of mitigation measures.

Keywords: b-value, Gumbel distribution, seismic and GPS data, strain parameters

Procedia PDF Downloads 456
1029 2.4 GHz 0.13µM Multi Biased Cascode Power Amplifier for ISM Band Wireless Applications

Authors: Udayan Patankar, Shashwati Bhagat, Vilas Nitneware, Ants Koel

Abstract:

An ISM band power amplifier is a type of electronic amplifier used to convert a low-power radio-frequency signal into a larger signal of significant power, typically for driving the antenna of a transmitter. Drastic changes across telecommunication generations lead to requirements for improvement. Rapid changes in communication have led to the wide adoption of WLAN technology for its excellent characteristics, such as high transmission speed, long communication distance, and high reliability. Many applications such as WLAN, Bluetooth, and ZigBee have evolved in the 2.4 GHz to 5 GHz ISM bands, in which the power amplifier (PA) is a key building block of RF transmitters. There are many manufacturing processes available for fabricating a power amplifier with a desired power output, but the major problem they face is the power consumed for proper operation, as many such amplifiers are fabricated in GaN HEMT or BiCMOS processes. In this paper we present a CMOS-based two-stage cascode design of a power amplifier working in the 2.4 GHz ISM frequency band. To lower costs and allow full integration of a complete System-on-Chip (SoC), we have chosen a 0.13 µm low-power CMOS technology for the design. When designing a power amplifier, it is a real task to achieve higher power efficiency with minimum resources. This design showcases a multi-biased cascode methodology to implement a two-stage CMOS power amplifier using the ADS and LTspice simulation tools. The main supply is a maximum of 2.4 V, which is internally distributed to the different biasing points (VB driving and VB driven) as required for the distinct stages of the two-stage RF power amplifier. The design shows a maximum power-added efficiency of about 70.195%, whereas the power-added efficiency calculated at the 1 dB compression point is 44.669%. Biased MOSFETs are used to reduce the total DC current, as this circuit is designed for different wireless applications in the 2.4 GHz ISM band.
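For reference, power-added efficiency follows the standard definition PAE = (P_out − P_in)/P_DC. A short sketch with illustrative power levels (not the measured values of this design) is given below.

```python
# Hedged sketch: power-added efficiency from powers given in dBm and a DC supply power in watts.
def dbm_to_w(p_dbm: float) -> float:
    """Convert a power level in dBm to watts."""
    return 10 ** (p_dbm / 10) / 1000.0

def pae_percent(p_out_dbm: float, p_in_dbm: float, p_dc_w: float) -> float:
    """PAE = (Pout - Pin) / Pdc * 100."""
    return (dbm_to_w(p_out_dbm) - dbm_to_w(p_in_dbm)) / p_dc_w * 100.0

# Illustrative operating point (not the reported measurements)
print(f"PAE: {pae_percent(p_out_dbm=19.5, p_in_dbm=0.0, p_dc_w=0.15):.1f} %")
```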

Keywords: RFIC, PAE, RF CMOS, impedance matching

Procedia PDF Downloads 219
1028 Vascular Crossed Aphasia in Dextrals: A Study on Bengali-Speaking Population in Eastern India

Authors: Durjoy Lahiri, Vishal Madhukar Sawale, Ashwani Bhat, Souvik Dubey, Gautam Das, Biman Kanti Roy, Suparna Chatterjee, Goutam Gangopadhyay

Abstract:

Crossed aphasia has been an area of considerable interest for cognitive researchers as it offers a fascinating insight into cerebral lateralization of language function. We conducted an observational study in the stroke unit of a tertiary care neurology teaching hospital in eastern India on subjects with crossed aphasia over a period of four years. During the study period, we detected twelve cases of crossed aphasia caused by ischemic stroke in strongly right-handed patients. The age, gender, vernacular language and educational status of the patients were noted. Aphasia type and severity were assessed using the validated Bengali version of the Western Aphasia Battery. Computed tomography, magnetic resonance imaging and angiography were used to evaluate the location and extent of the ischemic lesion in the brain. Our series of 12 cases of crossed aphasia included 7 males and 5 females, with a mean age of 58.6 years. Eight patients were found to have Broca's aphasia, 3 had transcortical motor aphasia and 1 patient suffered from global aphasia. Nine patients had very severe aphasia and 3 suffered from mild aphasia. The mirror-image type of crossed aphasia was found in 3 patients, whereas 9 had the anomalous variety. In our study, crossed aphasia was found to be more frequent in males, and the anomalous pattern was more common than the mirror-image pattern. The majority of the patients had motor-type aphasia, and no patient was found to have a pure comprehension deficit. We hypothesize that in the Bengali-speaking right-handed population, the lexical-semantic system of the language network remains loyal to the left hemisphere even if the phonological output system is anomalously located in the right hemisphere.

Keywords: aphasia, crossed, lateralization, language function, vascular

Procedia PDF Downloads 185
1027 Client Hacked Server

Authors: Bagul Abhijeet

Abstract:

Background: The client-server model is the backbone of today's internet communication, in which a normal user cannot have control over a particular website or server. Using the same processing model, however, one can gain unauthorized access to a particular server. In this paper, we discuss an application scenario of hacking a simple website or server that consists of gaining unauthorized access to the server database. This application autonomously takes direct access to a simple website or server and retrieves all essential information maintained by the administrator. In this system, the IP address of the server is given as input to retrieve the user-id and password of the server. This breaks the administrative security of the server and acquires control of the server database, whereas a virus helps to escape server security by crashing the whole server. Objective: To control malicious attacks, protect government websites, and uncover illegal hacker activity. Results: After implementing different hacking as well as non-hacking techniques, this system hacks simple websites with normal security credentials. It provides access to the server database and allows an attacker to perform database operations from a client machine. Experiments with this application on different servers provided satisfactory results as required. Conclusion: In this paper, we have presented an approach to hacking a server which includes both hacking and non-hacking methods. These algorithms and methods provide an efficient way to access a server database; by exposing breaks in network security, they allow a new and better security framework to be introduced. The term 'hacking' should not only be considered in the context of illegal activities but should also be used to strengthen our global network.

Keywords: hacking, vulnerabilities, dummy request, virus, server monitoring

Procedia PDF Downloads 249
1026 Temperature Evolution, Microstructure and Mechanical Properties of Heat-Treatable Aluminum Alloy Welded by Friction Stir Welding: Comparison with Tungsten Inert Gas

Authors: Saliha Gachi, Mouloud Aissani, Fouad Boubenider

Abstract:

Friction stir welding (FSW) is a solid-state welding technique that can join materials without melting the plates to be welded. In this work, we are interested in demonstrating the potential of FSW for joining the heat-treatable aluminum alloy 2024-T3, which is reputed to be difficult to weld by fusion techniques. The FSW joint is then compared with one obtained by a conventional fusion process, tungsten inert gas (TIG) welding. The FSW welds were made using an FSW tool mounted on a milling machine, while single-pass welding was applied to fabricate the TIG joint. The comparison between the two processes was made on the temperature evolution, mechanical behavior and microstructure. The microstructural examination revealed that the FSW weld is composed of four zones: the base metal (BM), the heat affected zone (HAZ), the thermo-mechanically affected zone (THAZ) and the nugget zone (NZ). The NZ exhibits recrystallized, refined equiaxed grains that give better mechanical properties and good ductility compared to the TIG joint, where the grains are larger in the welded region than in the BM due to the elevated heat input. The microhardness results show that, in the FSW weld, the THAZ contains the lowest microhardness values, which increase in the NZ; in the TIG process, however, the lowest values are located in the NZ.

Keywords: friction stir welding, tungsten inert gas, aluminum, microstructure

Procedia PDF Downloads 273
1025 Decision Making System for Clinical Datasets

Authors: P. Bharathiraja

Abstract:

Computer-aided decision making systems are used to enhance the diagnosis and prognosis of diseases and to assist clinicians and junior doctors in clinical decision making. Medical data used for decision making should be definite and consistent. Data mining and soft computing techniques are used for cleaning the data and for incorporating human reasoning into decision making systems. A fuzzy rule based inference technique can be used for classification in order to incorporate human reasoning in the decision making process. In this work, missing values are imputed using the mean or mode of the attribute. The data are normalized using min-max normalization to improve the design and efficiency of the fuzzy inference system. The fuzzy inference system is used to handle the uncertainties that exist in the medical data. Equal-width partitioning is used to partition the attribute values into appropriate fuzzy intervals. Fuzzy rules are generated using a class-based associative rule mining algorithm. The system is trained and tested using the heart disease dataset from the University of California at Irvine (UCI) Machine Learning Repository. The data were split into training and testing sets using a hold-out approach. From the experimental results, it can be inferred that classification using the fuzzy inference system performs better than trivial IF-THEN rule based classification approaches. Furthermore, it is observed that the use of fuzzy logic and the fuzzy inference mechanism handles uncertainty and resembles human decision making. The system can be used in the absence of a clinical expert to assist junior doctors and clinicians in clinical decision making.
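The preprocessing steps described above (min-max normalization and equal-width fuzzy partitioning) can be sketched as follows. The attribute values and the choice of three triangular fuzzy sets are illustrative assumptions, not the system's actual configuration.

```python
# Hedged sketch: min-max normalization and equal-width triangular fuzzy partitioning of one attribute.
import numpy as np

def min_max(x: np.ndarray) -> np.ndarray:
    """Scale values to the [0, 1] range."""
    return (x - x.min()) / (x.max() - x.min())

def triangular(x: np.ndarray, a: float, b: float, c: float) -> np.ndarray:
    """Triangular membership function with feet a, c and peak b."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0, 1.0)

cholesterol = np.array([180.0, 233.0, 250.0, 204.0, 294.0])   # illustrative attribute values
x = min_max(cholesterol)

# Equal-width partition of [0, 1] into three overlapping fuzzy intervals: low, medium, high
memberships = {
    "low": triangular(x, -0.5, 0.0, 0.5),
    "medium": triangular(x, 0.0, 0.5, 1.0),
    "high": triangular(x, 0.5, 1.0, 1.5),
}
for label, mu in memberships.items():
    print(label, np.round(mu, 2))        # degrees of membership feeding the fuzzy rule base
```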

Keywords: decision making, data mining, normalization, fuzzy rule, classification

Procedia PDF Downloads 514