Search results for: high accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22392

21522 A Quantitative Evaluation of Text Feature Selection Methods

Authors: B. S. Harish, M. B. Revanasiddappa

Abstract:

Due to the rapid growth of text documents in digital form, automated text classification has become an important research area in the last two decades. The major challenges of text document representation are high dimensionality, sparsity, volume and semantics. Since terms are the only features that can be found in documents, the selection of good terms (features) plays a very important role. In text classification, feature selection is a strategy that can be used to improve classification effectiveness, computational efficiency and accuracy. In this paper, we present a quantitative analysis of the most widely used feature selection (FS) methods, viz. Term Frequency-Inverse Document Frequency (tfidf), Mutual Information (MI), Information Gain (IG), Chi-Square (χ²), Term Frequency-Relevance Frequency (tfrf), Term Strength (TS), Ambiguity Measure (AM) and Symbolic Feature Selection (SFS), to classify text documents. We evaluated all the feature selection methods on standard datasets such as 20 Newsgroups, the 4 University dataset and Reuters-21578.
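
As an illustration of how one of the surveyed methods works in practice, the following is a minimal sketch of chi-square term selection with scikit-learn on 20 Newsgroups; the category pair and the number of retained terms are arbitrary choices for the sketch, not the paper's setup.

```python
# Chi-square feature selection on a bag-of-words representation.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

news = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
X = CountVectorizer(stop_words="english").fit_transform(news.data)

selector = SelectKBest(chi2, k=1000)            # keep the 1000 highest-scoring terms
X_reduced = selector.fit_transform(X, news.target)
print(X.shape, "->", X_reduced.shape)           # dimensionality drops to k features
```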

Keywords: classifiers, feature selection, text classification

Procedia PDF Downloads 455
21521 Dynamic Foot Pressure Measurement System Using Optical Sensors

Authors: Tanapon Keatsamarn, Chuchart Pintavirooj

Abstract:

Foot pressure measurement provides information necessary for diagnosing diseases, foot insole design, disorder prevention and other applications. In this paper, a dynamic foot pressure measurement system is presented that measures pressure with high resolution and accuracy. The system consists of a hardware subsystem and a software subsystem. The hardware uses a transparent acrylic plate on a steel base. Glossy white paper is placed on top of the transparent acrylic plate, and a black acrylic cover blocks external light. Light from an LED strip enters around the edges of the transparent acrylic plate. The optical sensors (digital cameras) are placed underneath the acrylic plate facing upwards and are connected to the software subsystem, which processes the images and records the foot pressure video as an AVI file. The software is built in Visual Studio 2017 using the OpenCV library.
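
A minimal Python/OpenCV sketch of the capture-and-record loop described above (the actual system is built in C++ under Visual Studio 2017); the camera index, codec, and brightness threshold are illustrative assumptions.

```python
import cv2

cap = cv2.VideoCapture(0)                        # camera under the acrylic plate
fourcc = cv2.VideoWriter_fourcc(*"XVID")         # write an AVI file, as in the paper
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("foot_pressure.avi", fourcc, 30.0, (w, h))

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # brighter pixels correspond to stronger contact with the plate
    _, pressure = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    out.write(cv2.cvtColor(pressure, cv2.COLOR_GRAY2BGR))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
out.release()
```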

Keywords: foot, foot pressure, image processing, optical sensors

Procedia PDF Downloads 242
21520 Classification for Obstructive Sleep Apnea Syndrome Based on Random Forest

Authors: Cheng-Yu Tsai, Wen-Te Liu, Shin-Mei Hsu, Yin-Tzu Lin, Chi Wu

Abstract:

Background: Obstructive sleep apnea syndrome (OSAS) is a common respiratory disorder during sleep. Body parameters have been identified as highly predictive of OSAS severity, but their effects on OSAS severity remain unclear. Objective: The objective of this study is to establish a prediction model for OSAS using body parameters and to investigate the effects of body parameters on OSAS. Methodologies: Severity was quantified by polysomnography as the mean hourly number of dips in oxygen saturation greater than 3% during examination in a hospital in New Taipei City (Taiwan). Four levels of OSAS severity were classified by the apnea and hypopnea index (AHI) according to the American Academy of Sleep Medicine (AASM) guideline. Body parameters, including neck circumference, waist size, and body mass index (BMI), were obtained from a questionnaire. The subjects were divided into two groups: the training group was used to build the random forest (RF) model, and the testing group was used to evaluate classification accuracy. Results: A total of 3330 subjects who had undergone polysomnography for OSAS severity evaluation were recruited. An RF of 1000 trees correctly classified 79.94% of test cases. When further evaluated on the test cohort, the RF identified waist size and BMI as the most important factors for OSAS. Conclusion: Body parameters can provide patients with a prescreening tool to pre-evaluate health risks.
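
A sketch of the classification pipeline described above, assuming a feature table of neck circumference, waist size, and BMI with AHI-derived severity labels; the file and column names are hypothetical placeholders.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("osas_subjects.csv")            # hypothetical data file
X = df[["neck_circumference", "waist_size", "bmi"]]
y = df["ahi_severity_class"]                     # four AASM severity levels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=1000, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, rf.predict(X_te)))
print(dict(zip(X.columns, rf.feature_importances_)))  # waist and BMI rank highest
```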

Keywords: apnea and hypopnea index, body parameters, obstructive sleep apnea syndrome, random forest

Procedia PDF Downloads 149
21519 High Temperature Deformation Behavior of Al0.2CoCrFeNiMo0.5 High Entropy Alloy

Authors: Yasam Palguna, Rajesh Korla

Abstract:

The efficiency of thermally operated systems can be improved by increasing the operating temperature, thereby decreasing fuel consumption and carbon footprint. Hence, there is a continuous need to replace existing materials with new alloys with higher temperature working capabilities. During the last decade, multi-principal-element alloys, commonly known as high entropy alloys, have been getting more attention because of their superior high temperature strength along with good high temperature corrosion and oxidation resistance. The present work focuses on the microstructure and high temperature tensile behavior of Al0.2CoCrFeNiMo0.5 high entropy alloy (HEA). Wrought Al0.2CoCrFeNiMo0.5 high entropy alloy, produced by vacuum induction melting followed by thermomechanical processing, is tested in the temperature range of 200 to 900 °C. It exhibits very good resistance to softening with increasing temperature up to 700 °C, and thereafter there is a rapid decrease in strength, especially beyond 800 °C, which may be due to the simultaneous occurrence of recrystallization and precipitate coarsening. Further, it exhibits superplastic-like behavior with a uniform elongation of ~275% at 900 °C and a strain rate of 1 × 10⁻³ s⁻¹, which may be due to the presence of fine, stable, equiaxed grains. A strain rate sensitivity of 0.3 was observed, suggesting that solute drag dislocation glide might be the active mechanism during superplastic-like deformation. The post-deformation microstructure suggests that cavitation at the sigma phase-matrix interface is the failure mechanism during high temperature deformation. Finally, the high temperature properties of the present alloy will be compared with contemporary high temperature materials such as ferritic and austenitic steels and superalloys.
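
A worked example of the strain rate sensitivity exponent m = d(ln σ)/d(ln ε̇), the quantity the authors report as 0.3 and associate with solute drag dislocation glide; the two flow-stress/strain-rate pairs below are hypothetical, chosen only to reproduce m ≈ 0.3.

```python
import math

stress = (40.0, 80.0)    # MPa, hypothetical flow stresses at two strain rates
rate = (1e-4, 1e-3)      # s^-1, corresponding strain rates

# m = d(ln sigma) / d(ln strain_rate), estimated from two points
m = math.log(stress[1] / stress[0]) / math.log(rate[1] / rate[0])
print(f"m = {m:.2f}")    # -> 0.30
```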

Keywords: high entropy alloy, high temperature deformation, superplasticity, post-deformation microstructures

Procedia PDF Downloads 160
21518 An Approach to Noise Variance Estimation in Very Low Signal-to-Noise Ratio Stochastic Signals

Authors: Miljan B. Petrović, Dušan B. Petrović, Goran S. Nikolić

Abstract:

This paper describes a method for AWGN (Additive White Gaussian Noise) variance estimation in noisy stochastic signals, referred to as Multiplicative-Noising Variance Estimation (MNVE). The aim was to develop an estimation algorithm with a minimal number of assumptions about the original signal structure. MATLAB simulations and analysis of the method applied to speech signals showed higher accuracy than the standard autoregressive (AR) modeling noise estimation technique. In addition, strong performance was observed at very low signal-to-noise ratios, which in general represent the worst-case scenario for signal denoising methods. High execution time appears to be the only disadvantage of MNVE. After close examination of all the observed features of the proposed algorithm, it was concluded that it is worth exploring and that, with some further adjustments and improvements, it can be remarkably powerful.
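
The abstract does not detail the MNVE algorithm itself, so the sketch below only reproduces the evaluation setting: corrupt a signal with AWGN at a chosen (very low) SNR and compare an estimate against the true noise variance. The sinusoid stands in for a speech frame.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)
signal = np.sin(2 * np.pi * 440 * t)                 # stand-in for a speech frame

snr_db = -5.0                                        # very low SNR regime
p_signal = np.mean(signal ** 2)
true_var = p_signal / 10 ** (snr_db / 10)            # AWGN variance for this SNR
noisy = signal + rng.normal(0.0, np.sqrt(true_var), signal.shape)

print("true noise variance:", true_var)
# naive estimate, valid only when the clean signal power is known:
print("naive estimate:", np.var(noisy) - p_signal)
```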

Keywords: noise, signal-to-noise ratio, stochastic signals, variance estimation

Procedia PDF Downloads 383
21517 Identification of Landslide Features Using Back-Propagation Neural Network on LiDAR Digital Elevation Model

Authors: Chia-Hao Chang, Geng-Gui Wang, Jee-Cheng Wu

Abstract:

The prediction of a landslide is a difficult task because it requires a detailed study of past activity using a complete range of investigative methods to determine the changing conditions. In this research, a LiDAR digital elevation model (DEM) with 1 m × 1 m resolution was first used to generate six environmental landslide factors. Then, a back-propagation neural network (BPNN) was adopted to identify scarps, landslide areas and non-landslide areas. The BPNN takes the six environmental factors as its input layer and has one output layer. Six landslide areas are used as training areas and four landslide areas as test areas. The number of hidden layers is set to 1 and 2; the numbers of hidden layer neurons are set to 4, 5, 6, 7 and 8; and the learning rates are set to 0.01, 0.1 and 0.5. When using 1 hidden layer with 7 neurons and a learning rate of 0.5, the network training root mean square error is 0.001388. Finally, evaluation of the BPNN classification accuracy by the confusion matrix shows that the overall accuracy reaches 94.4%, with a Kappa value of 0.7464.
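
A sketch of the best-performing configuration reported above (1 hidden layer, 7 neurons, learning rate 0.5) using scikit-learn; the factor arrays and labels are hypothetical placeholders for the six DEM-derived factors and the scarp/landslide/non-landslide classes.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix
from sklearn.neural_network import MLPClassifier

X_train = np.load("factors_train.npy")     # hypothetical (n_pixels, 6) array
y_train = np.load("labels_train.npy")      # scarp / landslide / non-landslide
X_test = np.load("factors_test.npy")
y_test = np.load("labels_test.npy")

bpnn = MLPClassifier(hidden_layer_sizes=(7,), solver="sgd",
                     learning_rate_init=0.5, max_iter=2000, random_state=0)
bpnn.fit(X_train, y_train)

pred = bpnn.predict(X_test)
print(confusion_matrix(y_test, pred))
print("kappa:", cohen_kappa_score(y_test, pred))
```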

Keywords: digital elevation model, DEM, environmental factors, back-propagation neural network, BPNN, LiDAR

Procedia PDF Downloads 137
21516 Hyperspectral Band Selection for Oil Spill Detection Using Deep Neural Network

Authors: Asmau Mukhtar Ahmed, Olga Duran

Abstract:

Hydrocarbon (HC) spills constitute a significant problem that causes great concern for the environment. With the latest technology (hyperspectral imaging) and state-of-the-art techniques (image processing tools), hydrocarbon spills can easily be detected at an early stage to mitigate the effects caused by such a menace. In this study, a controlled laboratory experiment was used: clay soil was mixed and homogenized with different hydrocarbon types (diesel, bio-diesel, and petrol). The different mixtures were scanned with a HYSPEX hyperspectral camera under constant illumination to generate the hyperspectral datasets used for this experiment. So far, the Short Wave Infrared Region (SWIR) has been exploited in detecting HC spills with excellent accuracy. However, the Near-Infrared Region (NIR) is somewhat unexplored with regard to HC contamination and how it affects the spectrum of soils. In this study, a Deep Neural Network (DNN) was applied to the controlled datasets to detect and quantify the amount of HC spills in soils in the Near-Infrared Region. The initial results are extremely encouraging because they indicate that the DNN was able to identify features of HC in the Near-Infrared Region with a good level of accuracy.
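
A sketch of a DNN mapping NIR reflectance spectra to HC concentration; the array shapes, file names, and network size are assumptions, not the authors' exact architecture.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

spectra = np.load("nir_bands.npy")          # hypothetical (n_samples, n_bands)
hc_fraction = np.load("hc_fraction.npy")    # known lab mixing ratios per sample

X_tr, X_te, y_tr, y_te = train_test_split(spectra, hc_fraction, random_state=0)
dnn = MLPRegressor(hidden_layer_sizes=(128, 64, 32), max_iter=3000, random_state=0)
dnn.fit(X_tr, y_tr)
print("R^2 on held-out mixtures:", dnn.score(X_te, y_te))
```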

Keywords: hydrocarbon, deep neural network, short wave infrared region, near-infrared region, hyperspectral image

Procedia PDF Downloads 108
21515 Identification of Vessel Class with Long Short-Term Memory Using Kinematic Features in Maritime Traffic Control

Authors: Davide Fuscà, Kanan Rahimli, Roberto Leuzzi

Abstract:

Preventing abuse and illegal activities in a given area of the sea is a very difficult and expensive task. Artificial intelligence offers the possibility of implementing new methods to identify the vessel class type from the kinematic features of the vessel itself. The task strictly depends on the quality of the data. This paper explores the application of a deep long short-term memory model using only an AIS flow of relatively low quality. The proposed model reaches high accuracy in detecting nine vessel classes representing the most common vessel types in the Ionian-Adriatic Sea. The model has been applied during the Adriatic-Ionian trial period of the international EU ANDROMEDA H2020 project to identify vessels exhibiting behaviors far from those expected for their declared type.
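
A minimal PyTorch sketch of an LSTM over per-fix kinematic features with a nine-class head; the choice of four features per AIS fix, the hidden size, and the sequence handling are assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class VesselLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=64, n_classes=9):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, seq_len, n_features)
        _, (h_n, _) = self.lstm(x)     # last hidden state summarizes the track
        return self.head(h_n[-1])      # logits over the nine vessel classes

model = VesselLSTM()
tracks = torch.randn(8, 120, 4)        # e.g. speed, course, dlat, dlon per fix
print(model(tracks).shape)             # torch.Size([8, 9])
```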

Keywords: maritime surveillance, artificial intelligence, behavior analysis, LSTM

Procedia PDF Downloads 227
21514 Emotion Recognition Using Artificial Intelligence

Authors: Rahul Mohite, Lahcen Ouarbya

Abstract:

This paper focuses on the interplay between humans and computer systems and the ability of these systems to understand and respond to human emotions, including non-verbal communication. Current emotion recognition systems are based solely on either facial or verbal expressions. The limitation of these systems is that they require large training data sets. This paper proposes a system for recognizing human emotions that combines both speech and facial emotion recognition. The system utilizes advanced techniques such as deep learning and image recognition to identify facial expressions and comprehend emotions. The results show that the proposed system, based on the combination of facial expression and speech, outperforms existing systems that rely solely on either facial or verbal expressions. The proposed system detects human emotion with an accuracy of 86%, whereas the existing systems have an accuracy of 70% using verbal expression only and 76% using facial expression only. The increasing significance of and demand for facial recognition technology in emotion recognition are also discussed.
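
A sketch of the late-fusion idea behind such a combined system: merge class probabilities from a facial model and a speech model. The emotion set and the fusion weights are hypothetical, not the paper's.

```python
import numpy as np

emotions = ["angry", "happy", "neutral", "sad"]
p_face = np.array([0.10, 0.65, 0.15, 0.10])     # facial-expression model output
p_speech = np.array([0.20, 0.50, 0.20, 0.10])   # speech model output

p_fused = 0.6 * p_face + 0.4 * p_speech         # weighted average of the two
print(emotions[int(np.argmax(p_fused))])        # -> "happy"
```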

Keywords: facial recognition, expression recognition, deep learning, image recognition, facial technology, signal processing, image classification

Procedia PDF Downloads 113
21513 Two-Photon Ionization of Silver Clusters

Authors: V. Paployan, K. Madoyan, A. Melikyan, H. Minassian

Abstract:

Resonant two-photon ionization (TPI) is a valuable technique for the study of clusters due to its ultrahigh sensitivity. Comparison of the observed TPI spectra with the results of calculations makes it possible to deduce important information on the shape and on the rotational and vibrational temperatures of the clusters with high accuracy. In this communication we calculate the TPI cross-section for a pump-probe scheme in a neutral Ag cluster. The pump photon energy is chosen to be close to the surface plasmon (SP) energy of the cluster in a dielectric medium. Since the interband transition energy in Ag exceeds the SP resonance energy, the main contribution to the TPI comes from the latter. The calculations are performed by separating the coordinates of electrons corresponding to the collective oscillations from those of the individual motion, which makes it possible to take into account the resonance contribution of excited SP oscillations. It is shown that the ionization cross section increases by two orders of magnitude if the energy of the pump photon matches the surface plasmon energy in the cluster.

Keywords: resonance enhancement, silver clusters, surface plasmon, two-photon ionization

Procedia PDF Downloads 423
21512 A Study on ESD Protection Circuit Applying Silicon Controlled Rectifier-Based Stack Technology with High Holding Voltage

Authors: Hee-Guk Chae, Bo-Bae Song, Kyoung-Il Do, Jeong-Yun Seo, Yong-Seo Koo

Abstract:

In this study, an improved electrostatic discharge (ESD) protection circuit with low trigger voltage and high holding voltage is proposed. ESD has become a serious problem in the semiconductor industry because semiconductor density has become very high, and much research has therefore been done to prevent it. The proposed circuit is a stacked structure of a new unit cell that combines a Zener-triggered SCR (ZTSCR) and a high holding voltage SCR (HHVSCR). The simulation results show that the proposed circuit has low trigger voltage and high holding voltage. Stack technology is applied to adjust the operating voltage: as a result, the holding voltage is 7.7 V for the 2-stack and 10.7 V for the 3-stack configuration.

Keywords: ESD, SCR, latch-up, power clamp, holding voltage

Procedia PDF Downloads 542
21511 Obstacle Classification Method Based on 2D LIDAR Database

Authors: Moohyun Lee, Soojung Hur, Yongwan Park

Abstract:

This paper proposes a method that uses only a LIDAR system to classify an obstacle and determine its type by establishing a LIDAR-based obstacle classification database. Existing LIDAR systems have an advantage in recognizing obstructions for autonomous vehicles in terms of accuracy and recognition time. However, it has been difficult to determine the type of obstacle, so accurate path planning based on obstacle type was not possible. To overcome this problem, a method of classifying obstacle type based on existing LIDAR using the width of obstacle materials was proposed, but width measurement alone was not sufficient to improve accuracy. In this research, the width data was used for a first classification; a database of LIDAR intensity data for four major obstacle materials found on the road was created; the LIDAR intensity data of actual obstacle materials was compared against this database; and the obstacle type was determined by finding the material with the highest similarity value. An experiment using an actual autonomous vehicle in a real environment shows that, although the data quality declined in comparison to 3D LIDAR, it was possible to classify obstacle materials using 2D LIDAR.
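
A sketch of the intensity-matching step: compare a measured 2D LIDAR intensity profile against per-material reference profiles and pick the most similar one. The abstract does not name the four materials, so the material names and all profile values below are invented for illustration.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

database = {                       # mean intensity profiles per obstacle material
    "metal":    np.array([0.90, 0.85, 0.80]),
    "concrete": np.array([0.55, 0.50, 0.52]),
    "plastic":  np.array([0.35, 0.30, 0.33]),
    "wood":     np.array([0.45, 0.42, 0.40]),
}
measured = np.array([0.52, 0.49, 0.50])

# pick the material whose reference profile is most similar to the measurement
best = max(database, key=lambda m: cosine(database[m], measured))
print(best)                        # -> "concrete"
```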

Keywords: obstacle, classification, database, LIDAR, segmentation, intensity

Procedia PDF Downloads 340
21510 Waterproofing Agent in Concrete for Tensile Improvement

Authors: Muhamad Azani Yahya, Umi Nadiah Nor Ali, Mohammed Alias Yusof, Norazman Mohamad Nor, Vikneswaran Munikanan

Abstract:

In construction, concrete is one of the materials most commonly used for structural elements. Concrete consists of cement, sand, aggregate and water. Admixtures can be added to concrete in the wet condition to suit the design purpose, for example to prolong the setting time and improve workability. For strength improvement, hybrid materials are added to concrete, because the tensile strength of concrete is very low in comparison to its compressive strength. This paper examines the use of a waterproofing agent in concrete to enhance tensile strength. High tensile concrete is expensive because the mix needs fiber and a high cement content. It is used for structures subjected to high-impact dynamic loads, such as blast loading hitting the structure. High tensile concrete can be defined as a concrete mix design that achieves a tensile strength of 30%-40% of its compressive strength. This research evaluates the use of a waterproofing agent in a concrete mix as a reinforcing element to enhance tensile strength. The compression and tensile tests show that the concrete mix with a waterproofing agent has enhanced mechanical properties. They also show that the composite concrete with the waterproofing agent is a high tensile concrete, since its tensile strength is between 30% and 40% of its compressive strength. The mix is economical because it can produce high tensile concrete at low cost.

Keywords: high tensile concrete, waterproofing agent, concrete, rheology

Procedia PDF Downloads 322
21509 Dissolved Gas Analysis Based Regression Rules from Trained ANN for Transformer Fault Diagnosis

Authors: Deepika Bhalla, Raj Kumar Bansal, Hari Om Gupta

Abstract:

Dissolved Gas Analysis (DGA) has been widely used for fault diagnosis in transformers. Artificial neural networks (ANN) have high accuracy but are regarded as black boxes that are difficult to interpret. For many problems it is desirable to extract knowledge from trained neural networks (NN) so that the user can gain a better understanding of the solution arrived at by the NN. This paper applies a pedagogical approach for rule extraction from function approximating neural networks (REFANN), with application to incipient fault diagnosis using the concentrations of the dissolved gases within the transformer oil as the input to the NN. The input space is split into subregions, and for each subregion there is a linear equation that is used to predict the type of fault developing within the transformer. Experiments on real data indicate that the approach can extract simple and useful rules and give fault predictions that match the actual fault and are at times better than those predicted by the IEC method.
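
A sketch of the rule form REFANN produces: each extracted rule covers a subregion of the gas-concentration input space and applies a linear equation there. The gas names follow standard DGA practice; the bounds and coefficients are illustrative, not the paper's extracted values.

```python
import numpy as np

# each rule: (lower bounds, upper bounds, weights, bias)
# over the gas vector [H2, CH4, C2H2, C2H4, C2H6] in ppm
rules = [
    (np.array([0, 0, 50, 0, 0]), np.array([200, 100, 500, 80, 60]),
     np.array([0.001, 0.002, 0.004, 0.001, 0.000]), -0.5),  # hypothetical subregion
]

def apply_rules(x):
    for lo, hi, w, b in rules:
        if np.all((lo <= x) & (x <= hi)):   # x falls inside this subregion
            return float(w @ x + b)         # linear prediction for the region
    return None                             # no extracted rule covers this sample

print(apply_rules(np.array([120, 40, 300, 20, 10])))
```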

Keywords: artificial neural networks, dissolved gas analysis, rules extraction, transformer

Procedia PDF Downloads 531
21508 Utilizing Spatial Uncertainty of On-The-Go Measurements to Design Adaptive Sampling of Soil Electrical Conductivity in a Rice Field

Authors: Ismaila Olabisi Ogundiji, Hakeem Mayowa Olujide, Qasim Usamot

Abstract:

The main reasons for site-specific management of agricultural inputs are to increase the profitability of crop production, to protect the environment and to improve product quality. Information about the variability of different soil attributes within a field is highly essential for the decision-making process. The lack of fast and accurate acquisition of soil characteristics, which is expensive and time-consuming, remains one of the biggest limitations of precision agriculture. Adaptive sampling has been proven to be an accurate and affordable sampling technique for within-field planning of site-specific management of agricultural inputs. This study employed the spatial uncertainty of soil apparent electrical conductivity (ECa) estimates to identify adaptive re-survey areas in the field. The original dataset was split into validation and calibration groups, where the calibration group was sub-grouped into three sets with different measurement pass intervals. A conditional simulation was performed on the field ECa to evaluate the ECa spatial uncertainty estimates using geostatistical techniques. High-uncertainty areas for each set were grouped using image segmentation in MATLAB, and areas of high and low uncertainty were then separated. Finally, an adaptive re-survey was carried out in the areas of high uncertainty. Adding adaptive re-surveying significantly reduced the time required for resampling the whole field and resulted in ECa with minimal error. For the widest measurement pass interval, the root mean square error (RMSE) yielded by an initial crude sampling survey was minimized after an adaptive re-survey, approaching the value yielded by an all-field re-survey. The estimated sampling time for the adaptive re-survey was 45% less than that of an all-field re-survey. The results indicate that designing adaptive sampling through spatial uncertainty models significantly mitigates sampling cost while preserving the accuracy of the observations.
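
A sketch of the uncertainty-based selection step: stack conditional simulations of ECa, take the per-cell standard deviation, and mark the highest-uncertainty cells for adaptive re-survey. The array file, grid shape, and the percentile cut-off are assumptions.

```python
import numpy as np

simulations = np.load("eca_conditional_sims.npy")  # hypothetical (n_sims, ny, nx)
uncertainty = simulations.std(axis=0)              # spatial uncertainty of ECa

cutoff = np.percentile(uncertainty, 90)            # top 10% most uncertain cells
resurvey_mask = uncertainty > cutoff               # cells to revisit in the field
print("fraction of field to re-survey:", resurvey_mask.mean())
```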

Keywords: soil electrical conductivity, adaptive sampling, conditional simulation, spatial uncertainty, site-specific management

Procedia PDF Downloads 127
21507 A Numerical Study of the Tidal Currents in the Persian Gulf and Oman Sea

Authors: Fatemeh Sadat Sharifi, A. A. Bidokhti, M. Ezam, F. Ahmadi Givi

Abstract:

This study focuses on tidal oscillations and their speeds in order to derive a general pattern for these seas. The purpose of the analysis is to find the amplitude and phase of several important tidal components. The Regional Ocean Modeling System (ROMS) was therefore employed to assess the correlation and accuracy of this pattern. Finding the tidal harmonic components allows us to predict the tide in this region. Better prediction of these tides supports offshore platform standards, the design of suitable wave breakers, coastal construction, navigation, fisheries, port management and tsunami research. The results show fair accuracy in the SSH and reveal that tidal currents are highest in the Strait of Hormuz and in the narrow, shallow region around Kish Island. To investigate the flow patterns of the region, the results of a limited-area FVCOM model were utilized. Many features of the present-day view of ocean circulation have precedents in tidal and long-wave studies. Tides are categorized among the long waves, so tidal current studies indeed inform subsequent studies of sea and ocean circulation.
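
A sketch of the harmonic analysis underlying such amplitude/phase estimates: a least-squares fit of a single constituent (here M2, period 12.42 h) to a sea surface height record. The synthetic record below stands in for model or gauge output.

```python
import numpy as np

t = np.arange(0, 30 * 24, 1.0)             # hourly samples over 30 days
omega = 2 * np.pi / 12.42                  # M2 angular frequency (rad/h)
rng = np.random.default_rng(0)
ssh = 0.8 * np.cos(omega * t - 1.2) + 0.05 * rng.normal(size=t.size)

# design matrix: cos and sin at the constituent frequency, plus a mean level
A = np.column_stack([np.cos(omega * t), np.sin(omega * t), np.ones_like(t)])
(a, b, mean), *_ = np.linalg.lstsq(A, ssh, rcond=None)

amplitude, phase = np.hypot(a, b), np.arctan2(b, a)
print(f"M2 amplitude = {amplitude:.2f} m, phase = {phase:.2f} rad")  # ~0.80, ~1.20
```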

Keywords: barotropic tide, FVCOM, numerical model, OTPS, ROMS

Procedia PDF Downloads 225
21506 An Intelligent Traffic Management System Based on the WiFi and Bluetooth Sensing

Authors: Hamed Hossein Afshari, Shahrzad Jalali, Amir Hossein Ghods, Bijan Raahemi

Abstract:

This paper introduces an automated clustering solution that applies to WiFi/Bluetooth sensing data and is later used for traffic management applications. The paper first summarizes a number of clustering approaches and then shows their performance for noise removal. In this context, clustering is used to recognize WiFi and Bluetooth MAC addresses that belong to passengers traveling on a public urban transit bus. The main objective is to build an intelligent system that automatically filters out MAC addresses belonging to persons located outside the bus, for different routes in the city of Ottawa. The proposed intelligent system alleviates the need for defining restrictive thresholds, which would otherwise reduce the accuracy as well as the range of applicability of the solution across different routes. The paper also discusses the performance benefits of the presented clustering approaches in terms of accuracy, time and space complexity, and ease of use. Note that the clustering results can further be used for origin-destination estimation of individual passengers, predicting the traffic load, and intelligent management of urban bus schedules.
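
A sketch of the clustering step: group MAC-address sightings by simple temporal features so that devices traveling with the bus separate from devices briefly sensed outside it. The features, values, and DBSCAN parameters are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# per MAC address: [dwell time on route (min), number of detections]
features = np.array([
    [22.0, 30], [23.0, 31], [24.0, 29],   # likely on-board passengers
    [0.5, 1], [1.0, 2], [0.3, 1],         # roadside devices passed by the bus
])
labels = DBSCAN(eps=3.0, min_samples=2).fit_predict(features)
print(labels)   # the two groups fall into different clusters -> easy filtering
```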

Keywords: WiFi-Bluetooth sensing, cluster analysis, artificial intelligence, traffic management

Procedia PDF Downloads 236
21505 Effect of Cutting Tools and Working Conditions on the Machinability of Ti-6Al-4V Using Vegetable Oil-Based Cutting Fluids

Authors: S. Gariani, I. Shyha

Abstract:

Cutting titanium alloys is usually accompanied by low productivity, poor surface quality, short tool life and high machining costs. This is due to the excessive generation of heat in the cutting zone and difficulties in heat dissipation owing to the relatively low thermal conductivity of this metal. Cooling applications in machining processes are crucial, as many operations cannot be performed efficiently without cooling. Improving machinability, increasing productivity, and enhancing surface integrity and part accuracy are the main advantages of cutting fluids. Conventional fluids such as mineral oil-based, synthetic and semi-synthetic fluids are the most common cutting fluids in the machining industry. Although these cutting fluids are beneficial in industry, they pose a great threat to human health and the ecosystem. Vegetable oils (VOs) are being investigated as a potential source of environmentally favourable lubricants, due to a combination of biodegradability, good lubricating properties, low toxicity, high flash points, low volatility, high viscosity indices and thermal stability. The fatty acids of vegetable oils are known to provide thick, strong, and durable lubricant films, which give the vegetable oil base stock a greater capability to absorb pressure and a high load carrying capacity. This paper details preliminary experimental results when turning Ti-6Al-4V. The impact of various VO-based cutting fluids, cutting tool materials and working conditions was investigated. A full factorial experimental design involving 24 tests was employed to evaluate the influence of process variables on average surface roughness (Ra), tool wear and chip formation. In general, Ra varied between 0.5 and 1.56 µm, and the Vasco1000 cutting fluid presented performance comparable to the other fluids in terms of surface roughness, while the uncoated coarse-grain WC carbide tool achieved lower flank wear at all cutting speeds. All tool tips exhibited uniform flank wear throughout the cutting trials. Additionally, the formed chip thickness ranged between 0.1 and 0.14 mm, with a noticeable decrease in chip size at higher cutting speeds.
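
A sketch of how a 24-run full factorial plan like the one above can be enumerated. The abstract does not list the factors and levels, so the 4 × 3 × 2 breakdown below is hypothetical (only Vasco1000 is named in the text).

```python
from itertools import product

fluids = ["Vasco1000", "VO-A", "VO-B", "mineral"]   # 4 cutting fluids (3 hypothetical)
speeds = [60, 90, 120]                              # 3 cutting speeds (m/min), assumed
tools = ["uncoated WC", "coated WC"]                # 2 tool materials, assumed

runs = list(product(fluids, speeds, tools))         # 4 x 3 x 2 = 24 tests
for i, (fluid, speed, tool) in enumerate(runs, 1):
    print(f"run {i:02d}: {fluid}, {speed} m/min, {tool}")
```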

Keywords: cutting fluids, turning, Ti-6Al-4V, vegetable oils, working conditions

Procedia PDF Downloads 275
21504 A Comparative Assessment of Information Value, Fuzzy Expert System Models for Landslide Susceptibility Mapping of Dharamshala and Surrounding, Himachal Pradesh, India

Authors: Kumari Sweta, Ajanta Goswami, Abhilasha Dixit

Abstract:

Landslides are a geomorphic process that plays an essential role in hill-slope and long-term landscape evolution. But the abrupt nature and the associated catastrophic forces of the process can have undesirable socio-economic impacts, like substantial economic losses, fatalities, and ecosystem, geomorphologic and infrastructure disturbances. The estimated fatality rate is approximately 1 person/100 sq. km, and the average economic loss is more than 550 crores/year in the Himalayan belt due to landslides. This study presents a comparative performance of a statistical bivariate method and a machine learning technique for landslide susceptibility mapping in and around Dharamshala, Himachal Pradesh. The final landslide susceptibility maps (LSMs), produced with better accuracy, could be used for land-use planning to prevent future losses. Dharamshala, a part of the North-western Himalaya, is one of the fastest-growing tourism hubs, with a total population of 30,764 according to the 2011 census, and is among the hundred Indian cities to be developed as smart cities under the PM's Smart Cities Mission. A total of 209 landslide locations were identified using high-resolution Linear Imaging Self-Scanning (LISS-IV) data. The thematic maps of parameters influencing landslide occurrence were generated using remote sensing and other ancillary data in the GIS environment. The landslide causative parameters used in the study are slope angle, slope aspect, elevation, curvature, topographic wetness index, relative relief, distance from lineaments, land use land cover, and geology. LSMs were prepared using the information value (Info Val) and Fuzzy Expert System (FES) models. Info Val is a statistical bivariate method in which information values are calculated as the ratio of the landslide pixels per factor class (Si/Ni) to the total landslide pixels per parameter (S/N). Using these information values, all parameters were reclassified and then summed in GIS to obtain the landslide susceptibility index (LSI) map. The FES method is a machine learning technique based on a 'mean and neighbour' strategy for the construction of the fuzzifier (input) and defuzzifier (output) membership function (MF) structure, with the FR method used for formulating if-then rules. Two types of membership structures were utilized: Bell-Gaussian (BG) and Trapezoidal-Triangular (TT). LSIs for BG and TT were obtained by applying the membership functions and if-then rules in MATLAB. The final LSMs were spatially and statistically validated. The validation results showed that in terms of accuracy, Info Val (83.4%) is better than BG (83.0%) and TT (82.6%), whereas in terms of spatial distribution, BG is best. Hence, considering both statistical and spatial accuracy, BG is the most accurate one.
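
A sketch of the Info Val computation for one causative factor: for each class, compare the class landslide density (Si/Ni) with the overall density (S/N); taking the log of this ratio is the common convention. The pixel counts below are illustrative, not the study's values.

```python
import numpy as np

S, N = 20_900, 4_000_000            # total landslide pixels / total pixels (hypothetical)
Si = np.array([4000, 9000, 6000])   # landslide pixels per slope-angle class
Ni = np.array([1.5e6, 1.2e6, 1.3e6])  # total pixels per slope-angle class

info_val = np.log((Si / Ni) / (S / N))   # > 0 means the class favours landslides
print(info_val)
# reclassify each factor map with its values, then sum the maps to get the LSI
```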

Keywords: bivariate statistical techniques, BG and TT membership structure, fuzzy expert system, information value method, machine learning technique

Procedia PDF Downloads 124
21503 Cardiothoracic Ratio in Postmortem Computed Tomography: A Tool for the Diagnosis of Cardiomegaly

Authors: Alex Eldo Simon, Abhishek Yadav

Abstract:

This study aimed to evaluate the utility of postmortem computed tomography (CT) and heart weight measurements in the assessment of cardiomegaly in cases of sudden death of cardiac origin by comparing the results of these two diagnostic methods. The study retrospectively analyzed postmortem computed tomography (PMCT) data from 54 cases of sudden natural death and compared the findings with those of the autopsy. The cardiothoracic ratio (CTR) was measured from coronal CT images, and the actual cardiac weight was determined by weighing the heart during the autopsy. The inclusion criteria were cases of sudden death suspected to be caused by cardiac pathology; exclusion criteria included death due to unnatural causes such as trauma or poisoning, diagnosed natural causes of death related to organs other than the heart, and cases of decomposition. Sensitivity, specificity, and diagnostic accuracy were calculated, and receiver operating characteristic (ROC) curves were generated to evaluate the accuracy of using the CTR to detect an enlarged heart. The CTR is a radiological tool used to assess cardiomegaly by measuring the maximum cardiac diameter relative to the maximum transverse diameter of the chest wall. The clinical CTR criterion has been modified from 0.50 to 0.57 for use in postmortem settings, where a CTR of 0.57 or higher is suggestive of hypertrophy but not conclusive. Similarly, heart weight is measured during the traditional autopsy, and a cardiac weight greater than 450 grams is defined as hypertrophy. Of the 54 cases evaluated, 22 (40.7%) had a CTR ranging from >0.50 up to 0.57, and 12 cases (22.2%) had a CTR greater than 0.57, which was defined as hypertrophy. The mean CTR was 0.52 ± 0.06, and the mean heart weight was 369.4 ± 99.9 grams. Twelve cases were found to have hypertrophy as defined by PMCT, while only 9 cases were identified with hypertrophy at traditional autopsy. The sensitivity of the hypertrophy test was 55.56% (95% CI: 26.66-81.12), the specificity was 84.44% (95% CI: 71.22-92.25), and the diagnostic accuracy was 79.63% (95% CI: 67.1-88.23). A limitation of the study was the low sample size of 54 cases, which may limit the generalizability of the findings. The comparison of the cardiothoracic ratio with heart weight suggests that PMCT may serve as a screening tool for medico-legal autopsies when performed by forensic pathologists. However, the low sensitivity of the test (55.56%) may limit its diagnostic accuracy, and further studies with larger sample sizes and more diverse populations are needed to validate these findings.
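
A worked check of the reported figures: with 9 autopsy-confirmed hypertrophy cases and 45 negatives, the confusion counts TP=5, FN=4, FP=7, TN=38 (inferred here, not stated in the abstract) reproduce the published metrics.

```python
tp, fn, fp, tn = 5, 4, 7, 38                    # inferred confusion counts

sensitivity = tp / (tp + fn)                    # 5/9   = 55.56%
specificity = tn / (tn + fp)                    # 38/45 = 84.44%
accuracy = (tp + tn) / (tp + fn + fp + tn)      # 43/54 = 79.63%
print(f"{sensitivity:.2%} {specificity:.2%} {accuracy:.2%}")
```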

Keywords: PMCT, virtopsy, CTR, cardiothoracic ratio

Procedia PDF Downloads 78
21502 Smoker Recognition from Lung X-Ray Images Using Convolutional Neural Network

Authors: Moumita Chanda, Md. Fazlul Karim Patwary

Abstract:

Smoking is one of the most popular recreational drug use behaviors, and it contributes to birth defects, COPD, heart attacks, and erectile dysfunction. To eradicate its health consequences, it is imperative that smoking be identified and treated. Numerous smoking cessation programs have been created, and they demonstrate how beneficial it can be to help someone stop smoking at the ideal time. A tomography meter is an effective smoking detector. Other wearables, such as RF-based proximity sensors worn on the collar and wrist to detect when the hand is close to the mouth, have been proposed in the past, but they are not impervious to deceptive variables. In this study, we create a machine that can discriminate between smokers and non-smokers in real time with high sensitivity and specificity, by imaging the human lung and analyzing the X-ray data using machine learning. Given sufficiently high accuracy, this machine could be utilized in hospitals, in the selection of candidates for the army or police, or in university admissions.
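
A minimal PyTorch sketch of a binary smoker/non-smoker CNN over chest X-ray images; the input size, depth, and channel counts are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SmokerCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, 2)   # smoker vs. non-smoker

    def forward(self, x):             # x: (batch, 1, 224, 224) grayscale X-ray
        z = self.features(x)          # 224 -> 112 -> 56 after the two pools
        return self.classifier(z.flatten(1))

model = SmokerCNN()
print(model(torch.randn(4, 1, 224, 224)).shape)   # torch.Size([4, 2])
```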

Keywords: CNN, smoker detection, non-smoker detection, OpenCV, artificial intelligence, X-ray image detection

Procedia PDF Downloads 78
21501 Employment Problems of Graduands Graduated From Vocational High Schools

Authors: Refik Uyanöz, Sadife Güngör, Sevilay Konya

Abstract:

The aim of this study is to show the employability of vocational students, and the employment problems of these students are emphasized. The rapid development of technology and information and the increased need for qualified labor widely affect the labor market. The labor market will look for educated, qualified, talented young people. For this reason, qualified staff should be educated at vocational high schools. Vocational high schools are among the best institutions for educating qualified staff. In this research, the conditions of vocational high schools are studied, and the differences between employment policies and current employment problems are examined.

Keywords: vocational high school, employment, employment problems, vocational students

Procedia PDF Downloads 459
21500 Naïve Bayes: A Classical Approach for the Epileptic Seizures Recognition

Authors: Bhaveek Maini, Sanjay Dhanka, Surita Maini

Abstract:

Electroencephalography (EEG) is used worldwide to classify epileptic seizures. Identifying an epileptic seizure through manual EEG analysis is a crucial task for the neurologist, as it takes a great deal of effort and time. The risk of human error is always high in EEG, as acquiring the signals requires manual intervention. Disease diagnosis using machine learning (ML) has been explored continuously since its inception, and where large numbers of datasets have to be analyzed, ML acts as a boon for doctors. In this research paper, the authors propose two different ML models, logistic regression (LR) and Naïve Bayes (NB), to predict epileptic seizures based on general parameters. These two techniques are applied to the epileptic seizure recognition dataset available in the UCI ML repository. The algorithms are implemented with an 80:20 train-test ratio (80% for training and 20% for testing), and the performance of the models was validated by 10-fold cross-validation. The proposed study achieved accuracies of 81.87% and 95.49% for LR and NB, respectively.
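
A sketch of the two models evaluated above on the UCI epileptic seizure recognition data; the CSV path and the label column name are placeholders for a local copy of the dataset.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB

df = pd.read_csv("epileptic_seizure_recognition.csv")   # hypothetical local copy
X, y = df.drop(columns=["y"]), df["y"]

# 80:20 train-test split, as in the paper
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
for model in (LogisticRegression(max_iter=5000), GaussianNB()):
    model.fit(X_tr, y_tr)
    cv = cross_val_score(model, X, y, cv=10)            # 10-fold cross-validation
    print(type(model).__name__, model.score(X_te, y_te), cv.mean())
```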

Keywords: epileptic seizure recognition, logistic regression, Naïve Bayes, machine learning

Procedia PDF Downloads 54
21499 Static vs. Stream Mining Trajectories Similarity Measures

Authors: Musaab Riyadh, Norwati Mustapha, Dina Riyadh

Abstract:

Trajectory similarity can be defined as the cost of transforming one trajectory into another according to a certain similarity method. It is the core of numerous mining tasks such as clustering, classification, and indexing. Various approaches have been suggested to measure similarity based on the geometric and dynamic properties of trajectories, the overlap between trajectory segments, and the confined area between entire trajectories. In this article, these approaches are evaluated in terms of computational cost, memory usage, accuracy, and the amount of data needed in advance, in order to determine their suitability for stream mining applications. The evaluation results show that stream mining applications favor similarity methods that have low computational cost and memory usage, require only a single scan of the data, and are free of mathematical complexity, owing to the high speed at which stream data is generated.
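
A sketch of one low-cost, single-scan-friendly measure of the kind favored in the stream setting: the lock-step Euclidean distance between two trajectories sampled at the same time steps (the coordinates are illustrative).

```python
import numpy as np

traj_a = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])   # [x, y] per time step
traj_b = np.array([[0.1, 0.0], [1.2, 0.4], [2.1, 1.2]])

# sum of point-wise distances: one pass, O(n) time, O(1) extra memory
lock_step = np.linalg.norm(traj_a - traj_b, axis=1).sum()
print(lock_step)
```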

Keywords: global distance measure, local distance measure, semantic trajectory, spatial dimension, stream data mining

Procedia PDF Downloads 390
21498 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators

Authors: Wei Zhang

Abstract:

With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hot spot in the past few years. However, the networks are becoming increasingly large in scale due to the demands of practical applications, which poses a significant challenge for constructing high-performance implementations of deep learning neural networks. Meanwhile, many of these application scenarios also place strict requirements on the performance and power consumption of hardware devices. Therefore, it is particularly critical to choose an appropriate computing platform for hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of FPGA-based accelerators under different devices and network models are reviewed, and Graphics Processing Units (GPUs), Application Specific Integrated Circuits (ASICs) and Digital Signal Processors (DSPs) are compared in order to present our own critical analysis and comments. Finally, we discuss different perspectives on these acceleration and optimization methods on FPGA platforms to further explore the opportunities and challenges for future research, and we give a prospect for the future development of FPGA-based accelerators.

Keywords: deep learning, field programmable gate array, FPGA, hardware accelerator, convolutional neural networks, CNN

Procedia PDF Downloads 123
21497 The Reliability of Management Earnings Forecasts in IPO Prospectuses: A Study of Managers’ Forecasting Preferences

Authors: Maha Hammami, Olfa Benouda Sioud

Abstract:

This study investigates the reliability of management earnings forecasts with reference to two ingredients: verifiability and neutrality. Specifically, we examine the biasedness (or accuracy) of management earnings forecasts and the company-specific characteristics that can be associated with accuracy. Based on a sample of 102 IPO prospectuses published for admission on NYSE Euronext Paris from 2002 to 2010, we find that these forecasts are on average optimistic and that two of the five test variables, earnings variability and financial leverage, are significant in explaining ex post bias. Acknowledging the possibility that the bias is the result of the managers' forecasting behavior, we then examine whether managers decide to under-predict, over-predict or forecast accurately for self-serving purposes. Explicitly, we examine the role of financial distress, operating performance, ownership by insiders and the state of the economy in influencing managers' forecasting preferences. We find that managers of distressed firms seem to over-predict future earnings, and that when managers are given more stock options, they tend to under-predict future earnings. Finally, we conclude that management earnings forecasts are affected by an intentional bias due to managers' forecasting preferences.
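
A sketch of the ex post bias measurement and its cross-sectional regression; the bias definition (signed forecast error scaled by actual earnings), the data values, and the variable set are assumptions, not the authors' exact specification.

```python
import numpy as np
import statsmodels.api as sm

forecast = np.array([10.0, 8.0, 12.0, 5.0])   # prospectus earnings forecasts
actual = np.array([9.0, 7.5, 10.0, 5.2])      # realized earnings
bias = (forecast - actual) / np.abs(actual)   # positive => optimistic forecast

X = sm.add_constant(np.column_stack([
    [0.30, 0.45, 0.60, 0.25],                 # financial leverage (hypothetical)
    [0.10, 0.20, 0.35, 0.05],                 # earnings variability (hypothetical)
]))
print(sm.OLS(bias, X).fit().params)           # do the two variables explain bias?
```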

Keywords: intentional bias, management earnings forecasts, neutrality, verifiability

Procedia PDF Downloads 231
21496 3D Reconstruction of Human Body Based on Gender Classification

Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo

Abstract:

SMPL-X is a powerful parametric human body model that includes male, neutral, and female models, with significant gender differences between the three. During 3D human body reconstruction, the correct selection of the standard template is crucial for obtaining accurate results. To address this issue, we developed an efficient gender classification algorithm to automatically select the appropriate template for 3D human body reconstruction. The key to this gender classification algorithm is the precise analysis of human body features. Using the SMPL-X model, the algorithm can detect and identify gender features of the human body, thereby determining which standard template should be used. The accuracy of this algorithm makes the 3D reconstruction process more accurate and reliable, as it can adjust model parameters based on individual gender differences. SMPL-X and the related gender classification algorithm have brought important advancements to the field of 3D human body reconstruction. By accurately selecting standard templates, they improve the accuracy of reconstruction and have broad potential in various application fields. These technologies continue to drive the development of the 3D reconstruction field, providing us with more realistic and accurate human body models.

Keywords: gender classification, joint detection, SMPL-X, 3D reconstruction

Procedia PDF Downloads 66
21495 Study on the Process of Detumbling Space Target by Laser

Authors: Zhang Pinliang, Chen Chuan, Song Guangming, Wu Qiang, Gong Zizheng, Li Ming

Abstract:

The active removal of space debris and asteroid defense are important issues in human space activities. Both need a detumbling process, for almost all space debris and asteroids are in a rotating state, and it is hard and dangerous to capture or remove a target with a relatively high tumbling rate. It is therefore necessary to find a method to reduce the angular rate first. The laser ablation method is an efficient way to tackle this detumbling problem, as it is a contactless technique that can work at a safe distance. In existing research, a laser rotational control strategy based on estimating the instantaneous angular velocity of the target has been presented, but its calculation of the control torque produced by the laser, which is very important in the detumbling operation, is not accurate enough: the method used is only suitable for planar or regularly shaped targets and does not consider the influence of irregular shape and spot size. In this paper, based on a triangulated reconstruction of the target surface, we propose a new method to calculate the impulse on an irregularly shaped target under both covering irradiation and spot irradiation by the laser, and we verify its accuracy by theoretical formula calculation and an impulse measurement experiment. We then use it to study the process of detumbling a cylinder and an asteroid by laser. The results show that the new method is universally practical and has high precision; it would take more than 13.9 hours to stop the rotation of Bennu with 1 × 10⁵ kJ of laser pulse energy; and the speed of the detumbling process depends on the distance between the spot and the centroid of the target, for which an optimal value can be found in every particular case.
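
A simplified numpy sketch of the torque computation over a triangulated surface: each illuminated facet receives an impulse along its inward normal, proportional to its area and the deposited fluence, and contributes r × J to the total angular impulse about the centroid. The momentum coupling coefficient and the geometry are hypothetical; the paper's method additionally handles covering vs. spot irradiation and experimental calibration.

```python
import numpy as np

centroid = np.zeros(3)
facet_centers = np.array([[1.0, 0.2, 0.0], [0.9, -0.3, 0.1]])   # illuminated facets
facet_normals = np.array([[-1.0, 0.0, 0.0], [-0.9, 0.1, 0.0]])
facet_normals /= np.linalg.norm(facet_normals, axis=1, keepdims=True)
facet_areas = np.array([0.02, 0.015])        # m^2

c_m, fluence = 5e-5, 1e4                     # N*s/J coupling, J/m^2 per pulse (assumed)
impulses = (c_m * fluence * facet_areas)[:, None] * facet_normals

# angular impulse per pulse about the centroid (N*m*s)
torque_impulse = np.cross(facet_centers - centroid, impulses).sum(axis=0)
print(torque_impulse)
```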

Keywords: detumbling, laser ablation drive, space target, space debris removal

Procedia PDF Downloads 79
21494 Methodologies for Crack Initiation in Welded Joints Applied to Inspection Planning

Authors: Guang Zou, Kian Banisoleiman, Arturo González

Abstract:

Crack initiation and propagation threaten the structural integrity of welded joints, and inspections are normally scheduled based on crack propagation models. However, the approach based on crack propagation models may not be applicable to some high-quality welded joints, because the initial flaws in them may be so small that it takes a long time for the flaws to develop to a detectable size. This raises a concern regarding the inspection planning of high-quality welded joints, as there is no generally accepted approach for modeling the whole fatigue process including the crack initiation period. In order to address the issue, this paper reviews treatment methods for the crack initiation period and initial crack size in crack propagation models applied to inspection planning. Generally, there are four approaches: 1) neglecting the crack initiation period and fitting a probabilistic distribution for initial crack size based on statistical data; 2) extrapolating the crack propagation stage to a very small fictitious initial crack size, so that the whole fatigue process can be modeled by crack propagation models; 3) assuming a fixed detectable initial crack size and fitting a probabilistic distribution for crack initiation time based on specimen tests; and 4) modeling the crack initiation and propagation stages separately using small crack growth theories and the Paris law or similar models. The conclusion is that, in view of the trade-off between accuracy and computational effort, calibration of a small fictitious initial crack size to S-N curves is the most efficient approach.

Keywords: crack initiation, fatigue reliability, inspection planning, welded joints

Procedia PDF Downloads 350
21493 Geometric Contrast of a 3D Model Obtained by Means of Digital Photogrammetry with a Quasimetric Camera on UAV vs. Classical Methods

Authors: Julio Manuel de Luis Ruiz, Javier Sedano Cibrián, Rubén Pérez Álvarez, Raúl Pereda García, Cristina Diego Soroa

Abstract:

Nowadays, the use of drones has been extended to practically every human activity. One of the main applications is in the surveying field. In this regard, software programs that process the images captured by the drone's sensor almost automatically have been developed and commercialized, but they only allow contrasting the results through control points. This work contrasts a 3D model obtained from a flight with a drone and a non-metric camera (chosen for its low cost) against a second model obtained by means of the historically endorsed classical methods. In addition, the contrast is developed over a territory with significant unevenness, so as to test the photogrammetric model, considering that drone photogrammetry encounters more accuracy difficulties in this kind of situation. Distances, heights, surfaces and volumes are measured on the basis of the generated 3D models, and the results are contrasted. The differences are about 0.2% for distances and heights, 0.3% for surfaces and 0.6% for volumes. Although these differences are not significant, they do not match the order of magnitude claimed by vendors.

Keywords: accuracy, classical topography, three-dimensional model, photogrammetry, UAV

Procedia PDF Downloads 130