Search results for: high relative accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24272

23012 Investigating Data Normalization Techniques in Swarm Intelligence Forecasting for Energy Commodity Spot Price

Authors: Yuhanis Yusof, Zuriani Mustaffa, Siti Sakira Kamaruddin

Abstract:

Data mining is a fundamental technique in identifying patterns from large data sets. The extracted facts and patterns contribute to various domains such as marketing, forecasting, and medicine. Prior to mining, data are consolidated so that the resulting mining process may be more efficient. This study investigates the effect of different data normalization techniques, namely Min-max, Z-score, and decimal scaling, on swarm-based forecasting models. The swarm intelligence algorithms employed include the Grey Wolf Optimizer (GWO) and the Artificial Bee Colony (ABC). Forecasting models are then developed to predict the daily spot price of crude oil and gasoline. Results show that GWO works better with the Z-score normalization technique, while ABC produces better accuracy with Min-max. Nevertheless, GWO is superior to ABC, as its model generates the highest accuracy for both crude oil and gasoline prices. Such a result indicates that GWO is a promising competitor in the family of swarm intelligence algorithms.
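
For reference, the three normalization techniques compared here can each be written in a few lines; the minimal sketch below (not the authors' code) assumes a one-dimensional NumPy array of daily spot prices.

    import numpy as np

    def min_max(x, new_min=0.0, new_max=1.0):
        # rescale values linearly into [new_min, new_max]
        return (x - x.min()) / (x.max() - x.min()) * (new_max - new_min) + new_min

    def z_score(x):
        # center on the mean and scale by the standard deviation
        return (x - x.mean()) / x.std()

    def decimal_scaling(x):
        # divide by 10^j, with j the smallest integer such that max(|x|/10^j) < 1
        j = int(np.ceil(np.log10(np.abs(x).max())))
        return x / (10 ** j)

    prices = np.array([95.3, 101.2, 87.6, 110.4, 99.8])  # hypothetical daily spot prices
    print(min_max(prices), z_score(prices), decimal_scaling(prices), sep="\n")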

Keywords: artificial bee colony, data normalization, forecasting, Grey Wolf optimizer

Procedia PDF Downloads 478
23011 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements

Authors: Alexander Buhr, Klaus Ehrenfried

Abstract:

Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower than at full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been carried out in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness as well as the momentum thickness and the form factor are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the train wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by using larger roughness elements, especially when they are applied at heights close to the measuring plane. The roughness elements also cause high fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
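
The integral boundary-layer quantities mentioned above follow directly from the measured velocity profile; the sketch below is a generic illustration of their evaluation, assuming a wall-normal coordinate y, a velocity profile u from the PIV plane and a free-stream velocity u_e (all placeholder values, not the TSG measurements).

    import numpy as np

    def boundary_layer_parameters(y, u, u_e):
        """Integral parameters from a measured profile u(y); y in metres, u in m/s."""
        eta = u / u_e
        delta_star = np.trapz(1.0 - eta, y)        # displacement thickness
        theta = np.trapz(eta * (1.0 - eta), y)     # momentum thickness
        H = delta_star / theta                     # form (shape) factor
        delta_99 = y[np.argmax(eta >= 0.99)]       # boundary layer thickness (99% criterion)
        return delta_99, delta_star, theta, H

    # hypothetical profile resembling a turbulent 1/7th-power law
    y = np.linspace(0.0, 0.05, 200)
    u_e = 32.0
    u = u_e * np.clip(y / 0.04, 0, 1) ** (1.0 / 7.0)
    print(boundary_layer_parameters(y, u, u_e))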

Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements

Procedia PDF Downloads 307
23010 Ge as a Channel Material in P-Type MOSFETs

Authors: S. Slimani, B. Djellouli

Abstract:

Novel materials and innovative device structures have become necessary for the future of CMOS. Ge is a very promising high-mobility material and is being considered to replace Si in the channel to achieve higher drive currents and switching speeds. Among the various approaches to circumvent the scaling limits and to benchmark the performance of nanoscale MOSFETs with different channel materials, the optimized structure is simulated within nextnano in order to highlight the quantum effects on DG MOSFETs when Si is replaced by Ge and SiO2 is replaced by ZrO2 and HfO2 as the gate dielectric. The results show that the Ge MOSFET has the highest mobility and that high-permittivity oxides serve to maintain a high drive current. The simulations show significant improvements compared with a DG MOSFET using a SiO2 gate dielectric and a Si channel.

Keywords: high mobility, high-k, quantum effects, SOI-DGMOSFET

Procedia PDF Downloads 367
23009 Performance Evaluation of Solid Lubricant Characteristics at Different Sliding Conditions

Authors: Suresh Kumar Reddy Narala, Rakesh Kumar Gunda

Abstract:

In modern industry, mechanical parts are subjected to friction and wear, leading to heat generation, which affects the reliability, life and power consumption of machinery. To overcome the tribological losses due to friction and wear, a significant portion of lubricant with highly viscous properties allows very smooth relative motion between two sliding surfaces. Advancement in modern tribology has facilitated the use of solid lubricants in various industrial applications. Solid lubricant additives that form a viscous thin film between the sliding surfaces can adequately wet and adhere to the work surface. In the present investigation, an attempt has been made to investigate and evaluate the tribological characteristics of various solid lubricants such as MoS2, graphite, and boric acid under different sliding conditions. The base oil used in this study was SAE 40 oil with a viscosity of 220 cSt at 40°C. The tribological properties were measured on a pin-on-disc tribometer. An experimental set-up has been developed for effective supply of solid lubricants to the pin-disc interface zone. The results obtained from the experiments show that the friction coefficient increases with increasing applied load for all the considered environments. The tribological properties with the MoS2 solid lubricant exhibit a larger load-carrying capacity than those of graphite and boric acid. The present research work also contributes to the understanding of the behavior of the film thickness distribution of the solid lubricant using the potential contact technique under different sliding conditions. The results presented in this research work are expected to form a scientific basis for selecting the best solid lubricant in various industrial applications for possible minimization of friction and wear.

Keywords: friction, wear, temperature, solid lubricant

Procedia PDF Downloads 348
23008 A Trend Based Forecasting Framework of the ATA Method and Its Performance on the M3-Competition Data

Authors: H. Taylan Selamlar, I. Yavuz, G. Yapar

Abstract:

It is difficult to make predictions, especially about the future, and making accurate predictions is not always easy. However, better predictions remain the foundation of all science; therefore, the development of accurate, robust and reliable forecasting methods is very important. Numerous forecasting methods have been proposed and studied in the literature. Two major forecasting methods still dominate, Box-Jenkins ARIMA and Exponential Smoothing (ES), and new methods continue to be derived from or inspired by them. After more than 50 years of widespread use, exponential smoothing is still one of the most practically relevant forecasting methods available due to its simplicity, robustness and accuracy as an automatic forecasting procedure, especially in the famous M-Competitions. Despite its success and widespread use in many areas, ES models have some shortcomings that negatively affect the accuracy of forecasts. Therefore, a new forecasting method, called the ATA method, is proposed in this study to cope with these shortcomings. This new method is obtained from traditional ES models by modifying the smoothing parameters; therefore, both methods have similar structural forms, and ATA can be easily adapted to all of the individual ES models. However, ATA has many advantages due to its innovative new weighting scheme. In this paper, the focus is on modeling the trend component and handling seasonality patterns by utilizing classical decomposition. Therefore, the ATA method is expanded to higher-order ES methods for additive, multiplicative, additive damped and multiplicative damped trend components. The proposed models are called ATA trended models, and their predictive performances are compared to their counterpart ES models on the M3 competition data set, since it is still the most recent and comprehensive time-series data collection available. It is shown that the models outperform their counterparts in almost all settings, and when a model selection is carried out amongst these trended models, ATA outperforms all of the competitors in the M3-competition for both short-term and long-term forecasting horizons when the models' forecasting accuracies are compared based on popular error metrics.
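
To make the contrast with classical exponential smoothing concrete, the sketch below places simple exponential smoothing next to a level smoother with the time-varying weight p/t, which is one published formulation of the ATA idea; treat it as an illustrative assumption rather than the authors' exact trended or damped models.

    import numpy as np

    def ses(x, alpha):
        # classical simple exponential smoothing with a constant weight alpha
        s = np.empty_like(x, dtype=float)
        s[0] = x[0]
        for t in range(1, len(x)):
            s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
        return s

    def ata_level(x, p=1):
        # ATA-style level smoothing with the time-varying weight p/t
        # (illustrative formulation; the paper's trended variants build on this)
        s = np.empty_like(x, dtype=float)
        s[0] = x[0]
        for t in range(1, len(x)):
            w = min(p / (t + 1), 1.0)   # t+1 because Python indexing starts at 0
            s[t] = w * x[t] + (1 - w) * s[t - 1]
        return s

    series = np.array([112., 118., 132., 129., 121., 135., 148., 148., 136., 119.])
    print(ses(series, alpha=0.3)[-1], ata_level(series, p=2)[-1])

Here p plays the role of alpha, but its influence on new observations decays as 1/t, which is the weighting-scheme difference the abstract refers to.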

Keywords: accuracy, exponential smoothing, forecasting, initial value

Procedia PDF Downloads 177
23007 Comparative Analysis of Spectral Estimation Methods for Brain-Computer Interfaces

Authors: Rafik Djemili, Hocine Bourouba, M. C. Amara Korba

Abstract:

In this paper, we present a method to classify EEG signals for Brain-Computer Interfaces (BCI). EEG signals are first processed by means of spectral estimation methods to derive reliable features before the classification step. The spectral estimation methods used are the standard periodogram and the periodogram calculated by the Welch method; both methods are compared with Logarithm of Band Power (logBP) features. In the proposed method, we apply Linear Discriminant Analysis (LDA) followed by a Support Vector Machine (SVM). The classification accuracy reached is as high as 85%, which demonstrates the effectiveness of spectral methods for the classification of EEG signals in BCI.
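
A minimal sketch of the proposed processing chain (Welch periodogram features, an LDA projection, then an SVM) could look as follows; the sampling rate, channel count and synthetic trials are placeholders, not the BCI data set used in the paper.

    import numpy as np
    from scipy.signal import welch
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    fs = 250                                          # hypothetical sampling rate (Hz)
    trials = rng.standard_normal((80, 2, 1000))       # 80 trials, 2 EEG channels, 4 s each
    labels = rng.integers(0, 2, size=80)              # two motor-imagery classes (synthetic)

    def welch_features(trial):
        # power spectral density per channel, concatenated into one feature vector
        feats = [welch(ch, fs=fs, nperseg=256)[1] for ch in trial]
        return np.log(np.concatenate(feats))          # log power, similar in spirit to logBP

    X = np.array([welch_features(t) for t in trials])
    clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC(kernel="rbf"))
    # near chance level on random data; real EEG features are needed for the reported 85%
    print(cross_val_score(clf, X, labels, cv=5).mean())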

Keywords: brain-computer interface, motor imagery, electroencephalogram, linear discriminant analysis, support vector machine

Procedia PDF Downloads 499
23006 A Supervised Approach for Word Sense Disambiguation Based on Arabic Diacritics

Authors: Alaa Alrakaf, Sk. Md. Mizanur Rahman

Abstract:

Over the last two decades, Arabic natural language processing (ANLP) has become increasingly important. One of the key issues related to ANLP is ambiguity. In the Arabic language, different pronunciations of one word may carry different meanings. Furthermore, ambiguity also has an impact on the effectiveness and efficiency of Machine Translation (MT). The issue of ambiguity has limited the usefulness and accuracy of translation from Arabic to English. The lack of Arabic resources makes the ambiguity problem more complicated. Additionally, the orthographic level of representation cannot specify the exact meaning of the word. This paper looks at the diacritics of the Arabic language and uses them to disambiguate words. The proposed word sense disambiguation approach uses a diacritizer application to diacritize Arabic text and then finds the most accurate sense of an ambiguous word using a Naïve Bayes classifier. Our experimental study shows that using Arabic diacritics with a Naïve Bayes classifier enhances the accuracy of choosing the appropriate sense by 23% and also decreases the ambiguity in machine translation.
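
The classification step can be illustrated with a small sketch of a multinomial Naïve Bayes model trained on context words; for readability the toy training pairs below are in English, whereas the paper works on diacritized Arabic forms, so everything here is a placeholder.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # toy contexts for two senses of an ambiguous word form
    contexts = [
        "the bank raised the interest rate",
        "she deposited money at the bank",
        "we sat on the bank of the river",
        "fish gathered near the river bank",
    ]
    senses = ["finance", "finance", "river", "river"]

    wsd = make_pipeline(CountVectorizer(), MultinomialNB())
    wsd.fit(contexts, senses)
    print(wsd.predict(["the bank approved the loan"]))   # expected: ['finance']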

Keywords: Arabic natural language processing, machine learning, machine translation, Naïve Bayes classifier, word sense disambiguation

Procedia PDF Downloads 359
23005 A New Mathematical Method for Heart Attack Forecasting

Authors: Razi Khalafi

Abstract:

Myocardial Infarction (MI) or Acute Myocardial Infarction (AMI), commonly known as a heart attack, occurs when blood flow to part of the heart stops, causing damage to the heart muscle. An ECG can often show evidence of a previous heart attack or one that is in progress. The patterns on the ECG may indicate which part of the heart has been damaged, as well as the extent of the damage. In chaos theory, the correlation dimension is a measure of the dimensionality of the space occupied by a set of random points, often referred to as a type of fractal dimension. In this research, by considering the ECG signal as a random walk, we work on forecasting an oncoming heart attack by analysing the ECG signals using the correlation dimension. In order to test the model, a set of ECG signals from patients before and after a heart attack was used, and the strength of the model in forecasting the behaviour of these signals was checked. Results show that this methodology can forecast the ECG and, accordingly, a heart attack with high accuracy.
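
The correlation dimension is typically estimated with the Grassberger-Procaccia approach; a compact sketch on a delay-embedded signal is shown below, with a random-walk surrogate and placeholder embedding parameters standing in for the actual ECG records.

    import numpy as np
    from scipy.spatial.distance import pdist

    def delay_embed(x, dim=5, tau=4):
        # build delay vectors [x(i), x(i+tau), ..., x(i+(dim-1)*tau)]
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

    def correlation_dimension(x, dim=5, tau=4):
        # Grassberger-Procaccia estimate: slope of log C(r) versus log r
        d = pdist(delay_embed(x, dim, tau))               # all pairwise distances
        radii = np.logspace(np.log10(d.min() * 1.5), np.log10(d.max() * 0.5), 10)
        C = np.array([np.mean(d < r) for r in radii])     # correlation sum C(r)
        return np.polyfit(np.log(radii), np.log(C), 1)[0]

    rng = np.random.default_rng(1)
    signal = np.cumsum(rng.standard_normal(800))          # random-walk surrogate for an ECG trace
    print(correlation_dimension(signal))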

Keywords: heart attack, ECG, random walk, correlation dimension, forecasting

Procedia PDF Downloads 507
23004 Resistivity Tomography Optimization Based on Parallel Electrode Linear Back Projection Algorithm

Authors: Yiwei Huang, Chunyu Zhao, Jingjing Ding

Abstract:

Electrical Resistivity Tomography has been widely used in medicine and geology, such as in the imaging of lung impedance and the analysis of soil impedance. Linear Back Projection is the core algorithm of Electrical Resistivity Tomography, but traditional Linear Back Projection cannot make full use of the information in the electric field. In this paper, an imaging method of Parallel Electrode Linear Back Projection for Electrical Resistivity Tomography is proposed. By changing the connection mode of the electrodes, it generates an electric field distribution that is not linearly related to that of traditional Linear Back Projection, captures new information, and improves the imaging accuracy without increasing the number of electrodes. The simulation results show that the accuracy of the image obtained by the inverse operation of Parallel Electrode Linear Back Projection can be improved by about 20%.
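
Once a sensitivity (weight) matrix is available from the forward model, the back-projection step itself is short; the sketch below shows the generic LBP reconstruction that the proposed parallel-electrode variant builds on, with a random stand-in for the sensitivity matrix that would in practice come from finite element simulation of the chosen electrode connection mode.

    import numpy as np

    def linear_back_projection(measurements, sensitivity):
        """Generic LBP: project boundary measurements back through the sensitivity map.

        measurements: (M,) relative changes for M electrode-pair excitations
        sensitivity:  (M, P) weight of each of the P image pixels for each measurement
        """
        image = sensitivity.T @ measurements          # weighted back projection
        norm = sensitivity.sum(axis=0)                # per-pixel normalisation
        return image / np.where(norm == 0, 1.0, norm)

    rng = np.random.default_rng(0)
    M, P = 104, 812                                   # e.g. a 16-electrode protocol, coarse pixel grid
    S = rng.random((M, P))                            # placeholder for an FEM-derived sensitivity matrix
    b = rng.random(M)                                 # placeholder boundary measurements
    print(linear_back_projection(b, S).shape)         # -> (812,)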

Keywords: electrical resistivity tomography, finite element simulation, image optimization, parallel electrode linear back projection

Procedia PDF Downloads 154
23003 Impact of Marine Hydrodynamics and Coastal Morphology on Changes in Mangrove Forests (Case Study: West of Strait of Hormuz, Iran)

Authors: Fatemeh Parhizkar, Mojtaba Yamani, Abdolla Behboodi, Masoomeh Hashemi

Abstract:

Mangrove forests are natural and valuable gifts that exist in some parts of the world, including Iran. Given the threats faced by these forests and their declining area all over the world, as well as in Iran, it is very necessary to manage and monitor them. The current study aimed to investigate the changes in mangrove forests and the relationship between these changes and marine hydrodynamics and coastal morphology in the area between Qeshm Island and the west coast of Hormozgan province (i.e., the coastline between the Mehran river and Bandar-e Pol port) over a 49-year period. After preprocessing and classifying satellite images using the SVM, MLC, and ANN classifiers and evaluating the accuracy of the maps, the SVM approach with the highest accuracy (Kappa coefficient of 0.97 and overall accuracy of 98%) was selected for preparing the classification map of all images. The results indicate that from 1972 to 1987 the area of these forests experienced a declining trend, and in the following years their expansion began. These forests include the mangrove forests of the Khurkhuran wetland, the Muriz Deraz Estuary, the Haft Baram Estuary, the mangrove forest south of Laft Port, and the mangrove forests between the Tabl Pier, Maleki Village, and Gevarzin Village. The marine hydrodynamic and geomorphological characteristics of the region, such as the average intertidal zone, sediment data, the freshwater inlet of the Mehran river, wave stability and calmness, and topography and slope, as well as mangrove conservation projects, make further expansion of mangrove forests in this area possible. By providing significant and up-to-date information on the development and decline of mangrove forests in different parts of the coast, this study can significantly contribute to taking measures for the conservation and restoration of mangrove forests.
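
The classification and accuracy-assessment workflow described here follows a standard supervised pattern; the minimal sketch below uses synthetic pixel spectra rather than the satellite scenes of the study.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import cohen_kappa_score, accuracy_score

    rng = np.random.default_rng(0)
    # synthetic "pixels": 4 spectral bands, 3 land-cover classes (e.g. mangrove, water, bare land)
    X = np.vstack([rng.normal(loc=m, scale=0.3, size=(300, 4)) for m in (0.0, 1.5, 3.0)])
    y = np.repeat([0, 1, 2], 300)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
    svm = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)
    pred = svm.predict(X_te)
    print("overall accuracy:", accuracy_score(y_te, pred))
    print("kappa:", cohen_kappa_score(y_te, pred))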

Keywords: mangrove forests, marine hydrodynamics, coastal morphology, west of strait of Hormuz, Iran

Procedia PDF Downloads 97
23002 Investigation the Polluting Effect of Heavy Elements on Underground Water in Behbahan Plain, South West Zagros

Authors: Zohreh Marbooti, Rezvan Khavari

Abstract:

Groundwater, as an essential part of natural resources, is an important issue in environmental engineering, so its preservation and purification can have critical value for any community. This paper investigates the concentrations of Pb, Cd, As and Se in groundwater in Behbahan (a city in southwest Iran). For this purpose, a group of 30 wells was studied to examine the concentrations of Pb, Cd, As and Se, and also to determine pH, EC, TDS, temperature and the ions HCO3-, SO42-, Cl-, Na+, Mg2+, Ca2+ and K+ for the wells. The results of the analyses show that the concentrations of Pb, As and Cd in 33, 13 and 56 percent of the wells, respectively, and of Se in all the samples, were greater than the normal range of the WHO. Since there is a low correlation between Pb and the major ions (HCO3-, SO42-, Cl-, Na+, Mg2+, Ca2+, K+), it can be inferred that the excess Pb is caused by human contamination. The relatively strong correlation between Se and the ions showed that Se derives from gypsum and dolomite. The strong correlation between As and the major cations and anions implies that As can originate from the dissolution of evaporite minerals in the zone. The high cadmium concentration in urban sewage water is due to wastewater from small industries, workshops and mills.

Keywords: heavy elements, underground water, pollution, waste water

Procedia PDF Downloads 561
23001 Modeling of Geotechnical Data Using GIS and Matlab for Eastern Ahmedabad City, Gujarat

Authors: Rahul Patel, S. P. Dave, M. V Shah

Abstract:

Ahmedabad is a rapidly growing city in western India that is experiencing significant urbanization and industrialization. With projections indicating that it will become a metropolitan city in the near future, various construction activities are taking place, making soil testing a crucial requirement before construction can commence. To achieve this, construction companies and contractors need to periodically conduct soil testing. This study focuses on the process of creating a spatial database that is digitally formatted and integrated with geotechnical data and a Geographic Information System (GIS). Building a comprehensive geotechnical Geo-database involves three essential steps. Firstly, borehole data is collected from reputable sources. Secondly, the accuracy and redundancy of the data are verified. Finally, the geotechnical information is standardized and organized for integration into the database. Once the Geo-database is complete, it is integrated with GIS. This integration allows users to visualize, analyze, and interpret geotechnical information spatially. Using a Topographic to Raster interpolation process in GIS, estimated values are assigned to all locations based on sampled geotechnical data values. The study area was contoured for SPT N-Values, Soil Classification, Φ-Values, and Bearing Capacity (T/m2). Various interpolation techniques were cross-validated to ensure information accuracy. The GIS map generated by this study enables the calculation of SPT N-Values, Φ-Values, and bearing capacities for different footing widths and various depths. This approach highlights the potential of GIS in providing an efficient solution to complex phenomena that would otherwise be tedious to achieve through other means. Not only does GIS offer greater accuracy, but it also generates valuable information that can be used as input for correlation analysis. Furthermore, this system serves as a decision support tool for geotechnical engineers. The information generated by this study can be utilized by engineers to make informed decisions during construction activities. For instance, they can use the data to optimize foundation designs and improve site selection. In conclusion, the rapid growth experienced by Ahmedabad requires extensive construction activities, necessitating soil testing. This study focused on the process of creating a comprehensive geotechnical database integrated with GIS. The database was developed by collecting borehole data from reputable sources, verifying its accuracy and redundancy, and organizing the information for integration. The GIS map generated by this study is an efficient solution that offers greater accuracy and generates valuable information that can be used as input for correlation analysis. It also serves as a decision support tool for geotechnical engineers, allowing them to make informed decisions during construction activities.

Keywords: arcGIS, borehole data, geographic information system (GIS), geo-database, interpolation, SPT N-value, soil classification, φ-value, bearing capacity

Procedia PDF Downloads 70
23000 Energy Consumption Forecast Procedure for an Industrial Facility

Authors: Tatyana Aleksandrovna Barbasova, Lev Sergeevich Kazarinov, Olga Valerevna Kolesnikova, Aleksandra Aleksandrovna Filimonova

Abstract:

We consider forecasting of energy consumption by individual production areas of a large industrial facility as well as by the facility itself. For the production areas, the forecast is made based on empirical dependences of the specific energy consumption on the production output. For the facility itself, the task of minimizing the energy consumption forecasting error is implemented by reconciling the facility's actual energy consumption values evaluated with the metering device against the total design energy consumption of the separate production areas of the facility. The suggested procedure was tested on the actual data of core product output and energy consumption of a group of workshops and power plants of a large iron and steel facility. Test results show that implementation of this procedure gives a mean energy consumption forecasting error for winter 2014 of 0.11% for the group of workshops and 0.137% for the power plants.
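
The two-level idea, an empirical specific-consumption model per production area reconciled against the facility's metered total, can be sketched roughly as follows; the functional form of the empirical dependence, the coefficients and the outputs are invented placeholders.

    # empirical specific energy consumption per area: e = a + b / output (placeholder fits)
    area_coeffs = {"rolling_mill": (0.42, 120.0), "blast_furnace": (0.55, 310.0)}

    def area_forecast(area, planned_output):
        a, b = area_coeffs[area]
        specific = a + b / planned_output          # MWh per tonne, empirical dependence
        return specific * planned_output           # MWh for the planning period

    planned = {"rolling_mill": 9000.0, "blast_furnace": 15000.0}
    design_total = sum(area_forecast(k, v) for k, v in planned.items())

    metered_total = 12650.0                        # facility-level metering for the same period
    scale = metered_total / design_total           # reconciliation factor
    adjusted = {k: scale * area_forecast(k, v) for k, v in planned.items()}
    print(design_total, adjusted)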

Keywords: energy consumption, energy consumption forecasting error, energy efficiency, forecasting accuracy, forecasting

Procedia PDF Downloads 446
22999 AI/ML Atmospheric Parameters Retrieval Using the “Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN)”

Authors: Thomas Monahan, Nicolas Gorius, Thanh Nguyen

Abstract:

Exoplanet atmospheric parameter retrieval is a complex, computationally intensive, inverse modeling problem in which an exoplanet's atmospheric composition is extracted from an observed spectrum. Traditional Bayesian sampling methods require extensive time and computation, involving algorithms that compare large numbers of known atmospheric models to the input spectral data. Runtimes are directly proportional to the number of parameters under consideration. These increased power and runtime requirements are difficult to accommodate in space missions, where model size, speed, and power consumption are of particular importance. The use of traditional Bayesian sampling methods therefore compromises model complexity or sampling accuracy. The Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN) is a deep convolutional generative adversarial network that improves on the previous model's speed and accuracy. We demonstrate the efficacy of artificial intelligence to quickly and reliably predict atmospheric parameters and present it as a viable alternative to slow and computationally heavy Bayesian methods. In addition to its broad applicability across instruments and planetary types, ARcGAN has been designed to function on low-power application-specific integrated circuits. The application of edge computing to atmospheric retrievals allows for real or near-real-time quantification of atmospheric constituents at the instrument level. Additionally, edge computing provides both high-performance and power-efficient computing for AI applications, both of which are critical for space missions. With the edge computing chip implementation, ARcGAN serves as a strong basis for the development of a similar machine-learning algorithm to reduce the downlinked data volume from the Compact Ultraviolet to Visible Imaging Spectrometer (CUVIS) onboard the DAVINCI mission to Venus.

Keywords: deep learning, generative adversarial network, edge computing, atmospheric parameters retrieval

Procedia PDF Downloads 171
22998 Road Vehicle Recognition Using Magnetic Sensing Feature Extraction and Classification

Authors: Xiao Chen, Xiaoying Kong, Min Xu

Abstract:

This paper presents a road vehicle detection approach for intelligent transportation systems. The approach mainly uses a low-cost magnetic sensor and an associated data collection system to collect magnetic signals. This system can measure changes in the magnetic field, and it can also detect and count vehicles. We extend Mel Frequency Cepstral Coefficients to analyze vehicle magnetic signals. Vehicle type features are extracted using representations of the cepstrum, frame energy, and gap cepstrum of the magnetic signals. We design a 2-dimensional map algorithm using Vector Quantization to classify vehicle magnetic features into four typical types of vehicles in Australian suburbs: sedan, van, truck, and bus. Experimental results show that our approach achieves a high level of accuracy for vehicle detection and classification.
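
A rough sketch of the feature and classification chain (frame energy plus low-order cepstral coefficients of the magnetic signal, followed by per-class vector-quantization codebooks trained with k-means) is given below; the signals and class set-up are synthetic placeholders rather than the roadside recordings used in the paper.

    import numpy as np
    from sklearn.cluster import KMeans

    def features(sig, frame=64):
        # per-frame log energy and a few low-order real-cepstrum coefficients
        frames = sig[: len(sig) // frame * frame].reshape(-1, frame)
        energy = np.log(np.sum(frames ** 2, axis=1, keepdims=True) + 1e-12)
        spectrum = np.abs(np.fft.rfft(frames, axis=1)) + 1e-12
        cepstrum = np.fft.irfft(np.log(spectrum), axis=1)[:, :8]
        return np.hstack([energy, cepstrum])

    def train_codebooks(signals_by_class, k=4):
        # one k-means codebook (vector quantization) per vehicle class
        return {c: KMeans(n_clusters=k, n_init=10, random_state=0).fit(
                    np.vstack([features(s) for s in sigs]))
                for c, sigs in signals_by_class.items()}

    def classify(sig, codebooks):
        # assign to the class whose codebook gives the smallest mean quantization error
        f = features(sig)
        dist = {c: np.mean(np.min(np.linalg.norm(
                    f[:, None, :] - cb.cluster_centers_[None, :, :], axis=-1), axis=1))
                for c, cb in codebooks.items()}
        return min(dist, key=dist.get)

    rng = np.random.default_rng(0)
    train = {c: [rng.normal(scale=s, size=1024) for _ in range(5)]
             for c, s in [("sedan", 0.5), ("van", 1.0), ("truck", 2.0), ("bus", 3.0)]}
    books = train_codebooks(train)
    print(classify(rng.normal(scale=2.1, size=1024), books))   # expected near "truck"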

Keywords: vehicle classification, signal processing, road traffic model, magnetic sensing

Procedia PDF Downloads 320
22997 Incentive Policies to Promote Green Infrastructure in Urban Jordan

Authors: Zayed Freah Zeadat

Abstract:

The wellbeing of urban dwellers is strongly associated with the quality and quantity of green infrastructure. Nevertheless, urban green infrastructure is still lagging in many Arab cities, and Jordan is no exception. The capital city of Jordan, Amman, is becoming more densely urbanized, with limited green spaces. The unplanned urban growth in Amman has caused several environmental problems such as urban heat islands, air pollution, and lack of green spaces. This study aims to investigate the most suitable drivers to leverage the implementation of urban green infrastructure in Jordan through qualitative and quantitative analysis. The qualitative research includes an extensive literature review to discuss the most common drivers used internationally to promote urban green infrastructure implementation. The quantitative study employs a questionnaire survey to rank the suitability of each driver. Consultants, contractors, and policymakers were invited to fill in the research questionnaire according to their judgments and opinions. The Relative Importance Index has been used to calculate the weighted average of all drivers, and the Kruskal-Wallis test to check the degree of agreement among groups. This study finds that research participants agreed that indirect financial incentives (i.e., tax reductions, reduction in stormwater utility fees, reduction of interest rates, density bonuses, etc.) are the most effective incentive policy, whilst granting a sustainability certificate is the least effective driver for ensuring the widespread implementation of UGI elements in Jordan.

Keywords: urban green infrastructure, relative importance index, sustainable urban development, urban Jordan

Procedia PDF Downloads 155
22996 Temporal Effects on Chemical Composition of Treated Wastewater and Borehole Water Used for Irrigation in Limpopo Province, South Africa

Authors: Pholosho M. Kgopa, Phatu W. Mashela, Alen Manyevere

Abstract:

The increasing incidence of drought spells in most of Sub-Saharan Africa calls for using alternative sources of water for irrigation in arid and semi-arid regions. A study was conducted to investigate the chemical composition of borehole and treated wastewater from different sampling sites along the disposal system at the University of Limpopo Experimental Farm (ULEF). A 4 × 5 factorial experiment, with the borehole as a reference sampling site and three other sampling sites along the wastewater disposal system, was conducted over five months. Water samples were collected at four sites, namely (a) exit from Pond 16 into the furrow, (b) entry into the night-dam, (c) exit from the night-dam to irrigated fields and (d) exit from the borehole to irrigated fields. Water samples were collected in the middle of each month from July to November 2016. Samples were analysed for pH, EC, Ca, Mg, Na, K, Al, B, Zn, Cu, Cr, Pb, Cd and As. The site × time interactions were highly significant for the Ca, Mg, Zn, Cu, Cr, Pb, Cd and As variables, but not for Na and K. Sampling site had a highly significant effect on all variables, while sampling period was not significant for K and Na. Relative to water from the borehole, the Na concentration in wastewater samples from the night-dam exit, night-dam entry and Pond 16 exit was lower by 69, 34 and 55%, respectively. Relative to borehole water, Al was higher at the wastewater sampling sites. In conclusion, both sampling site and period affected the chemical composition of the treated wastewater.

Keywords: irrigation water quality, spatial effects, temporal effects, water reuse, water scarcity

Procedia PDF Downloads 239
22995 Identity Verification Using k-NN Classifiers and Autistic Genetic Data

Authors: Fuad M. Alkoot

Abstract:

DNA data have been used in forensics for decades. However, current research looks at using DNA as a biometric identity verification modality. The goal is to improve the speed of identification. We aim at using gene data that were initially used for autism detection to find whether, and how accurately, these data can be used for identification applications. Our main goal is to find whether our data preprocessing technique yields data useful as a biometric identification tool. We experiment with using the nearest neighbor classifier to identify subjects. Results show that the optimal classification rate is achieved when the test set is corrupted by normally distributed noise with zero mean and a standard deviation of 1. The classification rate remains close to optimal at higher noise standard deviations, up to 3. This shows that the data can be used for identity verification with high accuracy using a simple classifier such as the k-nearest neighbor (k-NN).
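
The evaluation idea, corrupting the test set with zero-mean Gaussian noise of increasing standard deviation and checking the k-NN identification rate, can be sketched as follows; synthetic feature vectors stand in for the preprocessed genetic data.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_subjects, n_samples, n_features = 20, 12, 40
    # synthetic stand-in for preprocessed gene data: one cluster of samples per subject
    X = np.vstack([rng.normal(loc=rng.normal(size=n_features), scale=0.5,
                              size=(n_samples, n_features)) for _ in range(n_subjects)])
    y = np.repeat(np.arange(n_subjects), n_samples)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
    knn = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)

    for sigma in (0.0, 1.0, 2.0, 3.0):
        noisy = X_te + rng.normal(scale=sigma, size=X_te.shape) if sigma else X_te
        print(f"noise sigma={sigma}: identification rate = {knn.score(noisy, y_te):.2f}")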

Keywords: biometrics, genetic data, identity verification, k nearest neighbor

Procedia PDF Downloads 258
22994 High Performance Computing and Big Data Analytics

Authors: Branci Sarra, Branci Saadia

Abstract:

Because of the rapid growth of data, many computer science tools have been developed to process and analyze these Big Data. High-performance computing architectures have been designed to meet the processing needs of Big Data, from the standpoint of transaction processing as well as strategic and tactical analytics. The purpose of this article is to provide a historical and global perspective on the recent trend of high-performance computing architectures, especially in relation to analytics and data mining.

Keywords: high performance computing, HPC, big data, data analysis

Procedia PDF Downloads 521
22993 Reducing the Chemical Activity of Ceramic Casting Molds for Producing Decorated Glass Moulds

Authors: Nilgun Kuskonmaz

Abstract:

Ceramic molding can produce castings with fine detail, a smooth surface and a high degree of dimensional accuracy. All these features are key factors for producing decorated glass moulds. In the ceramic mold casting process, the fundamental parameters affecting the mold-metal reactions are the composition and the properties of the refractory materials used in the production of the ceramic mold. As a result of the reactions taking place between the liquid metal and the mold surface, it is not possible to achieve a perfect surface quality and fine surface detail or to maintain high-standard dimensional tolerances. The present research examines the effects of the binder composition on the structural and physical properties of the zircon ceramic mold. In the experiment, the ceramic slurry was prepared by mixing the refractory powders (zircon (ZrSiO4), mullite (3Al2O3·2SiO2) and alumina (Al2O3)) with low-alkaline silica (ethyl silicate (C8H20O4Si)) as the binder and a suitable acidic-type gelling agent. This was followed by pouring the ceramic slurry onto a silicon pattern. After gelling, the mold was removed from the silicon pattern and dried. Then, the ceramic mold was subjected to reaction sintering at 1600°C for 2 hours in the furnace. Stainless steel (SS) was cast into the sintered ceramic mold. At the end of this process, the surface quality of the decorated glass mold was observed.

Keywords: ceramic mold, stainless steel casting, decorated glass mold

Procedia PDF Downloads 263
22992 Predictors of Academic Dishonesty among Serially Frustrated Students in Ogun State, Southwest, Nigeria

Authors: Oyesoji Aremu, Taiwo Williams

Abstract:

This study examined some factors (academic self-efficacy, locus of control, motivation and gender) that could predict academic dishonesty among serially frustrated students in Ogun State, South West, Nigeria. Serially academically frustrated students are students who are unable to attain and meet academic expectations set by themselves or significant others. A sample of 250 undergraduate students selected from two faculties of a university in Ogun State, South West Nigeria, took part in the study. Multiple regression analysis was employed to determine the joint and relative contributions of the independent variables to the prediction of the dependent variable. A t-test was used to test the hypothesis on the gender difference between the independent variables (academic self-efficacy, locus of control and motivation) and the academic dishonesty of serially academically frustrated male and female students. The results of the study showed that all the independent variables jointly contributed to predicting academic dishonesty, while only academic self-efficacy and motivation made relative contributions to the dependent measure. There was no significant difference in academic self-efficacy and motivation between males and females on the academic dishonesty of the serially academically frustrated students, but locus of control showed a significant difference between male and female students on academic dishonesty. Implications of the findings for counseling are discussed in the study.

Keywords: academic dishonesty, serially frustrated students, academic self-efficacy, locus of control

Procedia PDF Downloads 255
22991 Effects of Storage Methods on Proximate Compositions of African Yam Bean (Sphenostylis stenocarpa) Seeds

Authors: Iyabode A. Kehinde, Temitope A. Oyedele, Clement G. Afolabi

Abstract:

One of the limitations of African yam bean (AYB) (Sphenostylis stenocarpa) is poor storage ability due to the adverse effect of seed-borne fungi. This study was conducted to examine the effects of storage methods on the nutritive composition of AYB seeds stored in three types of storage materials, viz. jute bags, polypropylene bags, and plastic bowls. Freshly harvested AYB seeds were stored in all the storage materials for 6 months using a 2 × 3 factorial design (2 AYB cultivars and 3 storage methods) in 3 replicates. Proximate analysis of the stored AYB seeds was carried out at 3 and 6 months after storage using standard methods. The temperature and relative humidity of the storeroom were recorded monthly with a Kestrel pocket weather tracker 4000. Seeds stored in jute bags gave the best values for crude protein (24.87%), ash (5.69%) and fat content (6.64%) but recorded the lowest values for crude fibre (2.55%), carbohydrate (50.86%) and moisture content (12.68%) at the 6th month of storage. The temperature of the storeroom decreased from 32.9ºC to 28.3ºC, while the relative humidity increased from 78% to 86%. The decreased incidence of field fungi, namely Rhizopus oryzae, Aspergillus flavus, Geotrichum candidum, Aspergillus fumigatus and Mucor meihei, was accompanied by an increase in storage fungi, viz. Aspergillus niger, Mucor hiemalis, Penicillium expansum and Penicillium atrovenetum, with prolonged storage. The study showed that, of the three storage materials, the jute bag was the most effective at preserving AYB seeds.

Keywords: storage methods, proximate composition, African Yam Bean, fungi

Procedia PDF Downloads 136
22990 Control of Base Isolated Benchmark using Combined Control Strategy with Fuzzy Algorithm Subjected to Near-Field Earthquakes

Authors: Hashem Shariatmadar, Mozhgansadat Momtazdargahi

Abstract:

The purpose of controlling a structure against earthquakes is to dissipate the earthquake input energy to the structure and reduce the plastic deformation of structural members. There are different methods of structural control against earthquakes to reduce the structural response: active, semi-active, passive and hybrid. In this paper, two different combined control systems are used: the first system comprises a base isolator and multi tuned mass dampers (BI & MTMD), and the other combination is a hybrid base isolator and multi tuned mass dampers (HBI & MTMD), for controlling an eight-story isolated benchmark steel structure. The active control force of the hybrid isolator is estimated by fuzzy logic algorithms. The influences of the combined systems on the responses of the benchmark structure under two near-field earthquakes (Newhall & El Centro) are evaluated by nonlinear dynamic time history analysis. Applications of combined control systems consisting of passive or active systems installed in parallel with base-isolation bearings have the capability of significantly reducing the response quantities (relative and absolute displacement) of base-isolated structures. Therefore, in the design and control of irregular isolated structures using the proposed control systems, structural demands (relative and absolute displacement, etc.) in each direction must be considered separately.

Keywords: base-isolated benchmark structure, multi-tuned mass dampers, hybrid isolators, near-field earthquake, fuzzy algorithm

Procedia PDF Downloads 305
22989 Development of Multi-Leaf Collimator-Based Isocenter Verification Tool Using Electronic Portal Imaging Device for Stereotactic Radiosurgery

Authors: Panatda Intanin, Sangutid Thongsawad, Chirapha Tannanonta, Todsaporn Fuangrod

Abstract:

Stereotactic radiosurgery (SRS) is a high-precision delivery technique that requires comprehensive quality assurance (QA) tests prior to treatment delivery. The isocenter of the delivery beam plays a critical role that affects treatment accuracy. The uncertainty of the isocenter is traditionally assessed using circular cone equipment, a Winston-Lutz (WL) phantom and film. This technique is considered time-consuming and highly dependent on the observer. In this work, the development of a multileaf collimator (MLC)-based isocenter verification tool using an electronic portal imaging device (EPID) is proposed and evaluated. A mechanical isocenter alignment with a ball bearing of 5 mm diameter and a circular cone of 10 mm diameter fixed to the gantry head to define the radiation field was set as the conventional WL test method. The conventional setup was compared to the proposed setup, which uses the MLC (10 x 10 mm) to define the radiation field instead of the cone. This represents a more realistic delivery field than using circular cone equipment. Acquisitions with the electronic portal imaging device (EPID) and radiographic film were performed in both experiments. The gantry angles were set as follows: 0°, 90°, 180° and 270°. A software tool was developed in-house using MATLAB/Simulink to automatically determine the centroid of the radiation field and the shadow of the WL phantom, which provides higher accuracy than manual measurement. The deviations between the centroids of the cone-based and MLC-based WL tests were quantified. Comparing film and EPID images, the deviation over all gantry angles was 0.26 ± 0.19 mm for the cone-based and 0.43 ± 0.30 mm for the MLC-based WL test. The absolute deviation between the cone-based and MLC-based WL tests was 0.59 ± 0.28 mm on EPID images and 0.14 ± 0.13 mm on film images. Therefore, MLC-based isocenter verification using the EPID presents a highly sensitive tool for SRS QA.
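
The core measurement, the distance between the centroid of the radiation field and the centroid of the ball-bearing shadow on each EPID image, reduces to a short computation; the sketch below (in Python rather than the authors' MATLAB/Simulink tool, on a synthetic image and with an assumed pixel pitch) only illustrates that step.

    import numpy as np
    from scipy.ndimage import center_of_mass

    def wl_deviation(epid, pixel_mm=0.336):
        """Deviation (mm) between field centroid and BB-shadow centroid on one EPID image."""
        field = epid > 0.5 * epid.max()                  # threshold the open MLC field
        bb = field & (epid < 0.75 * epid.max())          # darker ball-bearing shadow inside the field
        dy, dx = np.subtract(center_of_mass(field), center_of_mass(bb))
        return np.hypot(dy, dx) * pixel_mm               # pixel_mm is an assumed detector pixel pitch

    # synthetic image: a bright 10 x 10 mm field with a darker 5 mm BB shadow, slightly offset
    img = np.zeros((200, 200))
    img[85:115, 85:115] = 1.0
    yy, xx = np.mgrid[:200, :200]
    img[(yy - 101) ** 2 + (xx - 102) ** 2 < 7 ** 2] *= 0.6
    print(f"deviation = {wl_deviation(img):.2f} mm")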

Keywords: isocenter verification, quality assurance, EPID, SRS

Procedia PDF Downloads 154
22988 Risk of Mortality and Spectrum of Second Primary Malignancies in Mantle Cell Lymphoma before and after Ibrutinib Approval: A Population-Based Study

Authors: Karthik Chamari, Vasudha Rudraraju, Gaurav Chaudhari

Abstract:

Background: Mantle cell lymphoma (MCL) is one of the mature B-cell non-Hodgkin lymphomas (NHL). The course of MCL is moderately aggressive and variable, and it has a median overall survival of 8 to 10 years. Ibrutinib, a Bruton's tyrosine kinase inhibitor, was approved by the United States (US) Food and Drug Administration in November of 2013 for the treatment of MCL patients who have received at least one prior therapy. In this study, we aimed to evaluate whether there has been a change in survival and in the patterns of second primary malignancies (SPMs) among the MCL population in the US after ibrutinib approval. Methods: Using the National Cancer Institute's Surveillance, Epidemiology, and End Results (SEER)-18 data, we conducted a retrospective study of patients diagnosed with MCL (ICD-O-3 code 9673/3) between 2007 and 2018. We divided patients into two six-year cohorts, pre-ibrutinib approval (2007-2012) and post-ibrutinib approval (2013-2018), and compared relative survival rates (RSRs) and standardized incidence ratios (SIRs) of SPMs between cohorts. Results: We included 9,257 patients diagnosed with MCL between 2007 and 2018 in the SEER-18 survival and SIR registries. Of these, 4,205 (45%) patients were included in the pre-ibrutinib cohort, and 5,052 (55%) patients were included in the post-ibrutinib cohort. The median follow-up duration was 54 months (range 0 to 143 months) for the pre-ibrutinib cohort and 20 months (range 0 to 71 months) for the post-ibrutinib cohort. There was a significant difference in the five-year RSRs between the pre-ibrutinib and post-ibrutinib cohorts (57.5% vs. 62.6%, p < 0.005). Out of the 9,257 patients diagnosed with MCL, 920 developed SPMs. A higher proportion of SPMs occurred in the post-ibrutinib cohort (63%) than in the pre-ibrutinib cohort (37%). Non-hematological malignancies comprised most of all SPMs. A higher incidence of non-hematological malignancies occurred in the post-ibrutinib cohort (SIR 1.42, 95% CI 1.29 to 1.56) than in the pre-ibrutinib cohort (SIR 1.14, 95% CI 1 to 1.3). There was a statistically significant increase in the incidence of cancers of the respiratory tract (SIR 1.77, 95% CI 1.43 to 2.18) and urinary tract (SIR 1.61, 95% CI 1.23 to 2.06) when compared with other non-hematological malignancies in the post-ibrutinib cohort. Conclusions: Our study results suggest that relative survival rates have increased since the approval of ibrutinib for mantle cell lymphoma patients. Additionally, for unclear reasons, the incidence of SPMs (non-hematological malignancies), mainly cancers of the respiratory and urinary tracts, has increased in the six years following the approval of ibrutinib. Further studies should be conducted to determine the cause of these findings.

Keywords: mantle cell lymphoma, Ibrutinib, relative survival analysis, second primary cancers

Procedia PDF Downloads 185
22987 Hybrid Anomaly Detection Using Decision Tree and Support Vector Machine

Authors: Elham Serkani, Hossein Gharaee Garakani, Naser Mohammadzadeh, Elaheh Vaezpour

Abstract:

Intrusion detection systems (IDS) are the main components of network security. These systems analyze network events for intrusion detection. An IDS is designed by training on normal traffic data or attack data. Machine learning methods are among the best ways to design IDSs. In the method presented in this article, the pruning algorithm of the C5.0 decision tree is used to reduce the features of the traffic data, and the IDS is trained with the least squares support vector machine algorithm (LS-SVM). Then, the remaining features are arranged according to the predictor importance criterion, and the least important features are eliminated in order. The remaining features of this stage, which yield the highest level of accuracy in the LS-SVM, are selected as the final features. The features obtained, compared to other similar articles that have examined selected features in the least squares support vector machine model, are better in terms of accuracy, true positive rate, and false positive rate. The results are tested on the UNSW-NB15 dataset.
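
A rough stand-in for the pipeline (tree-based feature ranking followed by an SVM trained on the retained features) is sketched below; scikit-learn's CART tree and a standard RBF SVC replace the C5.0 pruning and LS-SVM used in the article, and the data are synthetic rather than UNSW-NB15.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    # synthetic stand-in for network traffic records (42 features, binary attack label)
    X, y = make_classification(n_samples=2000, n_features=42, n_informative=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # rank features with a pruned decision tree (CART here, C5.0 in the article)
    tree = DecisionTreeClassifier(ccp_alpha=0.002, random_state=0).fit(X_tr, y_tr)
    order = np.argsort(tree.feature_importances_)[::-1]

    # drop the least important features in order and keep the subset with the best SVM accuracy
    best_k, best_acc = len(order), 0.0
    for k in range(len(order), 0, -1):
        cols = order[:k]
        acc = SVC(kernel="rbf").fit(X_tr[:, cols], y_tr).score(X_te[:, cols], y_te)
        if acc >= best_acc:
            best_k, best_acc = k, acc
    print(f"kept {best_k} features, accuracy {best_acc:.3f}")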

Keywords: decision tree, feature selection, intrusion detection system, support vector machine

Procedia PDF Downloads 266
22986 Project Progress Prediction in Software Development Integrating Time Prediction Algorithms and Large Language Modeling

Authors: Dong Wu, Michael Grenn

Abstract:

Managing software projects effectively is crucial for meeting deadlines, ensuring quality, and managing resources well. Traditional methods often struggle to predict project timelines accurately due to uncertain schedules and complex data. This study addresses these challenges by combining time prediction algorithms with Large Language Models (LLMs). It makes use of real-world software project data to construct and validate a model. The model takes detailed project progress data, such as task completion dynamics, team interaction and development metrics, as its input and outputs predictions of project timelines. To evaluate the effectiveness of this model, a comprehensive methodology is employed, involving simulations and practical applications in a variety of real-world software project scenarios. This multifaceted evaluation strategy is designed to validate the model's role in enhancing forecast accuracy and elevating overall management efficiency, particularly in complex software project environments. The results indicate that the integration of time prediction algorithms with LLMs has the potential to optimize software project progress management, and the quantitative results suggest the effectiveness of the method in practical applications. In conclusion, this study demonstrates that integrating time prediction algorithms with LLMs can significantly improve the predictive accuracy and efficiency of software project management. This offers an advanced project management tool for the industry, with the potential to improve operational efficiency, optimize resource allocation, and ensure timely project completion.

Keywords: software project management, time prediction algorithms, large language models (LLMs), forecast accuracy, project progress prediction

Procedia PDF Downloads 80
22985 Identification of Factors Affecting Labor Productivity in Construction Projects of Iran

Authors: Elham Dehghan, A. Shirzadi Javid, Mohsen Tadayon

Abstract:

Labor productivity is very important and has gained special attention among professionals in the construction industry worldwide. Productivity improvements in labor achieve higher cost savings with minimal investment. Because profit margins are small on construction projects, cost savings associated with productivity are crucial to becoming a successful contractor. This research studies and highlights the factors affecting labor productivity in the Iranian construction industry. A questionnaire was used to gather the relevant data from respondents involved in managing various types of projects in wide areas of Iran. It involved ranking 57 predefined factors divided into 5 categories: Human/Labor; Financial; Management; Equipment/Materials and Environmental. A total of 62 responses were analyzed through the Relative Importance Index (RII) technique. The top ten factors affecting construction labor productivity in Iran are: 1) professional capability of the contractor's project manager, 2) skills of the contractor's project management team, 3) professional capability of the owner's project manager, 4) professional capability of the consultant's project manager, 5) work discipline, 6) delayed payments by the owner, 7) material shortages, 8) delays in delivery of materials, 9) turnover power of the owner, 10) poor site management. Recommendations have been made in the study to address these factors. The research has direct benefits to key stakeholders in the Iranian construction industry.
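
The RII ranking itself is a one-line computation; the sketch below assumes a five-point importance scale (an assumption, since the scale is not stated in the abstract) and invented response tallies for two of the factors.

    def relative_importance_index(counts, scale=5):
        """counts[i] = number of respondents choosing rating i+1 on a 1..scale Likert scale."""
        n = sum(counts)
        total_weight = sum((i + 1) * c for i, c in enumerate(counts))
        return total_weight / (scale * n)          # RII = sum(W) / (A * N)

    # hypothetical tallies from 62 respondents for two of the 57 factors
    factors = {
        "professional capability of contractor PM": [0, 2, 8, 20, 32],
        "material shortages":                       [1, 5, 14, 25, 17],
    }
    ranking = sorted(((relative_importance_index(c), f) for f, c in factors.items()), reverse=True)
    for rii, name in ranking:
        print(f"{name}: RII = {rii:.3f}")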

Keywords: Iranian construction projects, labor, productivity, relative importance index

Procedia PDF Downloads 264
22984 Study on Accurate Calculation Method of Model Attitude in Wind Tunnel Tests

Authors: Jinjun Jiang, Lianzhong Chen, Rui Xu

Abstract:

The accuracy of the model attitude angle plays an important role in the aerodynamic test results of a wind tunnel test. The original method applies a spherical coordinate system transformation to obtain the attitude angle: the model attitude angle is obtained by coordinate transformation and spherical surface mapping, applying the nominal attitude angle (the balance attitude angle in the wind tunnel coordinate system) indicated by the mechanism. First, the coordinate transformation of this method is not only complex but also makes it difficult to establish the transformation relationship between the space coordinate systems, especially after many steps of coordinate transformation; moreover, it cannot realize the iterative calculation of the interference relationship between attitude angles. Second, during the calculation process, the arc is approximately replaced by a straight line and the angle by its tangent value, and inverse trigonometric functions are applied. Therefore, the calculation of the attitude angle is complex and inaccurate, and the approximation is acceptable only for small angles of attack. However, with the advancing development of modern unsteady aerodynamics research, aircraft tend toward high or very high angles of attack and unsteady research fields. According to engineering practice and vector theory, the concept of a vector angle coordinate system is proposed for the first time, and the vector angle coordinate system of the attitude angles is established. With iterative correction calculation, and by avoiding the approximations and the inverse trigonometric function solution, the model attitude calculation process is carried out in detail, which validates that the calculation accuracy and precision of the model attitude angles are improved. Based on engineering and theoretical methods, a vector angle coordinate system is thus established for the first time, which gives the transformation and angle definition relations between different flight attitude coordinate systems. It can accurately calculate the attitude angle of the corresponding coordinate system and determine its direction; in particular, in the channel coupling calculation, the calculation of the attitude angle between the coordinate systems is related only to the angle and has nothing to do with the order of the coordinate system changes, which simplifies the calculation process.

Keywords: attitude angle, angle vector coordinate system, iterative calculation, spherical coordinate system, wind tunnel test

Procedia PDF Downloads 150
22983 Perception of TQM Implementation and Perceived Cost of Poor Quality: A Case Study of Local Automotive Company’s Supplier

Authors: Fakhruddin Esa, Yusri Yusof

Abstract:

Confirmation of Total Quality Management (TQM) implementation is vital in quality management. This paper focuses on employees' perceptions of TQM implementation at a local automotive company supplier. The objectives of this study are, first and foremost, to determine the perception of TQM implementation among the staff; secondly, to ascertain the correlation between the variables; and lastly, to identify the relative influence of the 10 TQM variables on the cost of poor quality (COPQ). TQM implementation is perceived to be moderate. All correlations are found to be significant, with five variables having moderately to highly positive correlations. Out of the 10 variables, quality system improvement, reward and recognition, and customer focus influence the perceived COPQ. This study extends the discussion on the contribution of these three variables to TQM in general and to human resource development in the organization. Significant recommendations for lowering the costs of internal errors, such as troubleshooting and scrap, are also discussed. Certain components of further research that would add value to this study have also been suggested and could perhaps be implemented as policy-level initiatives.

Keywords: cost of poor quality (COPQ), correlation, total quality management (TQM), variables

Procedia PDF Downloads 219