Search results for: one side class algorithm
6110 Antigastritic Effect of Starch from Manihot utilissima on Male Wistar Rats Induced Aspirin
Authors: Naela Nabiela, Ahmad Hilmi Fahmi, M. Sukron, Ayu Elita Sari, Yusran, Suparmi
Abstract:
Aspirin, one of the NSAIDs (non-steroidal anti-inflammatory drugs), can cause gastric ulcers as a side effect of prolonged consumption. One effort to prevent the increase of gastric HCl levels is treatment with amylopectin, which has been reported to coat the gastric mucosa. However, the effect of the amylopectin in starch from Manihot utilissima, which is traditionally believed to treat gastric ulcers, has not yet been clarified. This study was conducted to determine the effect of the starch, formulated as a syrup, on HCl levels and gastric histopathology. The experiment used a post-test-only control group design with 42 male Wistar rats divided into 7 groups. All groups, except the first group, were induced with 60 mg/100 g BW/day aspirin for 3 days. On the following days, for 2 days, each group was treated with starch syrup dosed at 0.45% w/v, 0.9% w/v, 1.8% w/v, 0% w/v, or sucralfate, respectively. HCl levels were measured by the acidi-alkalimetric titration method, while the gastric histopathology slides were prepared with hematoxylin-eosin staining. The results show that aspirin induction can increase the HCl level to 0.00767 N. Starch syrup at a dose of 1.8% w/v was effective in reducing the HCl level and grade-two gastric necrosis. It can be concluded that starch syrup has potential as a treatment for gastric ulcers caused by the side effects of NSAIDs.
Keywords: concentration of HCl stomach, gastric histopathology, gastritis, starch
Procedia PDF Downloads 471
6109 Design and Development of an Algorithm to Predict Fluctuations of Currency Rates
Authors: Nuwan Kuruwitaarachchi, M. K. M. Peiris, C. N. Madawala, K. M. A. R. Perera, V. U. N Perera
Abstract:
Dealing with businesses in the foreign market has always taken a special place in a country’s economy. Political and social factors come into play, making currency rates fluctuate rapidly. Currency rate prediction has become an important factor for larger international businesses, since large amounts of money are exchanged between countries. This research focuses on comparing the accuracy of mainly three models: Autoregressive Integrated Moving Average (ARIMA), Artificial Neural Networks (ANN), and Support Vector Machines (SVM). A series of import, export, and USD exchange rate data with respect to the LKR was selected for training the above-mentioned algorithms. After training on the data set and comparing each algorithm, it could be seen that prediction with SVM performed better than the other models. It was improved further by combining the SVM and SVR models together.
Keywords: ARIMA, ANN, FFNN, RMSE, SVM, SVR
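As a rough illustration of the SVM/SVR side of this comparison, the sketch below fits a support vector regressor on lagged values of a synthetic exchange-rate series and reports the RMSE on a hold-out split; the series, lag count, and hyperparameters are placeholder assumptions rather than the study's actual USD/LKR data or settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic placeholder series standing in for the USD/LKR exchange rate.
rates = np.cumsum(np.random.randn(500)) + 150.0

# Build lagged features: predict the next value from the previous `lags` values.
lags = 5
X = np.array([rates[i:i + lags] for i in range(len(rates) - lags)])
y = rates[lags:]

split = int(0.8 * len(X))
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print("test RMSE:", rmse)
```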
Procedia PDF Downloads 216
6108 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status
Authors: Rosa Figueroa, Christopher Flores
Abstract:
Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data that contains documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative for feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag of Words approach. The score selected to represent the occurrence of the tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second one used unigrams (single words), the third one used bigrams (two-word sequences), and the last experiment used a combination of unigrams and bigrams to extract features for classification. To test the effectiveness of the extracted feature sets for the four experiments, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram based feature extraction. These results were confirmed by using the remaining 80% of the dataset, where SW performed the best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best was obtained by the combination of unigrams-bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by the bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm
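A minimal sketch of the Smith-Waterman local alignment score on token sequences is shown below; the scoring parameters and the toy phrases are illustrative assumptions, not the feature-extraction pipeline or scores used in the study.

```python
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    # Local alignment score between two token sequences via dynamic programming:
    # H[i, j] holds the best score of an alignment ending at a[i-1], b[j-1].
    H = np.zeros((len(a) + 1, len(b) + 1))
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i, j] = max(0, diag, H[i - 1, j] + gap, H[i, j - 1] + gap)
    return H.max()

# e.g. similarity between two tokenised snippets (hypothetical clinical-note phrases)
print(smith_waterman("patient is obese".split(), "the patient is obese".split()))
```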
Procedia PDF Downloads 300
6107 Using ε Value in Describe Regular Languages by Using Finite Automata, Operation on Languages and the Changing Algorithm Implementation
Authors: Abdulmajid Mukhtar Afat
Abstract:
This paper aims at introducing nondeterministic finite automata with ε transitions, which are used to perform some operations on languages. A program is created to implement the algorithm that converts a nondeterministic finite automaton with ε transitions (ε-NFA) into a deterministic finite automaton (DFA). The program is written in the C++ programming language. The program reads the FA 5-tuple from a text file and then classifies it as either a DFA, an NFA, or an ε-NFA. For a DFA, the program reads the string w and decides whether it is accepted or rejected. The tracking path for an accepted string is saved by the program. In the case of an NFA or ε-NFA, the program converts the automaton to a DFA to enable tracking and to decide whether the string w exists in the regular language or not.
Keywords: DFA, NFA, ε-NFA, eclose, finite automata, operations on languages
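The conversion described above is usually implemented with the ε-closure and subset construction; the Python sketch below (the paper's program is in C++) illustrates the idea on an assumed dictionary-based encoding of the transition function, so the data structures and the toy automaton are illustrative only.

```python
from collections import deque

def eclose(states, eps_moves):
    # ε-closure: all states reachable through ε-transitions alone.
    seen, stack = set(states), list(states)
    while stack:
        for t in eps_moves.get(stack.pop(), ()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return frozenset(seen)

def nfa_to_dfa(start, delta, eps_moves, alphabet):
    # Subset construction from an ε-NFA; delta[(state, symbol)] -> set of states.
    q0 = eclose({start}, eps_moves)
    visited, queue, dfa = {q0}, deque([q0]), {}
    while queue:
        S = queue.popleft()
        for a in alphabet:
            T = eclose({t for s in S for t in delta.get((s, a), ())}, eps_moves)
            dfa[(S, a)] = T
            if T and T not in visited:
                visited.add(T)
                queue.append(T)
    return q0, dfa

# toy ε-NFA over {a, b}: 0 --ε--> 1, 1 --a--> 2, 2 --b--> 2
eps = {0: {1}}
delta = {(1, "a"): {2}, (2, "b"): {2}}
print(nfa_to_dfa(0, delta, eps, alphabet="ab"))
```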
Procedia PDF Downloads 493
6106 Probability-Based Damage Detection of Structures Using Kriging Surrogates and Enhanced Ideal Gas Molecular Movement Algorithm
Authors: M. R. Ghasemi, R. Ghiasi, H. Varaee
Abstract:
Surrogate models have received increasing attention for use in detecting damage of structures based on vibration modal parameters. However, uncertainties existing in the measured vibration data may lead to false or unreliable output from such models. In this study, an efficient approach based on Monte Carlo simulation is proposed to take into account the effect of uncertainties in developing a surrogate model. The probability of damage existence (PDE) is calculated based on the probability density functions of the undamaged and damaged states. The kriging technique allows one to genuinely quantify the surrogate error; therefore, it is chosen as the metamodeling technique. An enhanced version of the ideal gas molecular movement (EIGMM) algorithm is used as the main algorithm for model updating. The developed approach is applied to detect simulated damage in numerical models of a 72-bar space truss and a 120-bar dome truss. The simulation results show that the proposed method can perform well in probability-based damage detection of structures with less computational effort compared to a direct finite element model.
Keywords: probability-based damage detection (PBDD), kriging, surrogate modeling, uncertainty quantification, artificial intelligence, enhanced ideal gas molecular movement (EIGMM)
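One way to read the PDE computation is as a Monte Carlo loop over perturbed modal data; the sketch below assumes a generic identify() function standing in for the kriging-surrogate plus EIGMM model-updating step, and the noise level and damage threshold are placeholders rather than values from the paper.

```python
import numpy as np

def probability_of_damage(identify, measured_modes, noise_std, n_mc=1000, threshold=0.05):
    # Monte Carlo estimate of the probability of damage existence (PDE):
    # perturb the measured modal data with the assumed measurement noise,
    # run the identification, and count how often the damage indicator
    # exceeds the threshold.
    hits = 0
    for _ in range(n_mc):
        noisy = measured_modes + np.random.normal(0.0, noise_std, size=measured_modes.shape)
        if identify(noisy) > threshold:   # identify() would wrap the surrogate + EIGMM update
            hits += 1
    return hits / n_mc

# toy stand-in for the surrogate-based identification step
demo = probability_of_damage(lambda m: abs(m.mean()), np.array([0.02, 0.04, 0.03]), noise_std=0.01)
print(demo)
```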
Procedia PDF Downloads 245
6105 The Impact of Scaffolding on Motivation of Vocational Special Education Students in Kakamega Program for Persons with Hearing Impaired in Kenya
Authors: J. W. Mbogani, B. A. Bunyasi
Abstract:
The special skills of five students in the vocational class of the Kakamega program for the hearing impaired were identified within one term of the Kenyan education system. Three students were identified as having a liking for tailoring. The remaining two students did not show any interest in any vocational subject. The three students were attached to two practicing general tailors within the school vicinity for scaffolding purposes. The students were allowed to attend general classes under the normal curriculum and were withdrawn after eleven in the morning for tailoring classes. The students were then monitored with the guideline of a checklist. The purpose of monitoring was to establish whether the behavior of the students reflected that of a motivated student. It was established that two of them improved in their school attendance in terms of regularity, punctuality, and responsibility accomplishment. The third student ended up attending only tailoring classes. The socialization of the two students also improved a lot, and they tended to identify more with the teachers than with their fellow students. We recommend that learners with special needs in education should be subjected to the normal curriculum; they may benefit more and attain a skill that could help them economically. Further study should also be done in several institutions involving learners in other classes.
Keywords: general tailoring, scaffolding, term, vocational class
Procedia PDF Downloads 146
6104 Uplifting Citizens Participation: A Gov 2.0 Framework
Authors: Mohammed Aladalah
Abstract:
The emergence of digital citizens is no longer mere speculation; therefore, governments' use of Web 2.0 tools (hereafter Gov 2.0) should be a part of all current and future e-government plans. The potential of Gov 2.0 to facilitate greater communication, participation, and collaboration with citizens has been highlighted and discussed extensively in recent literature. However, the current levels of citizens' participation in Gov 2.0 have not lived up to the hype. Therefore, governments need to rethink the way in which they implement Gov 2.0 and take advantage of the digitally engaged population. We propose a two-dimensional framework to tackle this issue. First, on the supply side, governments tend to use Gov 2.0 mainly for the dissemination of information and for self-promotion, without the desire to encourage any interaction with citizens; this is due to many reasons, including the lack of time and the possibility of loss of control. The second dimension of the framework is the demand side: citizens are unwilling to participate in Gov 2.0 activities because they do not perceive its value or trust the government. We attempt to consider the elements of both supply and demand in order to provide a comprehensive solution whereby the potential of Gov 2.0 can be fully utilized. Our framework is based on the theoretical foundation of service science and value co-creation theory. This paper makes two significant contributions: (a) it provides an initial framework intended to increase citizens' participation in Gov 2.0; and (b) it enhances the understanding of governments' Gov 2.0 applications, particularly in terms of factors that ensure their attractiveness for citizens. This work is the first step in a comprehensive research undertaking, the purpose of which is to study the public's engagement with the Gov 2.0 concept. It contributes to providing a better understanding of e-government and its future.
Keywords: e-government, Gov 2.0, citizens' participation, digital citizen
Procedia PDF Downloads 339
6103 Optimization of Water Pipeline Routes Using a GIS-Based Multi-Criteria Decision Analysis and a Geometric Search Algorithm
Authors: Leon Mortari
Abstract:
The Metropolitan East region of Rio de Janeiro state, Brazil, faces a historic water scarcity. Among the alternatives studied to solve this situation, the possibility of adduction of the water available in the Lagoa de Juturnaíba reservoir to supply the region's municipalities stands out. The routing of a linear engineering project must occur through an evaluation of different aspects, such as altitude, slope, proximity to roads, distance from watercourses, land use and occupation, and physical and chemical features of the soil. This work aims to apply a multi-criteria model that combines geoprocessing techniques, decision-making, and a geometric search algorithm to optimize a hypothetical adductor system in the scenario of expanding the water supply system that serves this region, known as Imunana-Laranjal, using the Lagoa de Juturnaíba as the source. This study proposes the construction of a spatial database related to the evaluation criteria presented, the treatment and rasterization of these data, and the standardization and reclassification of this information in a Geographic Information System (GIS) platform. The methodology involves the integrated analysis of these criteria, using their relative importance defined by weighting them based on expert consultations and the Analytic Hierarchy Process (AHP) method. Three approaches are defined for weighting the criteria by AHP: the first treats all criteria as equally important, the second considers weighting based on a pairwise comparison matrix, and the third establishes a hierarchy based on the priority of the criteria. For each approach, a distinct group of weightings is defined. In the next step, map algebra tools are used to overlay the layers and generate cost surfaces, which indicate the resistance to the passage of the adductor route, using the three groups of weightings. The Dijkstra algorithm, a geometric search algorithm, is then applied to these cost surfaces to find an optimized path within the geographical space, aiming to minimize resources, time, investment, maintenance, and environmental and social impacts.
Keywords: geometric search algorithm, GIS, pipeline, route optimization, spatial multi-criteria analysis model
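A least-cost path search of the kind described can be sketched with Dijkstra's algorithm over a cost raster; the 4-neighbour moves, the cost accumulation rule, and the random toy surface below are simplifying assumptions, not the study's actual AHP-weighted surfaces.

```python
import heapq
import numpy as np

def least_cost_path(cost, start, goal):
    # Dijkstra over a cost raster: moving onto a cell adds that cell's cost.
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    dist[start] = cost[start]
    prev, pq = {}, [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# toy 5x5 cost surface standing in for an AHP-weighted resistance raster
print(least_cost_path(np.random.rand(5, 5) + 1.0, (0, 0), (4, 4)))
```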
Procedia PDF Downloads 39
6102 Parametric Analysis and Optimal Design of Functionally Graded Plates Using Particle Swarm Optimization Algorithm and a Hybrid Meshless Method
Authors: Foad Nazari, Seyed Mahmood Hosseini, Mohammad Hossein Abolbashari, Mohammad Hassan Abolbashari
Abstract:
The present study is concerned with the optimal design of functionally graded plates using the particle swarm optimization (PSO) algorithm. In this study, the meshless local Petrov-Galerkin (MLPG) method is employed to obtain the natural frequencies of the functionally graded (FG) plate. The effects of two parameters, the thickness-to-height ratio and the volume fraction index, on the natural frequencies and total mass of the plate are studied using the MLPG results. Then the first natural frequency of the plate, for conditions where MLPG data are not available, is predicted by an artificial neural network (ANN) approach trained with the back-error propagation (BEP) technique. The ANN results show that the predicted data are in good agreement with the actual ones. To maximize the first natural frequency and minimize the mass of the FG plate simultaneously, the weighted sum optimization approach and the PSO algorithm are used. The proposed optimization process can provide the designers of FG plates with useful data.
Keywords: optimal design, natural frequency, FG plate, hybrid meshless method, MLPG method, ANN approach, particle swarm optimization
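A bare-bones particle swarm optimiser applied to a weighted-sum objective might look like the sketch below; the inertia and acceleration constants, the bounds, and the freq()/mass() stand-ins for the ANN surrogate and mass model are all assumed for illustration.

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Minimal particle swarm optimiser (minimisation); a sketch only.
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = np.random.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(n_particles, dim), np.random.rand(n_particles, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Hypothetical weighted-sum objective: trade off (negative) first frequency against mass.
freq = lambda x: 100.0 - 5.0 * x[0] + 20.0 * x[1]      # stand-in for the ANN surrogate
mass = lambda x: 50.0 * x[0] * (1.0 + 0.1 * x[1])       # stand-in for the plate mass model
objective = lambda x: -0.5 * freq(x) + 0.5 * mass(x)
print(pso(objective, bounds=[(0.01, 0.2), (0.0, 10.0)]))
```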
Procedia PDF Downloads 371
6101 Prediction of Bariatric Surgery Publications by Using Different Machine Learning Algorithms
Authors: Senol Dogan, Gunay Karli
Abstract:
Identification of relevant publications based on a Medline query is time-consuming and error-prone. An automated process has the potential to solve this problem without any manual work. To the best of our knowledge, our study is the first to investigate the ability of machine learning to identify relevant articles accurately. Five different machine learning algorithms were tested using 23 predictors based on several metadata fields attached to publications. We find that the boosted model is the best-performing algorithm, with an overall accuracy of 96%. In addition, the specificity and sensitivity of the algorithm are 97% and 93%, respectively. As a result of this work, we expect that the same procedure can be applied to the analysis of cancer gene expression big data.
Keywords: prediction of publications, machine learning, algorithms, bariatric surgery, comparison of algorithms, boosted tree, logistic regression, ANN model
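A hedged sketch of the kind of comparison reported (a boosted model evaluated by accuracy, sensitivity, and specificity) is given below using scikit-learn's GradientBoostingClassifier on synthetic stand-in data; the 23 predictors, labels, and scores here are placeholders, not the study's Medline metadata or results.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

# Synthetic stand-in: 23 metadata-derived predictors per publication,
# y = 1 for a relevant article, 0 otherwise.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 23))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
y_hat = clf.predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
print("accuracy   :", accuracy_score(y_te, y_hat))
print("sensitivity:", tp / (tp + fn))   # recall on the relevant class
print("specificity:", tn / (tn + fp))
```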
Procedia PDF Downloads 211
6100 Cross-Comparison between Land Surface Temperature from Polar and Geostationary Satellite over Heterogenous Landscape: A Case Study in Hong Kong
Authors: Ibrahim A. Adeniran, Rui F. Zhu, Man S. Wong
Abstract:
Owing to the insufficiency in the spatial representativeness and continuity of in situ temperature measurements from weather stations (WS), the use of temperature measurements from WS for large-range diurnal analysis in heterogeneous landscapes has been limited. This has made the accurate estimation of land surface temperature (LST) from remotely sensed data more crucial. Moreover, the study of the dynamic interaction between the atmosphere and the physical surface of the Earth could be enhanced at both annual and diurnal scales by using optimal LST data derived from satellite sensors. The tradeoff between the spatial and temporal resolution of LSTs from satellites' thermal infrared sensors (TIRS) has, however, been a major challenge, especially when high spatiotemporal LST data are required. It is well known from the existing literature that polar satellites have the advantage of high spatial resolution, while geostationary satellites have high temporal resolution. Hence, this study is aimed at designing a framework for the cross-comparison of LST data from polar and geostationary satellites in a heterogeneous landscape. This could help to understand the relationship between the LST estimates from the two satellites and, consequently, their integration in diurnal LST analysis. Landsat-8 satellite data will be used as the representative of the polar satellite due to the availability of its long-term series, while the Himawari-8 satellite will be used as the data source for the geostationary satellite because of its improved TIRS. As the study area, the Hong Kong Special Administrative Region (HK SAR) will be selected; this is due to the heterogeneity in the landscape of the region. LST data will be retrieved from both satellites using the split window algorithm (SWA), and the resulting data will be validated by comparing the satellite-derived LST data with temperature data from automatic WS in HK SAR. The LST data from the two satellites will then be separated based on the land use classification in HK SAR using the Global Land Cover by National Mapping Organization version 3 (GLCNMO 2013) data. The relationship between the LST data from Landsat-8 and Himawari-8 will then be investigated based on the land-use class and over different seasons of the year in order to account for seasonal variation in their relationship. The resulting relationship will be spatially and statistically analyzed and graphically visualized for detailed interpretation. Findings from this study will reveal the relationship between the two satellite data sources based on the land use classification within the study area and the seasons of the year. While the information provided by this study will help in the optimal combination of LST data from polar (Landsat-8) and geostationary (Himawari-8) satellites, it will also serve as a roadmap for annual and diurnal urban heat island (UHI) analysis in Hong Kong SAR.
Keywords: automatic weather station, Himawari-8, Landsat-8, land surface temperature, land use classification, split window algorithm, urban heat island
Procedia PDF Downloads 81
6099 Formal Group Laws and Toposes in Gauge Theory
Authors: Patrascu Andrei Tudor
Abstract:
One of the main problems in high energy physics is the fact that we do not have a complete understanding of the interaction between local and global effects in gauge theory. This has an increasing impact on our ability to access the non-perturbative regime of most of our theories. Our theories, while being based on gauge groups considered to be simple or semi-simple and connected, are expected to be described by their simple local linear approximation, namely the Lie algebras. However, higher homotopy properties resulting in gauge anomalies appear frequently in theories of physical interest. Our assumption that the groups we deal with are simple and simply connected is probably not suitable, and ways to go beyond such assumptions, particularly in gauge theories, where the Lie algebra linear approximation is prevalent, are not known. We approach this problem from two directions: on one side, we are explaining the potential role of formal group laws in describing certain higher homotopical properties and interferences with local or perturbative effects, and on the other side, we employ a categorical approach leading to synthetic theory and a way of looking at gauge theories. The topos approach is based on a geometry where the fundamental logic is intuitionistic logic, and hence the ‘tertium non datur’ principle is abandoned. This has a remarkable impact on understanding conformal symmetry and its anomalies in string theory in various dimensions.
Keywords: gauge theory, formal group laws, topos theory, conformal symmetry
Procedia PDF Downloads 45
6098 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling
Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed
Abstract:
The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm for its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under different ANN structural configurations: (1) a single-hidden-layer and (2) a double-hidden-layer feedforward back-propagation network. The results revealed that, in general, the gradient descent with momentum (GDM) optimisation algorithm, with its adaptive learning capability, used a relatively shorter time in both the training and validation phases compared to the Levenberg-Marquardt (LM) and Bayesian regularisation (Br) algorithms, though learning may not be consummated; this held in all instances, including the prediction of extreme flow conditions at 1-day and 5-day lead times. In specific statistical terms, average model performance efficiency using the coefficient of efficiency (CE) statistic was Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96%, respectively, for the training and validation phases. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, it is imperative to state that the adoption of ANN for real-time forecasting should employ training algorithms that do not have the computational overhead of LM, which requires the computation of the Hessian matrix, protracted time, and sensitivity to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure and quality of the forecast as well as mitigation of network overfitting. On the whole, it is recommended that evaluation should consider the implications of (i) data quality and quantity and (ii) transfer functions on the overall network forecast performance.
Keywords: streamflow, neural network, optimisation, algorithm
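The coefficient of efficiency quoted above is commonly computed in the Nash-Sutcliffe form; the short sketch below shows that computation together with MAPE on placeholder streamflow values, and is not tied to the study's data or ANN configurations.

```python
import numpy as np

def coefficient_of_efficiency(obs, sim):
    # Nash-Sutcliffe style coefficient of efficiency (CE): 1 means a perfect forecast,
    # 0 means the forecast is no better than the mean of the observations.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def mape(obs, sim):
    # Mean absolute percentage error, one of the relative error statistics mentioned.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.mean(np.abs((obs - sim) / obs))

obs = np.array([120.0, 95.0, 150.0, 80.0, 110.0])   # observed streamflow (placeholder values)
sim = np.array([118.0, 99.0, 142.0, 85.0, 108.0])   # ANN-simulated streamflow (placeholder values)
print("CE  :", coefficient_of_efficiency(obs, sim))
print("MAPE:", mape(obs, sim))
```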
Procedia PDF Downloads 158
6097 Silicon Carbide (SiC) Crystallization Obtained as a Side Effect of SF6 Etching Process
Authors: N. K. A. M. Galvão, A. Godoy Jr., A. L. J. Pereira, G. V. Martins, R. S. Pessoa, H. S. Maciel, M. A. Fraga
Abstract:
Silicon carbide (SiC) is a wide band-gap semiconductor material with very attractive properties, such as high breakdown voltage, chemical inertness, and high thermal and electrical stability, which make it a promising candidate for several applications, including microelectromechanical systems (MEMS) and electronic devices. In MEMS manufacturing, the etching process is an important step. It has been proved that wet etching of SiC is not feasible due to its high bond strength and high chemical inertness. In view of this difficulty, the plasma etching technique has been applied with great success. However, most of these studies explore only the determination of the etching rate and/or the morphological characterization of SiC, as well as the analysis of the reactive ions present in the plasma. There is a lack of results in the literature on the chemical and structural properties of SiC after the etching process [4]. In this work, we investigated the etching of sputtered amorphous SiC thin films on Si substrates in a reactive ion etching (RIE) system using sulfur hexafluoride (SF6) gas under different RF power levels. The results of the chemical and structural analyses of the etched films revealed that, for all conditions, SiC crystallization occurred, in addition to fluoride contamination. In conclusion, we observed that SiC crystallization is a side effect promoted by the structural, morphological, and chemical changes caused by the RIE SF6 etching process.
Keywords: plasma etching, plasma deposition, silicon carbide, microelectromechanical systems
Procedia PDF Downloads 77
6096 Removal of Textile Dye from Industrial Wastewater by Natural and Modified Diatomite
Authors: Hakim Aguedal, Abdelkader Iddou, Abdallah Aziz, Djillali Reda Merouani, Ferhat Bensaleh, Saleh Bensadek
Abstract:
The textile industry produces a large amount of colored effluent each year. The management or treatment of these discharges depends on the applied techniques. Adsorption is one of the wastewater treatment techniques used to treat this kind of pollution, and its performance and efficiency predominantly depend on the nature of the adsorbent used. Therefore, scientific research is directed towards the development of new materials using different physical and chemical treatments to improve their adsorption capacities. In the same perspective, we looked at the effect of heat treatment on the effectiveness of diatomite, which is found in abundance in Algeria. The textile dye Orange Bezaktiv (SRL-150), which is used as the organic pollutant in this study, is provided by the textile company SOITEXHAM in Oran city (west Algeria). The effect of different physicochemical parameters on the adsorption of SRL-150 on natural and modified diatomite is studied, and the kinetics and adsorption isotherms were modeled.
Keywords: wastewater treatment, diatomite, adsorption, dye pollution, kinetic, isotherm
Procedia PDF Downloads 282
6095 Visualization Tool for EEG Signal Segmentation
Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh
Abstract:
This work is about developing a tool for the visualization and segmentation of electroencephalograph (EEG) signals based on frequency domain features. Changes in the frequency domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm provides a way to represent the change in mental states using the different frequency band powers in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy, and cognition studies have been suggested in the literature and used for data classification, but the proposed method focuses mainly on a better presentation of the signal, which is why it could be a good utilization tool for clinicians. The algorithm performs basic filtering using band-pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet transform based de-noising method. Frequency domain features are used for segmentation, considering the fact that the spectral power of the different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation; one provides the time scale and the other assigns the segmentation rule. The segmented data are displayed second by second, successively, with different color codes. The segment length can be selected as per the need of the objective. The proposed algorithm has been tested on the EEG data set obtained from the University of California, San Diego's online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of a desired length representing the power spectrum variation in the data. The algorithm is designed in such a way that it takes the data points with respect to the sampling frequency for each time frame, so it can be extended for use in real-time visualization with a desired epoch length.
Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation
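A simplified version of the band-power feature step could look like the sketch below: Welch spectra are computed in a one-second sliding window and converted to relative powers in the classical EEG bands. The band limits, window length, and synthetic single-channel signal are assumptions for illustration, not the tool's actual filtering and display pipeline.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch, fs):
    # Relative power in the classical EEG bands for one windowed epoch.
    f, pxx = welch(epoch, fs=fs, nperseg=min(len(epoch), 2 * fs))
    total = pxx.sum()
    return {name: pxx[(f >= lo) & (f < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

fs = 256
eeg = np.random.randn(fs * 10)                    # 10 s of synthetic single-channel EEG
for start in range(0, len(eeg) - fs, fs):         # 1-second sliding window as the time scale
    print(start // fs, band_powers(eeg[start:start + fs], fs))
```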
Procedia PDF Downloads 405
6094 Implementation and Challenges of Assessment Methods in the Case of Physical Education Class in Some Selected Preparatory Schools of Kirkos Sub-City
Authors: Kibreab Alene Fenite
Abstract:
The purpose of this study is to investigate the implementation and challenges of different assessment methods for physical education classes in some selected preparatory schools of Kirkos sub-city. The participants in this study are teachers, students, department heads, and school principals from 4 selected schools. Of the total of 8 schools in Kirkos sub-city offering the subject, 4 schools (Dandi Boru, Abiyot Kirse, Assay, and Adey Ababa) were selected using simple random sampling techniques, and from these schools all (100%) of the teachers, department heads, and school principals were taken as a sample, as their number is manageable. From the total of 2,520 students, 252 (10%) were selected using simple random sampling. Accordingly, 13 teachers, 252 students, 4 department heads, and 4 school principals were purposefully taken as a sample from the 4 selected schools. Questionnaires and interviews were employed as data-gathering tools. To analyze the collected data, both quantitative and qualitative methods were used. The results of the study revealed that assessment in physical education is not implemented properly: lack of sufficient materials, inadequate time allotment, large class sizes, lack of collaboration among teachers in assessing the performance of students, absence of guidelines for assessing the physical education subject, and the absence of a distinct assessment method applied to students with disabilities in line with their special needs were found to be the major challenges in implementing the current assessment method for physical education. To overcome these problems, the following recommendations have been forwarded: the necessary facilities and equipment should be made available; in order to make assessment reliable, accurate, objective, and relevant, physical education teachers should be familiarized with different assessment techniques; physical education assessment guidelines should be prepared and should include different types of assessment methods; qualified teachers should be employed; and different teaching rooms must be built.
Keywords: assessment, challenges, equipment, guidelines, implementation, performance
Procedia PDF Downloads 285
6093 Equalization Algorithm for the Optical OFDM System Based on the Fractional Fourier Transform
Authors: A. Cherifi, B. Bouazza, A. O. Dahmane, B. Yagoubi
Abstract:
Transmission over optical channels will introduce inter-symbol interference (ISI) as well as inter-channel (or inter-carrier) interference (ICI). To decrease the effects of ICI, this paper proposes an equalizer for the optical OFDM system based on the fractional Fourier transform (FrFT). In this FrFT-OFDM system, the traditional Fourier transform is replaced by the fractional Fourier transform to modulate and demodulate the data symbols. The proposed equalizer consists of sampling the received signal at different times within each symbol period. Theoretical analysis and numerical simulation are discussed.
Keywords: OFDM, fractional Fourier transform (FrFT), optical OFDM, equalization algorithm
Procedia PDF Downloads 431
6092 Genetic Algorithms for Parameter Identification of DC Motor ARMAX Model and Optimal Control
Authors: A. Mansouri, F. Krim
Abstract:
This paper presents two techniques for DC motor parameter identification. We propose a numerical method using the adaptive extensive recursive least squares (AERLS) algorithm for real-time parameter estimation. This algorithm, based on the minimization of a quadratic criterion, is realized in simulation for parameter identification of the DC motor autoregressive moving average with exogenous inputs (ARMAX) model. As an advanced technique, we use genetic algorithm (GA) identification with biased estimation for high dynamic performance speed regulation. DC motors are extensively used in variable speed drives, for robot and solar panel trajectory control. GA effectiveness is demonstrated through a comparison of the two approaches.
Keywords: ARMAX model, DC motor, AERLS, GA, optimization, parameter identification, PID speed regulation
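A plain recursive least squares sketch for an ARX-type simplification of the ARMAX model (the moving-average noise term is ignored here, so this is not the AERLS variant itself) is shown below; the model orders, forgetting factor, and toy data are assumptions for illustration.

```python
import numpy as np

def rls_identify(u, y, na=2, nb=2, lam=0.98):
    # Recursive least squares for y[k] = -a1*y[k-1] - ... + b1*u[k-1] + ...,
    # with forgetting factor lam; returns theta = [a1..ana, b1..bnb].
    n = na + nb
    theta = np.zeros(n)
    P = 1e4 * np.eye(n)
    for k in range(max(na, nb), len(y)):
        phi = np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]])
        e = y[k] - phi @ theta
        K = P @ phi / (lam + phi @ P @ phi)
        theta = theta + K * e
        P = (P - np.outer(K, phi @ P)) / lam
    return theta

# toy data from a known second-order model, as a quick check
np.random.seed(0)
u = np.random.randn(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 1.0 * u[k - 1] + 0.5 * u[k - 2] + 0.01 * np.random.randn()
print(rls_identify(u, y))   # should approach [-1.5, 0.7, 1.0, 0.5] with this sign convention
```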
Procedia PDF Downloads 386
6091 Advanced Machine Learning Algorithm for Credit Card Fraud Detection
Authors: Manpreet Kaur
Abstract:
Numerous ethical problems arise when legitimate credit card users are mistakenly labelled as fraudulent in financial applications. The innovative machine learning approach suggested in this research outperforms the current models and shows how to model a data set for credit card fraud detection while minimizing false positives. As a result, we advise using random forests as the best machine learning method for predicting and identifying credit card transaction fraud. The majority of victims of these fraudulent transactions were found to be credit card users over the age of 60, with a higher percentage of fraudulent transactions taking place during specific hours.
Keywords: automated fraud detection, isolation forest method, local outlier factor, ML algorithm, credit card
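A minimal random forest sketch in the spirit of the recommendation above is given below on synthetic transactions; the features, fraud rate, and the age effect are hypothetical stand-ins, not the study's dataset or findings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for a transaction table: amount, hour of day, cardholder age.
rng = np.random.default_rng(1)
n = 20000
X = np.column_stack([rng.exponential(80, n),        # transaction amount
                     rng.integers(0, 24, n),        # hour of day
                     rng.integers(18, 90, n)])      # cardholder age
fraud_rate = 0.02 + 0.03 * (X[:, 2] > 60)           # hypothetical: older cardholders targeted more
y = (rng.random(n) < fraud_rate).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```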
Procedia PDF Downloads 119
6090 BROTHERS: World-class Ergonomic Sofa Development
Authors: Aminur Rahman
Abstract:
The unique features of the BROTHERS Furniture sofa lie in its ergonomic design, skilled handwork, and artwork. The present world market is passing through a competitive situation that is changing rapidly and dramatically. Competitive strategy concerns how to create competitive advantage in upholstery businesses. In order to gain a competitive advantage in the upholstery sofa market, a sofa with ergonomic features has to be designed and developed. Designing an ergonomic upholstery sofa requires knowing and understanding the appropriate seat depth, seat height, angle between seat and back, and back height demanded by the current market; a world-class sofa has to incorporate these ergonomic factors. The study examines the relationships between human, seat, and context variables and comfort and discomfort. A market survey must be conducted among users who need and use sofas. Health and safety factors should be examined from a variety of angles. An attractive design that meets customer requirements and is ergonomically fit should be considered for sofa development. This paper explains how to design and develop sofas to standard specifications with ergonomic features for users all over the world.
Keywords: ergonomics, angle between seat & back, standard dimension, seat comfort
Procedia PDF Downloads 144
6089 Radiomics: Approach to Enable Early Diagnosis of Non-Specific Breast Nodules in Contrast-Enhanced Magnetic Resonance Imaging
Authors: N. D'Amico, E. Grossi, B. Colombo, F. Rigiroli, M. Buscema, D. Fazzini, G. Cornalba, S. Papa
Abstract:
Purpose: To characterize, through a radiomic approach, the nature of nodules considered non-specific by expert radiologists, recognized in magnetic resonance mammography (MRm) with T1-weighted (T1w) sequences with paramagnetic contrast. Material and Methods: 47 cases out of 1,200 undergoing MRm, in which the MRm assessment gave an uncertain classification (non-specific nodules), were admitted to the study. The clinical outcome of the non-specific nodules was later established through follow-up or further exams (biopsy), finding 35 benign and 12 malignant. All MR images were acquired at 1.5 T, with a first basal T1w sequence and then four T1w acquisitions after the paramagnetic contrast injection. After a manual segmentation of the lesions, done by a radiologist, and the extraction of 150 radiomic features (30 features at each of the 5 acquisition times), a machine learning (ML) approach was used. An evolutionary algorithm (the TWIST system, based on the KNN algorithm) was used to subdivide the dataset into training and validation sets and to select the features yielding the maximal amount of information. After this pre-processing, different machine learning systems were applied to develop a predictive model based on a training-testing crossover procedure. 10 cases with a benign nodule (follow-up older than 5 years) and 18 with an evident malignant tumor (clear malignant histological exam) were added to the dataset in order to allow the ML system to better learn from the data. Results: The NaiveBayes algorithm, working on 79 features selected by the TWIST system, proved to be the best-performing ML system, with a sensitivity of 96%, a specificity of 78%, and a global accuracy of 87% (average values of two training-testing procedures, ab-ba). The results showed that in the subset of 47 non-specific nodules, the algorithm predicted the outcome of 45 nodules which an expert radiologist could not identify. Conclusion: In this pilot study, we identified a radiomic approach allowing ML systems to perform well in the diagnosis of non-specific nodules at MR mammography. This algorithm could be a great support for the early diagnosis of malignant breast tumors in the event the radiologist is not able to identify the kind of lesion, and it reduces the necessity for long follow-up. Clinical Relevance: This machine learning algorithm could be essential to support the radiologist in the early diagnosis of non-specific nodules, in order to avoid strenuous follow-up and painful biopsy for the patient.
Keywords: breast, machine learning, MRI, radiomics
Procedia PDF Downloads 271
6088 An Observation Approach of Reading Order for Single Column and Two Column Layout Template
Authors: In-Tsang Lin, Chiching Wei
Abstract:
Reading order is an important task in many digitization scenarios involving the preservation of the logical structure of a document. A survey of the literature finds that state-of-the-art algorithms cannot produce an accurate reading order for portable document format (PDF) files with rich formats and diverse layout arrangements. In recent years, most studies on reading order analysis have targeted the specific problem of associating layout components with logical labels, while less attention has been paid to the problem of detecting the reading-order relationships between logical components, such as cross-references. Over 3 years of development, the company Foxit has demonstrated its layout recognition (LR) engine, whose revision 20601 strives for an accurate reading order. The bounding box of each paragraph can be obtained correctly by the Foxit LR engine, but the reading-order result is not always correct for single-column and two-column layout formats, due to issues with tables, formulas, multiple small separated bounding boxes, and footers. Thus, an algorithm is developed to improve the accuracy of the reading order based on the Foxit LR structure. In this paper, a creative observation method (here called the MESH method) is provided to open a new direction in research on the reading-order field. Two important parameters are introduced: one is the number of bounding boxes on the right side of the present bounding box (NRight), and the other is the number of bounding boxes under the present bounding box (Nunder). The normalized x-value (x divided by the whole width), the normalized y-value (y divided by the whole height), and the x- and y-positions of each bounding box are also taken into consideration. Initial experimental results for the single-column layout format demonstrate a 19.33% absolute improvement in reading-order accuracy over 7 PDF files (150 pages in total) using our proposed method based on the LR structure, compared to the baseline method using the LR structure in revision 20601, whose reading-order accuracy is 72%. For the two-column layout format, the preliminary results demonstrate a 44.44% absolute improvement in reading-order accuracy over 2 PDF files (18 pages in total) using our proposed method based on the LR structure, compared to the baseline method using the LR structure in revision 20601, whose reading-order accuracy is 0%. So far, the footer issue and part of the multiple small separated bounding box issue can be solved by using the MESH method. However, there are still three issues that cannot be solved: the table issue, the formula issue, and random multiple small separated bounding boxes. The detection of the table position and the recognition of the table structure are out of the scope of this paper and require separate research. Future tasks include detecting the position of tables on the page and extracting their content.
Keywords: document processing, reading order, observation method, layout recognition
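A toy illustration of how NRight/Nunder-style counts can drive ordering is sketched below; this is a simplified heuristic for intuition only, not the MESH method or the Foxit LR engine, and the box format and page sizes are assumptions.

```python
def reading_order(boxes, page_w, page_h):
    # boxes: list of (x, y, w, h) paragraph bounding boxes, y measured from the top of the page.
    # For each box, count the boxes that lie entirely to its right (NRight); boxes in the
    # left column of a two-column page have a larger NRight, so ordering by (-NRight, y, x)
    # reads the left column top-to-bottom before the right column. For single-column pages,
    # NRight is roughly constant and the order falls back to top-to-bottom position.
    def n_right(b):
        return sum(o[0] > b[0] + b[2] for o in boxes if o is not b)
    keyed = [((-n_right(b), b[1] / page_h, b[0] / page_w), b) for b in boxes]
    return [b for _, b in sorted(keyed)]

# two-column toy page: left column (x = 50) and right column (x = 320)
page = [(320, 60, 250, 80), (50, 60, 250, 80), (50, 160, 250, 80), (320, 160, 250, 80)]
print(reading_order(page, page_w=612, page_h=792))
```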
Procedia PDF Downloads 184
6087 Scientific Linux Cluster for BIG-DATA Analysis (SLBD): A Case of Fayoum University
Authors: Hassan S. Hussein, Rania A. Abul Seoud, Amr M. Refaat
Abstract:
Scientific researchers face the analysis of very large data sets, which are increasing at a noticeable rate in today's and tomorrow's technologies. Hadoop and Spark are software frameworks developed for this purpose, and the Hadoop framework is suitable for many different hardware platforms. In this research, a Scientific Linux cluster for Big Data analysis (SLBD) is presented. SLBD runs open source software with large computational capacity on a high performance cluster infrastructure. SLBD is composed of one cluster containing identical, commodity-grade computers interconnected via a small LAN. SLBD consists of a fast switch and Gigabit-Ethernet cards which connect the four nodes. Cloudera Manager is used to configure and manage an Apache Hadoop stack. Hadoop is a framework that allows storing and processing big data across the cluster by using the MapReduce algorithm. The MapReduce algorithm divides the work into smaller tasks, which are assigned to the network nodes; it then collects the results and forms the final result dataset. The SLBD clustering system allows fast and efficient processing of large amounts of data resulting from different applications. SLBD also provides high performance, high throughput, high availability, expandability, and cluster scalability.
Keywords: big data platforms, Cloudera Manager, Hadoop, MapReduce
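The map-shuffle-reduce flow described above can be illustrated in-process with a classic word count; this is only a single-machine sketch of the idea, not the Hadoop/Cloudera deployment running on the SLBD cluster.

```python
from collections import defaultdict
from functools import reduce

def map_phase(records, mapper):
    # Map each input record to a list of (key, value) pairs.
    return [kv for rec in records for kv in mapper(rec)]

def shuffle(pairs):
    # Group all values by key, as the shuffle stage does between nodes.
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups, reducer):
    # Combine each key's values into a single result.
    return {k: reduce(reducer, vs) for k, vs in groups.items()}

def word_mapper(line):
    return [(w, 1) for w in line.split()]

lines = ["big data on the cluster", "the cluster stores big data"]
print(reduce_phase(shuffle(map_phase(lines, word_mapper)), lambda a, b: a + b))
```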
Procedia PDF Downloads 363
6086 A Method for Solving a Bi-Objective Transportation Problem under Fuzzy Environment
Authors: Sukhveer Singh, Sandeep Singh
Abstract:
A bi-objective fuzzy transportation problem, with the objectives of minimizing the total fuzzy cost and the total fuzzy time of transportation without assigning priorities to them, is considered. To the best of our knowledge, there is no method in the literature to find efficient solutions of the bi-objective transportation problem under uncertainty. In this paper, a bi-objective transportation problem in an uncertain environment has been formulated. An algorithm has been proposed to find efficient solutions of the bi-objective transportation problem under uncertainty. The proposed algorithm avoids degeneracy and gives the optimal solution faster than other existing algorithms for the given uncertain transportation problem.
Keywords: uncertain transportation problem, efficient solution, ranking function, fuzzy transportation problem
Procedia PDF Downloads 530
6085 Alternating Expectation-Maximization Algorithm for a Bilinear Model in Isoform Quantification from RNA-Seq Data
Authors: Wenjiang Deng, Tian Mou, Yudi Pawitan, Trung Nghia Vu
Abstract:
Estimation of isoform-level gene expression from RNA-seq data depends on simplifying assumptions, such as a uniform read distribution, that are easily violated in real data. Such violations typically lead to biased estimates. Most existing methods provide bias correction steps, which are based on biological considerations, such as GC content, and applied to single samples separately. The main problem is that not all biases are known. For example, new technologies such as single-cell RNA-seq (scRNA-seq) may introduce new sources of bias not seen in bulk-cell data. This study introduces a method called XAEM based on a more flexible and robust statistical model. Existing methods are essentially based on a linear model Xβ, where the design matrix X is known and derived based on the simplifying assumptions. In contrast, XAEM considers Xβ as a bilinear model with both X and β unknown. Joint estimation of X and β is made possible by simultaneous analysis of multi-sample RNA-seq data. Compared to existing methods, XAEM automatically performs an empirical correction of potentially unknown biases. XAEM implements an alternating expectation-maximization (AEM) algorithm, alternating between the estimation of X and of β. For speed, XAEM utilizes quasi-mapping for read alignment, leading to a fast algorithm. Overall, XAEM performs favorably compared to other recent advanced methods. For simulated datasets, XAEM obtains higher accuracy for multiple-isoform genes, particularly for paralogs. In a differential-expression analysis of a real scRNA-seq dataset, XAEM achieves substantially greater rediscovery rates in an independent validation set.
Keywords: alternating EM algorithm, bias correction, bilinear model, gene expression, RNA-seq
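A least-squares analogue of the alternating scheme (fit X with β fixed, then β with X fixed) is sketched below; XAEM itself alternates EM updates on read-count data, so the random matrices, rank, and least-squares update rule here are illustrative assumptions only.

```python
import numpy as np

def alternating_bilinear_fit(Y, p, n_iter=50):
    # Fit Y ≈ X @ B with both X (n x p) and B (p x m) unknown, updating one factor
    # while holding the other fixed (an alternating least-squares sketch).
    n, m = Y.shape
    rng = np.random.default_rng(0)
    X = rng.random((n, p))
    for _ in range(n_iter):
        B = np.linalg.lstsq(X, Y, rcond=None)[0]         # update B given X
        X = np.linalg.lstsq(B.T, Y.T, rcond=None)[0].T   # update X given B
    return X, B

Y = np.random.rand(30, 8)        # stand-in for multi-sample read-count summaries
X, B = alternating_bilinear_fit(Y, p=3)
print("residual norm:", np.linalg.norm(Y - X @ B))
```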
Procedia PDF Downloads 146
6084 Comprehensive Approach to Control Virus Infection and Energy Consumption in An Occupant Classroom
Authors: SeyedKeivan Nateghi, Jan Kaczmarczyk
Abstract:
People nowadays spend most of their time in buildings; accordingly, maintaining good indoor air quality is very important. Global concerns related to the prevalence of Covid-19 also highlight the importance of indoor air conditioning in reducing the risk of virus infection. Cooling and heating a building provides a suitable air temperature zone for its occupants. One of the significant factors in energy demand is energy consumption in buildings: in general, the building sector accounts for more than 30% of the world's primary energy requirement. As energy demand increases, greenhouse gas emissions grow and contribute to global warming. Apart from the environmental damage to the ecosystem, global warming can spread infectious diseases such as malaria, cholera, or dengue to many other parts of the world. With the emergence of Covid-19, previous guidelines for reducing energy consumption are no longer adequate, because they increase the risk of virus infection among people in the room. The two problems of high energy consumption and coronavirus infection are thus in conflict. A classroom with 30 students and one teacher in Katowice, Poland, is considered for controlling the two objectives simultaneously. The probability of disease transmission is calculated from the carbon dioxide concentration generated by the occupants, while the energy consumption over a given period is estimated with EnergyPlus. The effects of three parameters, the number, angle, and time (schedule) of window openings, on the probability of infection transmission and the energy consumption of the class were investigated. The parameters were examined over wide ranges to determine the best possible condition for the simultaneous control of infection spread and energy consumption. The number of open windows is discrete (0-3), and the two other parameters are continuous (0-180 degrees and 8 AM-2 PM). Preliminary results show that changes in the number, angle, and timing of window openings significantly impact the likelihood of virus transmission and the energy consumption of the class. The greater the number, tilt, and duration of window openings, the lower the probability of virus transmission, but the higher the energy consumption. When all the windows were closed during all class hours, the energy consumption for the first day of January was only 0.2 megajoules, while the probability of transmitting the virus per person in the classroom was more than 45%. When all windows were open at the maximum angle during class, the chance of transmitting the infection was reduced to 0.35%, but the energy consumption rose to 36 megajoules. Therefore, school classrooms need an optimal schedule to control both objectives. In this article, we present a suitable schedule for a classroom with natural ventilation through windows to control energy consumption and the possibility of infection transmission at the same time.
Keywords: Covid-19, energy consumption, building, carbon dioxide, EnergyPlus
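One common way to link indoor CO2 to airborne infection risk is a rebreathed-fraction (Rudnick-Milton style) model; the sketch below uses that form with assumed quanta emission, infector count, and exposure time, and is not necessarily the exact model used in this study.

```python
import numpy as np

def infection_probability(co2_ppm, outdoor_ppm=420.0, exhaled_ppm=38000.0,
                          n_occupants=31, n_infectors=1, quanta_per_h=10.0, exposure_h=6.0):
    # Rebreathed-fraction estimate of airborne infection risk from indoor CO2:
    # f is the fraction of inhaled air that was exhaled by other occupants, and the
    # probability follows a Wells-Riley style 1 - exp(-dose) form.
    # quanta_per_h, n_infectors and exposure_h are assumed inputs, not measured values.
    f = (co2_ppm - outdoor_ppm) / exhaled_ppm
    quanta_inhaled = f * (n_infectors / n_occupants) * quanta_per_h * exposure_h
    return 1.0 - np.exp(-quanta_inhaled)

for co2 in (600, 1000, 1800):                       # example classroom CO2 levels, ppm
    print(co2, round(infection_probability(co2), 3))
```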
Procedia PDF Downloads 104
6083 Trapping Efficiency of Highly Effective Slow Released Formulations of Biodegradable Waxes with Methyl Eugenol Against Bactrocera zonata
Authors: Waleed Afzal Naveed, Muhammd Dildar Gogi, Mubashir Iqbal, Muhammad Junaid Nisar, Muhammad Hamza Khaliq, Faisal Munir
Abstract:
An experiment was carried out to evaluate the performance of highly effective slow-released formulations (SRF) of methyl eugenol with lanolin wax, candelilla wax, bee wax, carnauba wax, and paraffin wax against fruit flies in the orchard of the University of Agriculture Faisalabad, Pakistan. The waxes were mixed with methyl eugenol in a 1:9 ratio. The results revealed that the SRFs of candelilla, paraffin, bee, and carnauba wax attracted 13.77, 11, 8.15, and 7.23 flies/day/trap, which was 2.6, 2, 1.5, and 1.4 times higher than the standard, respectively, and exhibited attractive indices of 41.42%, 32.05%, 20.98%, and 12.87%, respectively; these proved to be moderately attractive slow-released formulations to B. zonata and were categorized as Class-II slow-released formulations (AI = 11-50%). However, the SRF of lanolin wax trapped 1.81 flies/day/trap, which was 3 times less than the standard, and exhibited an attractive index of -61.86%; it proved to be a little or non-attractive slow-released formulation and was categorized as a Class-I slow-released formulation for B. zonata (AI < 11%).
Keywords: biodegradable waxes, slow-released formulation, Bactrocera zonata, methyl eugenol
Procedia PDF Downloads 261
6082 Synthesis, Characterization, and Biological Evaluation of 1,3,4-Mercaptooxadiazole Ether Derivatives Analogs as Antioxidant, Cytotoxic, and Molecular Docking Studies
Authors: Desta Gebretekle Shiferaw, Balakrishna Kalluraya
Abstract:
Oxadiazoles and their derivatives with thioether functionalities represent a new and exciting class of physiologically active heterocyclic compounds. Several molecules with these moieties play a vital role in pharmaceuticals because of their diverse biological activities. This paper describes a new class of 1,3,4-oxadiazole-2-thioethers with acetophenone, coumarin, and N-phenyl acetamide residues (S-alkylation), with the hope that the addition of various biologically active molecules will have a synergistic effect on anticancer activity. The structures of the synthesized title compounds were determined by the combined methods of IR, proton NMR, carbon-13 NMR, and mass spectrometry. Further, all the newly prepared molecules were assessed for their antioxidant activity. Furthermore, four compounds were assessed for their molecular docking interactions and cytotoxic activity. The synthesized derivatives showed moderate antioxidant activity compared to the standard BHA. The IC50 values of the title molecules (11b, 11c, 13b, and 14b) observed in the in vitro anti-cancer assays were 11.20, 15.73, 59.61, and 27.66 µg/ml at a 72-hour treatment time against the A549 cell line, respectively. The biological evaluation of the tested compounds showed that 11b is the most effective molecule in the series.
Keywords: antioxidant activity, cytotoxicity activity, molecular docking, 1,3,4-oxadiazole-2-thioether derivatives
Procedia PDF Downloads 93
6081 MapReduce Algorithm for Geometric and Topological Information Extraction from 3D CAD Models
Authors: Ahmed Fradi
Abstract:
In a digital world in perpetual evolution and acceleration, where data are increasingly voluminous, rich, and varied, the new software solutions that emerged with the Big Data phenomenon offer new opportunities to companies, enabling them not only to optimize their business and evolve their production model, but also to reorganize themselves to increase competitiveness and identify new strategic axes. Industrial design and manufacturing companies, like the others, face these challenges; data represent a major asset, provided that they know how to capture, refine, combine, and analyze them. The objective of our paper is to propose a solution allowing geometric and topological information extraction from databases of 3D CAD models (specifically STEP files), with a specific algorithm based on the MapReduce programming paradigm. Our proposal is the first step of our future approach to 3D CAD object retrieval.
Keywords: Big Data, MapReduce, 3D object retrieval, CAD, STEP format
Procedia PDF Downloads 543