Search results for: separately excited synchronous machine

2289 Optical Emission Studies of Laser Produced Lead Plasma: Measurements of Transition Probabilities of the 6p7s → 6p² Transition Array

Authors: Javed Iqbal, R. Ahmed, M. A. Baig

Abstract:

We present new data on the optical emission spectra of laser-produced lead plasma using a pulsed Nd:YAG laser at 1064 nm (pulse energy 400 mJ, pulse width 5 ns, 10 Hz repetition rate) in conjunction with a set of miniature spectrometers covering the spectral range from 200 nm to 720 nm. Well-resolved structure due to the 6p7s → 6p² transition array of neutral lead and a few multiplets of singly ionized lead have been observed. The electron temperatures have been calculated in the range (9000-10800) ± 500 K using four methods: the two-line ratio, the Boltzmann plot, the Saha-Boltzmann plot, and the Morrata method, whereas the electron number densities have been determined in the range (2.0-8.0) ± 0.6 × 10¹⁶ cm⁻³ using the Stark-broadened line profiles of neutral lead lines, singly ionized lead lines, and the hydrogen Hα line. Full widths at half maximum (FWHM) of a number of neutral and singly ionized lead lines have been extracted by Lorentzian fits to the experimentally observed line profiles. Furthermore, branching fractions have been deduced for eleven lines of the 6p7s → 6p² transition array in lead, and the absolute values of the transition probabilities have been calculated by combining the experimental branching fractions with the lifetimes of the excited levels. The new results are compared with the existing data, showing good agreement.

Keywords: LIBS, plasma parameters, transition probabilities, branching fractions, Stark width

Procedia PDF Downloads 270
2288 Hazard Alert in Malaysia Related to Occupational Safety and Health

Authors: Atikah Binti Azudin, Nurin Nazlah Binti Muhamad Yani, Nur Alya Nadhirah Binti Naaidith, Nur Amylia Wahida Binti Mat Ayob, Nurshamimi Shakirah Binti Suboh, Nur Auni Batrisyia Binti Md. Zaini, Nur Aziemah Binti Mohamad, Nurul Suffiyah Binti Sa’Dun, Sabrina Sasha Izzati Binti Zubaile, Umi Huwaina Binti Ahmiruddin, Wan Nur Shafawati Binti Wan Ghazali

Abstract:

A hazard alert is intended to provide brief information about significant incidents or existing difficulties in Department workplaces. The alert gives guidelines for proper processes, practices, and controls to be applied. When operated in accordance with the manufacturer's instructions, any machine or tool utilized at work provides a safe and dependable platform for workers to accomplish job duties. However, when not utilized appropriately, a machine can pose a major hazard to employees. Employers have a duty to keep employees safe in this scenario. This hazard alert outlines specific occupational dangers and the controls that employers must apply to prevent injury or fatal accidents. There have been several hazard alert cases in Malaysia, which have had a negative impact on workers. Nevertheless, such incidents can be prevented in a variety of ways. One is to ensure that only qualified individuals operate mobile machinery and equipment. In addition, employees can perform frequent pre-use inspections of machinery to discover and fix flaws. Hazard alerts are very important, and this study covers a variety of related subjects, including the methods employed.

Keywords: safety, hazard, impacts, duties

Procedia PDF Downloads 77
2287 Perspectives of Saudi Students on Reasons for Seeking Private Tutors in English

Authors: Ghazi Alotaibi

Abstract:

The current study examined and described the views of secondary school students and their parents on their reasons for seeking private tutors in English. These views were obtained through two group interviews with the students and parents separately. Several causes were brought up during the two interviews. These causes included difficulty of the English language, weak teacher performance, the need to pass exams with high marks, lack of parents’ follow-up of student school performance, social pressure, variability in student comprehension levels at school, weak English foundation in previous school years, repeated student absence from school, large classes, as well as English teachers’ heavy teaching loads. The study started with a description of the EFL educational system in Saudi Arabia and concluded with recommendations for the improvement of the school learning environment.

Keywords: English, learning difficulty, private tutoring, Saudi, teaching practices, learning environment

Procedia PDF Downloads 439
2286 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats face a near-intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user to help learn how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area) interact with each other, in hopes of abating poaching. This research develops a classification model, built on machine learning algorithms, to aid in forecasting future attacks; the model is both easy to train and performs well compared to other models. We demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of adopted prediction models (logistic regression, support vector machine, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research group at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching. This research introduces ensemble methods (random forests and stochastic gradient boosting) and applies them to real-world poaching data gathered from the Ugandan rain forest park rangers. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable in which a large number of observations are missing. Third, we provide an alternate approach to predict the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using stochastic gradient boosting to predict observations for non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire seasons, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous prediction schedules based on entire seasons.
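
A minimal illustrative sketch of the imputation-then-boosting pipeline described above, assuming scikit-learn and entirely synthetic placeholder features (animal density, slope, and distance to a road are invented stand-ins); the random-forest imputer approximates a single draw of the multiple-imputation schemes named in the abstract, not the authors' actual code.

```python
# Illustrative sketch only (not the authors' pipeline): random-forest-based
# imputation followed by stochastic gradient boosting, via scikit-learn.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))          # placeholder: animal density, slope, distance to road
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
X[rng.random(X.shape) < 0.2] = np.nan  # ~20% of observations missing

# Random-forest imputation (one draw of a multiple-imputation scheme).
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    max_iter=5, random_state=0)
X_imp = imputer.fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_imp, y, test_size=0.2, random_state=0)
# subsample < 1.0 is what makes the boosting "stochastic".
clf = GradientBoostingClassifier(n_estimators=200, subsample=0.8, random_state=0)
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```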

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 273
2285 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore their ability to distinguish between controls and patients using the mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals together with 6 motion parameters were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with number of components = 21. Fifteen main components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in each network), with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (rfe) for the SVM to obtain a ranking of the most predictive variables. We then built two new classifiers using only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and rfe-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
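
The following sketch illustrates the two feature-ranking routes the abstract describes (Gini importance from a random forest, and recursive feature elimination around an SVM), under stated assumptions: the 37 × 15 matrix is simulated here, and RFE is wrapped around a linear-kernel SVM because scikit-learn's RFE needs exposed coefficients, whereas the study used an RBF-SVM with rfe in R.

```python
# Sketch of the two feature-selection routes; data are simulated placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(1)
X = rng.normal(size=(37, 15))    # mean signal of 15 ICA networks per subject
y = rng.integers(0, 2, size=37)  # 0 = control, 1 = early MS (placeholder labels)

# Route 1: random forest, Gini-based importance ranking.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
rank_rf = np.argsort(rf.feature_importances_)[::-1]

# Route 2: recursive feature elimination wrapped around a linear SVM.
rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X, y)
rank_svm = np.argsort(rfe.ranking_)

print("RF top network:", rank_rf[0], "| RFE top network:", rank_svm[0])
```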

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 227
2284 i2kit: A Tool for Immutable Infrastructure Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservice architectures are increasingly used in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency, and business logic time to market. On the other hand, these architectures also introduce some challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution, and isolation of processes. However, there are other issues that remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing, or data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos, or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, which would affect running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers. Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set into other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing, and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open-source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer implies more significant disadvantages. Resource allocation is greatly improved by using linuxkit, which introduces a very small footprint (around 35 MB). Also, the system is more secure, since linuxkit installs the minimum set of dependencies required to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
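
A hedged sketch of the i2kit idea, not the actual tool: a declarative microservice definition is turned into a simplified CloudFormation-style template in which each microservice becomes a set of immutable VM instances behind a load balancer. The AMI identifier is a placeholder, and required properties such as availability zones are omitted for brevity.

```python
# Hedged sketch (not the real i2kit): microservice definition -> simplified
# CloudFormation-style template. The AMI would be built with linuxkit.
import json

definition = {
    "api": {"replicas": 2, "image": "ami-hypothetical-linuxkit"},  # placeholder AMI id
}

def to_cloudformation(services):
    resources = {}
    for name, spec in services.items():
        # One immutable VM per replica; redeploys replace instances, never mutate them.
        for i in range(spec["replicas"]):
            resources[f"{name}Instance{i}"] = {
                "Type": "AWS::EC2::Instance",
                "Properties": {"ImageId": spec["image"], "InstanceType": "t3.micro"},
            }
        # A classic load balancer exposes the replicas; its endpoint gives
        # other microservices service discovery via an environment variable.
        resources[f"{name}LoadBalancer"] = {
            "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
            "Properties": {
                "Instances": [{"Ref": f"{name}Instance{i}"} for i in range(spec["replicas"])],
                "Listeners": [{"LoadBalancerPort": "80", "InstancePort": "80",
                               "Protocol": "HTTP"}],
            },
        }
    return {"AWSTemplateFormatVersion": "2010-09-09", "Resources": resources}

print(json.dumps(to_cloudformation(definition), indent=2))
```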

Keywords: container, deployment, immutable infrastructure, microservice

Procedia PDF Downloads 167
2283 Machine Learning Approaches Based on Recency, Frequency, Monetary (RFM) and K-Means for Predicting Electrical Failures and Voltage Reliability in Smart Cities

Authors: Panaya Sudta, Wanchalerm Patanacharoenwong, Prachya Bumrungkun

Abstract:

With the evolution of smart grids, ensuring the reliability and efficiency of electrical systems in smart cities has become crucial. This paper proposes a distinct approach that combines advanced machine learning techniques to accurately predict electrical failures and address voltage reliability issues, aiming to improve the accuracy and efficiency of reliability evaluations in smart cities. The aim of this research is to develop a comprehensive predictive model that accurately predicts electrical failures and voltage reliability in smart cities; the model integrates RFM analysis, K-means clustering, and LSTM networks to achieve this objective. The research utilizes RFM analysis, traditionally used in customer value assessment, to categorize and analyze electrical components based on their failure recency, frequency, and monetary impact. K-means clustering is employed to segment electrical components into distinct groups with similar characteristics and failure patterns. LSTM networks are used to capture the temporal dependencies and patterns in the data. This integration of RFM, K-means, and LSTM results in a robust predictive tool for electrical failures and voltage reliability. The proposed model has been tested and validated on diverse electrical utility datasets. The results show a significant improvement in prediction accuracy and reliability compared to traditional methods, achieving an accuracy of 92.78% and an F1-score of 0.83. The research addresses the question of how accurately electrical failures and voltage reliability can be predicted in smart cities, and investigates the effectiveness of integrating RFM analysis, K-means clustering, and LSTM networks in achieving this goal. The proposed approach presents a distinct, efficient, and effective solution for predicting and mitigating electrical failures and voltage issues in smart cities, significantly improving prediction accuracy and reliability compared to traditional methods. This advancement contributes to the proactive maintenance and optimization of electrical infrastructures, overall energy management, and sustainability in smart cities, and demonstrates the potential of advanced machine learning techniques to transform the landscape of electrical system management.
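
A small sketch of the RFM-plus-K-means stage, assuming a hypothetical log of component failure events (the column names component_id, date, and repair_cost are invented for illustration); the LSTM stage is omitted.

```python
# Sketch of RFM feature construction and K-means segmentation over a
# hypothetical component-failure log; column names are invented.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

log = pd.DataFrame({
    "component_id": ["T1", "T1", "T2", "T3", "T3", "T3"],
    "date": pd.to_datetime(["2023-01-05", "2023-06-01", "2023-03-10",
                            "2023-02-01", "2023-04-15", "2023-06-20"]),
    "repair_cost": [120.0, 300.0, 80.0, 60.0, 90.0, 150.0],
})
now = log["date"].max()

# Recency (days since last failure), Frequency (failure count), Monetary (total cost).
rfm = log.groupby("component_id").agg(
    recency=("date", lambda d: (now - d.max()).days),
    frequency=("date", "count"),
    monetary=("repair_cost", "sum"),
)

# K-means on standardized RFM features segments components by failure behaviour.
rfm["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(rfm))
print(rfm)
```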

Keywords: electrical state prediction, smart grids, data-driven method, long short-term memory, RFM, k-means, machine learning

Procedia PDF Downloads 39
2282 Radiochemical Purity of 68Ga-BCA-Peptides: Separation of All 68Ga Species with a Single iTLC Strip

Authors: Anton A. Larenkov, Alesya Ya Maruk

Abstract:

In the present study, a highly effective single-strip iTLC method for the determination of radiochemical purity (RCP) of 68Ga-BCA-peptides was developed (with no double development, changing of eluents, or other additional manipulation). The method uses iTLC-SG strips and the commonly used aqueous TFA eluent (3-5% (v/v)). It allows determining each of the key radiochemical forms of 68Ga (colloidal, bound, ionic) separately, with peak separation of no less than 4σ: Rf = 0.0-0.1 for 68Ga colloid; Rf = 0.5-0.6 for 68Ga-BCA-peptides; Rf = 0.9-1.0 for ionic 68Ga. The method is simple and fast: for a developing length of 75 mm, only 4-6 min is required (versus 18-20 min for the pharmacopoeial method). The method has been tested on various compounds (including 68Ga-DOTA-TOC, 68Ga-DOTA-TATE, 68Ga-NODAGA-RGD2, etc.). Cross-validation for every specific form of 68Ga showed good correlation between the developed method and the control (pharmacopoeial) methods. The method can become a convenient and much more informative replacement for pharmacopoeial methods, including HPLC.

Keywords: DOTA-TATE, 68Ga, quality control, radiochemical purity, radiopharmaceuticals, TLC

Procedia PDF Downloads 277
2281 Field Oriented Control of Electrical Motor for Efficiency Improvement of Aerial Vehicle

Authors: Francois Defay

Abstract:

Unmanned aerial vehicles (UAVs) are increasingly used in many applications. Long-endurance UAVs are required for inspection or transportation in some remote places. Global optimization of efficiency is the aim of the work at ISAE-SUPAERO: from the propulsive part to the motor control, global optimization can significantly increase overall efficiency. This paper deals with the global improvement of the efficiency of electrical propulsion for an aerial vehicle. The case study is a small airplane of 2 kg. A global model is presented in order to validate the electrical motor in a complete simulation, from aerodynamics to battery. The classical control of the permanent-magnet synchronous drive is compared to field-oriented control, which is not yet applied to UAVs. The experimental results presented show an increase of more than 10 percent in efficiency. A complete model and simulation based on Matlab/Simulink are presented in this paper and compared to the experimental study. Finally, this paper presents solutions to increase the endurance of the electrical aerial vehicle and provides models to optimize the global consumption for a specific mission. The next step is to use this model and control scheme with distributed propulsion, which is the future for short-range aircraft.
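
For concreteness, a minimal numeric sketch of field-oriented control: Clarke and Park transforms map the three sampled phase currents into the rotor frame, where two PI loops regulate the flux-producing current i_d and the torque-producing current i_q. The gains, angle, and current samples are illustrative values, not those of the paper's 2 kg airplane.

```python
# Minimal FOC sketch: Clarke/Park transforms plus two PI current loops.
import numpy as np

def clarke(ia, ib, ic):
    """abc -> alpha/beta (amplitude-invariant form)."""
    i_alpha = (2 / 3) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = (2 / 3) * (np.sqrt(3) / 2) * (ib - ic)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    """alpha/beta -> d/q using the electrical rotor angle theta."""
    i_d = i_alpha * np.cos(theta) + i_beta * np.sin(theta)
    i_q = -i_alpha * np.sin(theta) + i_beta * np.cos(theta)
    return i_d, i_q

class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.integral = kp, ki, dt, 0.0
    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

pi_d, pi_q = PI(0.5, 50.0, 1e-4), PI(0.5, 50.0, 1e-4)
theta = 0.3                             # rotor electrical angle (rad), from an encoder
ia, ib, ic = 1.0, -0.4, -0.6            # sampled phase currents (A), illustrative
i_d, i_q = park(*clarke(ia, ib, ic), theta)
v_d = pi_d.step(0.0 - i_d)              # flux reference i_d* = 0 for a PMSM
v_q = pi_q.step(1.2 - i_q)              # torque reference i_q* sets the torque
print(f"v_d={v_d:.3f}, v_q={v_q:.3f}")  # these feed the inverse Park + PWM stage
```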

Keywords: electrical propulsion, endurance, field-oriented control, UAV

Procedia PDF Downloads 224
2280 Applying Artificial Neural Networks to Predict Speed Skater Impact Concussion Risk

Authors: Yilin Liao, Hewen Li, Paula McConvey

Abstract:

Speed skaters often face a risk of concussion when they fall on the ice and impact crash mats during practices and competitive races. Several variables, including those related to the skater, the crash mat, and the impact position (body side/head/feet impact), are believed to influence the severity of the skater's concussion. While computer simulation modeling can be employed to analyze these accidents, the simulation process is time-consuming and does not provide rapid information for coaches and teams to assess the skater's injury risk in competitive events. This research explores the feasibility of using AI techniques to evaluate a skater's potential concussion severity and develops a fast concussion prediction tool using artificial neural networks to reduce the risk of treatment delays for injured skaters. The primary data is collected through virtual tests and physical experiments designed to simulate skater-mat impact. It is then analyzed to identify patterns and correlations; finally, it is used to train and fine-tune the artificial neural networks for accurate prediction. The development of the prediction tool by employing machine learning strategies contributes to the application of AI methods in sports science and has theoretical implications for using AI techniques in predicting and preventing sports-related injuries.
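
A hedged sketch of the kind of network involved: a small feed-forward neural network mapping impact features to a severity score. The feature names (impact speed, mat stiffness, impact-site code) and the synthetic data are placeholders, not the paper's virtual-test or experimental data.

```python
# Sketch: feed-forward network regressing a severity score from impact
# features; features and data are invented placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = np.column_stack([
    rng.uniform(5, 15, 300),   # impact speed (m/s)
    rng.uniform(10, 60, 300),  # crash-mat stiffness (kPa)
    rng.integers(0, 3, 300),   # impact site: 0=body side, 1=head, 2=feet
])
severity = (0.1 * X[:, 0] - 0.02 * X[:, 1]
            + 0.5 * (X[:, 2] == 1) + rng.normal(0, 0.1, 300))

X_tr, X_te, y_tr, y_te = train_test_split(X, severity, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("R^2 on held-out impacts:", round(net.score(X_te, y_te), 3))
```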

Keywords: artificial neural networks, concussion, machine learning, impact, speed skater

Procedia PDF Downloads 80
2279 Developing a Virtual Reality System to Assist in Anatomy Teaching and Evaluating the Effectiveness of That System

Authors: Tarek Abdelkader, Suresh Selvaraj, Prasad Iyer, Yong Mun Hin, Hajmath Begum, P. Gopalakrishnakone

Abstract:

Nowadays, more and more educational institutes, as well as students, rely on 3D anatomy programs as an important tool that helps students correlate the actual locations of anatomical structures in 3D space. Lately, virtual reality (VR) has been gaining favor with the younger generations due to its highly interactive mode. As a result, using virtual reality as a gamified learning platform for anatomy became the current goal. We present a model in which a Virtual Human Anatomy Program (VHAP) was developed to assist with the anatomy learning experience of students. The anatomy module has been built mostly from real patient CT scans. Segmentation and surface rendering were used to create the 3D model by directly segmenting the CT scans for each organ individually and exporting that model as a 3D file. After acquiring the 3D files for all needed organs, all the files were introduced into a virtual reality environment as a complete body anatomy model. In this ongoing experiment, students from different Allied Health orientations are testing the VHAP. Specifically, the cardiovascular system has been selected as the focus system of study, since all of our students finished learning about it in the 1st trimester. The initial results suggest that the VHAP system is adding value to the learning process of our students, encouraging them to get more involved and to ask more questions. Comments from participating students show that they are excited about the VHAP system, remarking on its interactivity as well as the ability to use it solo as a self-learning aid in combination with the lectures. Some students also experienced minor side effects like dizziness.

Keywords: 3D construction, health sciences, teaching pedagogy, virtual reality

Procedia PDF Downloads 145
2278 Melanoma and Non-Melanoma Skin Lesion Classification Using a Deep Learning Model

Authors: Shaira L. Kee, Michael Aaron G. Sy, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar AlDahoul

Abstract:

Skin diseases are considered the fourth most common disease, with melanoma and non-melanoma skin cancer being the most common types of cancer in Caucasians. The alarming increase in skin cancer cases shows an urgent need for further research to improve diagnostic methods, as early diagnosis can significantly improve the 5-year survival rate. Machine learning algorithms for image pattern analysis in diagnosing skin lesions can dramatically increase the accuracy of detection and decrease possible human errors. Several studies have shown that the diagnostic performance of computer algorithms can outperform that of dermatologists. However, existing methods still need improvements to reduce diagnostic errors and generate efficient and accurate results. Our paper proposes an ensemble method to classify dermoscopic images into benign and malignant skin lesions. The experiments were conducted using the International Skin Imaging Collaboration (ISIC) image samples. The dataset contains 3,297 dermoscopic images with benign and malignant categories. The results show improved performance, with an accuracy of 88% and an F1 score of 87%, outperforming other existing models such as the support vector machine (SVM), residual network (ResNet50), EfficientNetB0, EfficientNetB4, and VGG16.
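
The core of such an ensemble is soft voting over the member networks' class probabilities. In the sketch below, random probability matrices stand in for the softmax outputs of models such as ResNet50, EfficientNet, and VGG16; it shows the fusion step only, not the paper's trained models.

```python
# Soft-voting sketch: average per-model class probabilities, then arg-max.
import numpy as np

rng = np.random.default_rng(3)
n_images = 5
# Per-model class probabilities for [benign, malignant]; random stand-ins
# for each network's softmax output over the same batch of images.
p_resnet = rng.dirichlet([2, 2], n_images)
p_effnet = rng.dirichlet([2, 2], n_images)
p_vgg = rng.dirichlet([2, 2], n_images)

# Soft voting: average the probabilities, then take the arg-max class.
p_ensemble = (p_resnet + p_effnet + p_vgg) / 3
labels = p_ensemble.argmax(axis=1)  # 0 = benign, 1 = malignant
print(p_ensemble.round(2), labels)
```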

Keywords: deep learning, VGG16, EfficientNet, CNN, ensemble, dermoscopic images, melanoma

Procedia PDF Downloads 68
2277 Noise Measurement and Awareness at Construction Site: A Case Study

Authors: Feiruz Ab'lah, Zarini Ismail, Mohamad Zaki Hassan, Siti Nadia Mohd Bakhori, Mohamad Azlan Suhot, Mohd Yusof Md. Daud, Shamsul Sarip

Abstract:

The construction industry is one of the major sectors in Malaysia. Apart from providing facilities, services, and goods, it also offers employment opportunities to local and foreign workers. In fact, construction workers are exposed to hazardous levels of noise generated from various sources, including excavators, bulldozers, concrete mixers, and piling machines. Previous studies indicated that piling and concrete work were the main sources contributing to the highest noise levels. Therefore, the aim of this study is to obtain the noise exposure during the piling process and to determine the awareness of workers of noise pollution at the construction site. Initially, noise readings were obtained at the construction site using a digital sound level meter (SLM), and noise exposure to the workers was mapped. Readings were taken at four different distances: 5, 10, 15, and 20 meters from the piling machine. Furthermore, a questionnaire was also distributed to assess knowledge regarding noise pollution at the construction site. The results showed that the mean noise level at a 5 m distance was more than 90 dB, which exceeded the recommended level. Although the level of awareness regarding the effect of noise pollution is satisfactory, the majority of workers (90%) still did not wear ear protection devices during the work period. Therefore, safety module guidelines related to noise pollution controls should be implemented to provide a safe working environment and prevent initial occupational hearing loss.
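
As a rough plausibility check on the distance readings, one can assume free-field point-source spreading, under which the level drops 6 dB per doubling of distance: L(d) = L(d0) - 20*log10(d/d0). The sketch below anchors this at the reported 90 dB mean at 5 m; the free-field assumption is ours, not the paper's.

```python
# Expected sound level vs distance under free-field point-source spreading,
# anchored at the reported 90 dB mean at 5 m (assumption: no reflections).
import math

L0, d0 = 90.0, 5.0                 # measured mean level at 5 m (dB)
for d in (5, 10, 15, 20):
    L = L0 - 20 * math.log10(d / d0)
    print(f"{d:>2} m: {L:.1f} dB")  # e.g. 10 m -> 84.0 dB, 20 m -> 78.0 dB
```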

Keywords: construction, noise awareness, noise pollution, piling machine

Procedia PDF Downloads 362
2276 Hydrometallurgical Treatment of Abu Ghalaga Ilmenite Ore

Authors: I. A. Ibrahim, T. A. Elbarbary, N. Abdelaty, A. T. Kandil, H. K. Farhan

Abstract:

The present work aims to study the leaching of Abu Ghalaga ilmenite ore by hydrochloric acid, with simultaneous reduction by iron powder, to dissolve its titanium and iron contents. The iron content in the produced liquor is separated by solvent extraction using TBP as a solvent. All parameters affecting the efficiency of the dissolution process were studied separately, including the acid concentration, the solid/liquid ratio (which controls the ilmenite/acid molar ratio), temperature, time, and grain size. The optimum conditions at which maximum leaching occurs are 30% HCl acid with a solid/liquid ratio of 1/30 at 80 °C for 4 h, using ore ground to -350 mesh size. Likewise, all parameters affecting the solvent extraction and stripping of the iron content from the produced liquor were studied. Results show that the best extraction is at solvent/solution 1/1, shaking at 240 RPM for 45 minutes at 30 °C, whereas the best stripping of iron is at H₂O/solvent 2/1.

Keywords: ilmenite ore, leaching, titanium solvent extraction, Abu Ghalaga ilmenite ore

Procedia PDF Downloads 272
2275 Improved Classification Procedure for Imbalanced and Overlapped Situations

Authors: Hankyu Lee, Seoung Bum Kim

Abstract:

The issue of imbalance and overlap in the class distribution is important in various applications of data mining. An imbalanced dataset is a special case in classification problems in which the number of observations of one class (i.e., the major class) heavily exceeds the number of observations of the other class (i.e., the minor class). An overlapped dataset is one in which many observations are shared between the two classes. Imbalanced and overlapped data are frequently found in many real examples, including fraud and abuse detection in healthcare, quality prediction in manufacturing, text classification, oil spill detection, remote sensing, and so on. The class imbalance and overlap problem is a challenging issue because it degrades the performance of most standard classification algorithms. In this study, we propose a classification procedure that can effectively handle imbalanced and overlapped datasets by splitting the data space into three parts (non-overlapping, lightly overlapping, and severely overlapping) and applying a classification algorithm in each part. These three parts were determined based on the Hausdorff distance and the margin of a modified support vector machine. An experimental study was conducted to examine the properties of the proposed method and compare it with other classification algorithms. The results showed that the proposed method outperformed the competitors under various imbalanced and overlapped situations. Moreover, the applicability of the proposed method was demonstrated through an experiment with real data.
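
The split is driven by two quantities, and the sketch below computes both on toy 2-D data: the symmetric Hausdorff distance between the two classes and the geometric margin of a fitted SVM. The paper's modified SVM and the actual three-way partition rule are not reproduced.

```python
# Sketch: the two quantities the data-space split is based on, on toy data.
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from sklearn.svm import SVC

rng = np.random.default_rng(4)
major = rng.normal(0.0, 1.0, size=(200, 2))  # majority class
minor = rng.normal(1.5, 1.0, size=(20, 2))   # minority class, overlapping

# Symmetric Hausdorff distance between the two point sets.
h = max(directed_hausdorff(major, minor)[0], directed_hausdorff(minor, major)[0])

X = np.vstack([major, minor])
y = np.array([0] * len(major) + [1] * len(minor))
svm = SVC(kernel="linear", C=1.0).fit(X, y)
margin = 2.0 / np.linalg.norm(svm.coef_)     # geometric margin width

print(f"Hausdorff distance: {h:.2f}, SVM margin: {margin:.2f}")
```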

Keywords: classification, imbalanced data with class overlap, split data space, support vector machine

Procedia PDF Downloads 296
2274 Structural and Modal Analyses of an S1223 High-Lift Airfoil Wing for Drone Design

Authors: Johnson Okoduwa Imumbhon, Mohammad Didarul Alam, Yiding Cao

Abstract:

Structural analyses are commonly employed to test the integrity of aircraft component systems in the design stage, to demonstrate the capability of the structural components to withstand what they were designed for, as well as to predict potential failure of the components. The analyses are also essential for weight minimization and for selecting the most resilient materials that will provide optimal outcomes. This research focuses on testing the structural behavior of a high-lift, low-Reynolds-number airfoil profile design, the Selig S1223, under certain loading conditions for a drone model application. The wing (ribs, spars, and skin) of the drone model was made of carbon fiber-reinforced polymer and designed in SolidWorks, while the finite element analysis was carried out in ANSYS Mechanical in conjunction with the lift and drag forces derived from the aerodynamic airfoil analysis. Additionally, modal analysis was performed to calculate the natural frequencies and the mode shapes of the wing structure. The structural strain and stress results confirmed minimal deformations under the wing loading conditions, and the modal analysis showed the prominent modes that were excited by the given forces. The research findings from the structural analysis of the S1223 high-lift airfoil indicated that it is applicable for use in an unmanned aerial vehicle as well as in a novel reciprocating-airfoil-driven vertical take-off and landing (VTOL) drone model.

Keywords: CFRP, finite element analysis, high-lift, S1223, strain, stress, VTOL

Procedia PDF Downloads 204
2273 Merging Sequence Diagrams Based on Slicing

Authors: Bouras Zine Eddine, Talai Abdelouaheb

Abstract:

The need to merge software artifacts seems inherent to modern software development. Distributing development over several teams and breaking tasks into smaller, more manageable pieces are effective means of dealing with this kind of complexity. In each case, the separately developed artifacts need to be assembled as efficiently as possible into a consistent whole in which the parts still function as described. Moreover, the earlier changes are introduced into the life cycle, the easier they are for designers to manage. Interaction-based specifications such as UML sequence diagrams have been found effective in this regard. As a result, sequence diagrams can be used not only for capturing system behaviors but also for merging changes in order to create a new version. The objective of this paper is to suggest a new approach to the problem of software merging at the level of sequence diagrams, using the concept of dependence analysis, which formally captures all mappings and differences between elements of sequence diagrams and serves as a key concept for creating a new version of a sequence diagram.

Keywords: system behaviors, sequence diagram merging, dependence analysis, sequence diagram slicing

Procedia PDF Downloads 330
2272 Smart Disassembly of Waste Printed Circuit Boards: The Role of IoT and Edge Computing

Authors: Muhammad Mohsin, Fawad Ahmad, Fatima Batool, Muhammad Kaab Zarrar

Abstract:

The integration of the Internet of Things (IoT) and edge computing devices offers a transformative approach to electronic waste management, particularly in the dismantling of printed circuit boards (PCBs). This paper explores how these technologies optimize operational efficiency and improve environmental sustainability by addressing challenges such as data security, interoperability, scalability, and real-time data processing. Proposed solutions include advanced machine learning algorithms for predictive maintenance, robust encryption protocols, and scalable architectures that incorporate edge computing. Case studies from leading e-waste management facilities illustrate benefits such as improved material recovery efficiency, reduced environmental impact, improved worker safety, and optimized resource utilization. The findings highlight the potential of IoT and edge computing to revolutionize e-waste dismantling and make the case for a collaborative approach between policymakers, waste management professionals, and technology developers. This research provides important insights into the use of IoT and edge computing to make significant progress in the sustainable management of electronic waste.

Keywords: Internet of Things, edge computing, waste PCB disassembly, electronic waste management, data security, interoperability, machine learning, predictive maintenance, sustainable development

Procedia PDF Downloads 4
2271 A Photoemission Study of Dye Molecules Deposited by Electrospray on Rutile TiO2 (110)

Authors: Nouf Alharbi, James O'shea

Abstract:

For decades, renewable energy sources have received considerable global interest due to the increase in fossil fuel consumption. The abundant energy provided by sunlight makes dye-sensitised solar cells (DSSCs) a promising alternative to conventional silicon and thin-film solar cells due to their transparency and tunable colours, which make them suitable for applications such as windows and glass facades. The transfer of an excited electron onto the surface is an important process in the DSSC system, so different groups of dye molecules were studied on the rutile TiO2 (110) surface. The study of organic dyes has become of interest to researchers because ruthenium is a rare and expensive metal, while metal-free organic dyes have many attractive features, such as high molar extinction coefficients, low manufacturing costs, and ease of structural modification and synthesis. Some groups have developed organic dyes that exhibit light-harvesting efficiencies ranging between 4% and 8%. Since most dye molecules are too complicated or fragile to be deposited by thermal evaporation or sublimation in ultra-high vacuum (UHV), all dyes in this study (i.e., D5, SC4, and R6) were deposited in situ using the electrospray deposition technique combined with X-ray photoelectron spectroscopy (XPS) as an alternative method for obtaining high-quality monolayers on titanium dioxide. These organic molecules adsorbed onto rutile TiO2 (110) are explored by XPS, which can be used to obtain element-specific information on the chemical structure and to study bonding and interaction sites on the surface.

Keywords: dyes, deposition, electrospray, molecules, organic, rutile, sensitised, XPS

Procedia PDF Downloads 62
2270 A Machine Learning Approach for Earthquake Prediction in Various Zones Based on Solar Activity

Authors: Viacheslav Shkuratskyy, Aminu Bello Usman, Michael O’Dea, Saifur Rahman Sabuj

Abstract:

This paper examines relationships between solar activity and earthquakes by applying machine learning techniques: K-nearest neighbour, support vector regression, random forest regression, and a long short-term memory network. Data from the SILSO World Data Center, the NOAA National Center, the GOES satellite, NASA OMNIWeb, and the United States Geological Survey were used for the experiment. The 23rd and 24th solar cycles, daily sunspot number, solar wind velocity, proton density, and proton temperature were all included in the dataset. The study also examined sunspots, solar wind, and solar flares, which all reflect solar activity, and the earthquake frequency distribution by magnitude and depth. The findings showed that the long short-term memory network model predicts earthquakes more accurately than the other models applied in the study, and that solar activity is more likely to affect earthquakes of lower magnitude and shallow depth than earthquakes of magnitude 5.5 or larger with intermediate or deep depth.
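
A minimal PyTorch sketch of the best-performing setup, under stated assumptions: an LSTM reads a window of daily solar-activity features and emits a logit for whether an earthquake above some magnitude follows. The window length, feature set, and tensors are random placeholders, not the study's data.

```python
# Sketch: LSTM over windows of daily solar-activity features -> binary logit.
import torch
import torch.nn as nn

class SolarLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):             # x: (batch, days, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # logit from the last time step

# 64 windows of 30 days x 4 features (e.g. sunspot number, wind velocity,
# proton density, proton temperature); values are random placeholders.
x = torch.randn(64, 30, 4)
y = torch.randint(0, 2, (64, 1)).float()
model = SolarLSTM()
loss = nn.BCEWithLogitsLoss()(model(x), y)
loss.backward()
print("initial loss:", loss.item())
```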

Keywords: k-nearest neighbour, support vector regression, random forest regression, long short-term memory network, earthquakes, solar activity, sunspot number, solar wind, solar flares

Procedia PDF Downloads 60
2269 An Improved Face Recognition Algorithm Using Histogram-Based Features in Spatial and Frequency Domains

Authors: Qiu Chen, Koji Kotani, Feifei Lee, Tadahiro Ohmi

Abstract:

In this paper, we propose an improved face recognition algorithm using histogram-based features in the spatial and frequency domains. To add spatial information of the face and improve recognition performance, a region-division (RD) method is utilized. The facial area is first divided into several regions; then feature vectors of each facial part are generated by a Binary Vector Quantization (BVQ) histogram using DCT coefficients in the low-frequency domain, as well as by a Local Binary Pattern (LBP) histogram in the spatial domain. Recognition results for different regions are first obtained separately and then fused by weighted averaging. The publicly available ORL database, which consists of 40 subjects with 10 images per subject containing variations in lighting, pose, and expression, is used for the evaluation of our proposed algorithm. It is demonstrated that face recognition using the RD method can achieve a much higher recognition rate.
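
The spatial half of the method can be sketched as follows: divide the face image into regions, compute an LBP histogram per region, and concatenate the histograms into one feature vector. The DCT/BVQ frequency-domain features and the weighted fusion of per-region results are omitted, and the image is a random stand-in for an ORL face.

```python
# Sketch of region-division LBP features (spatial-domain half only).
import numpy as np
from skimage.feature import local_binary_pattern

img = np.random.randint(0, 256, (112, 92)).astype(np.uint8)  # ORL-sized stand-in
P, R, n_bins = 8, 1, 10  # 8-neighbour uniform LBP -> P + 2 = 10 histogram bins

feats = []
for rows in np.array_split(np.arange(img.shape[0]), 2):  # 2x2 region division
    for cols in np.array_split(np.arange(img.shape[1]), 2):
        patch = img[np.ix_(rows, cols)]
        lbp = local_binary_pattern(patch, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
        feats.append(hist)
feature_vector = np.concatenate(feats)  # compared across faces, e.g. by chi-square
print(feature_vector.shape)             # (40,) = 4 regions x 10 bins
```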

Keywords: binary vector quantization (BVQ), DCT coefficients, face recognition, local binary patterns (LBP)

Procedia PDF Downloads 334
2268 Effects of Temperature and Enzyme Concentration on Quality of Pineapple and Pawpaw Blended Juice

Authors: Ndidi F. Amulu, Calistus N. Ude, Patrick E. Amulu, Nneka N. Uchegbu

Abstract:

The effects of temperature and enzyme concentration on the quality of mixed pineapple and pawpaw blended fruit juice were studied. Extracts of the two fruit juices were separately treated at 70 °C for 15 min each so as to inactivate micro-organisms. They were analyzed and blended in different proportions: 70% pawpaw and 30% pineapple, 60% pawpaw and 40% pineapple, 50% pawpaw and 50% pineapple, and 40% pawpaw and 60% pineapple. The characterization of the fresh pawpaw and pineapple juices before blending showed that the juices were of good quality. The high water content of the product may explain the low viscosity, vitamin C content, and total soluble solids of the blended juice. The effects of the process parameters on quality showed that better quality blended juice can be obtained within the optimum temperature range (50-70 °C) and enzyme concentration range (0.12-0.18 w/v). The 60% pineapple : 40% pawpaw blend had the best quality. This showed that pawpaw and pineapple juices can be blended effectively to produce a quality juice.

Keywords: clarification, pawpaw, pineapple, viscosity, vitamin C

Procedia PDF Downloads 293
2267 Near-Infrared Optogenetic Manipulation of a Channelrhodopsin via Upconverting Nanoparticles

Authors: Kanchan Yadav, Ai-Chuan Chou, Rajesh Kumar Ulaganathan, Hua-De Gao, Hsien-Ming Lee, Chien-Yuan Pan, Yit-Tsong Chen

Abstract:

Optogenetics is an innovative technology now widely adopted by researchers in different fields of the biological sciences. However, due to the weak tissue penetration capability of the short wavelengths used to activate light-sensitive proteins, an invasive light guide has been used in animal studies for photoexcitation of target tissues. Upconverting nanoparticles (UCNPs), which transform near-infrared (NIR) light to short-wavelength emissions, can help address this issue. To improve optogenetic performance, we enhance the target selectivity for optogenetic controls by specifically conjugating the UCNPs with light-sensitive proteins at a molecular level, which shortens the distance as well as enhances the efficiency of energy transfer. We tagged V5 and Lumio epitopes to the extracellular N-terminal of channelrhodopsin-2 with an mCherry conjugated at the intracellular C-terminal (VL-ChR2m) and then bound NeutrAvidin-functionalized UCNPs (NAv-UCNPs) to the VL-ChR2m via a biotinylated antibody against V5 (bV5-Ab). We observed an apparent energy transfer from the excited UCNP (donor) to the bound VL-ChR2m (receptor) by measuring emission-intensity changes at the donor-receptor complex. The successful patch-clamp electrophysiological test and an intracellular Ca2+ elevation observed in the designed UCNP-ChR2 system under optogenetic manipulation confirmed the practical employment of UCNP-assisted NIR-optogenetic functionality. This work represents a significant step toward improving therapeutic optogenetics.

Keywords: Channelrhodopsin-2, near infrared, optogenetics, upconverting nanoparticles

Procedia PDF Downloads 264
2266 Reconstructability Analysis for Landslide Prediction

Authors: David Percy

Abstract:

Landslides are a geologic phenomenon that affects a large number of inhabited places and are constantly being monitored and studied for the prediction of future occurrences. Reconstructability analysis (RA) is a methodology for extracting informative models from large volumes of data that works exclusively with discrete data. While RA has been used extensively in medical applications and social science, we are introducing it to the spatial sciences through applications like landslide prediction. Since RA works exclusively with discrete data, such as soil classification or bedrock type, working with continuous data, such as porosity, requires that these data be binned for inclusion in the model. RA constructs models of the data that pick out the most informative elements, the independent variables (IVs), from each layer that predict the dependent variable (DV), landslide occurrence. Each layer included in the model retains its classification data as the primary encoding of the data. Unlike other machine learning algorithms that force the data into one-hot-encoding schemes, RA works directly with the data as it is encoded, with the exception of continuous data, which must be binned. The usual physical and derived layers are included in the model, and testing our results against other published methodologies, such as neural networks, yields similar accuracy but with the advantage of a completely transparent model. The results of an RA session with a data set are a report on every combination of variables and their probability of landslide events occurring. In this way, every informative combination of states can be examined.
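
The workflow can be sketched as: bin the continuous layers, keep the categorical layers as-is, and tabulate the landslide probability for every observed combination of IV states. The layer names below (soil, bedrock, porosity) are illustrative placeholders, not the study's layers.

```python
# Sketch of the RA-style tabulation: discrete layers stay as encoded,
# continuous layers are binned, then landslide probability is reported
# per combination of states. Layer names and values are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "soil": rng.choice(["clay", "sand"], 1000),        # already discrete
    "bedrock": rng.choice(["basalt", "shale"], 1000),  # already discrete
    "porosity": rng.uniform(0.1, 0.6, 1000),           # continuous -> must be binned
    "landslide": rng.integers(0, 2, 1000),
})
df["porosity_bin"] = pd.cut(df["porosity"], bins=3, labels=["low", "med", "high"])

# Probability of a landslide for every observed combination of IV states.
table = (df.groupby(["soil", "bedrock", "porosity_bin"], observed=True)["landslide"]
           .agg(["mean", "count"])
           .rename(columns={"mean": "p_landslide"}))
print(table)
```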

Keywords: reconstructability analysis, machine learning, landslides, raster analysis

Procedia PDF Downloads 47
2265 Oil Contents, Mineral Compositions, and Their Correlations in Wild and Cultivated Safflower Seeds

Authors: Rahim Ada, Mustafa Harmankaya, Sadiye Ayse Celik

Abstract:

The safflower seed contains about 25-40% solvent extract and 20-33% fiber. It is well known that dietary phospholipids effectively lower serum cholesterol levels. The nutrient composition of safflower seed changes depending on region, soil, and genotype. This research used six naturally selected lines (A22, A29, A30, C12, E1, F4, G8, G12, J27) and three commercial varieties (Remzibey, Dincer, Black Sun1) of safflower. The research was conducted under field conditions for two years (2009 and 2010) in a randomized complete block design with three replications under the ecological conditions of Konya, Turkey. Oil contents, mineral contents, and their correlations were determined. According to the results, oil content ranged from 22.38% to 34.26%, while the mineral contents were within the following ranges: 1469.04-2068.07 mg kg⁻¹ for Ca, 7.24-11.71 mg kg⁻¹ for B, 13.29-17.41 mg kg⁻¹ for Cu, 51.00-79.35 mg kg⁻¹ for Fe, 3988-6638.34 mg kg⁻¹ for K, 1418.61-2306.06 mg kg⁻¹ for Mg, 11.37-17.76 mg kg⁻¹ for Mn, 4172.33-7059.58 mg kg⁻¹ for P, and 32.60-59.00 mg kg⁻¹ for Zn. Correlation analyses conducted separately for the commercial varieties and the wild lines showed that, in the commercial varieties, a high oil content was negatively correlated with all the investigated minerals except K and Zn.
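
The group-wise correlation step amounts to computing Pearson correlations of oil content with each mineral separately per group, as sketched below with random stand-in values for the measured contents.

```python
# Sketch of separate per-group Pearson correlations (values are random
# stand-ins within the reported ranges, not the measured data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
data = pd.DataFrame({
    "group": ["commercial"] * 9 + ["wild"] * 27,
    "oil": rng.uniform(22, 35, 36),
    "K": rng.uniform(3988, 6639, 36),
    "Zn": rng.uniform(32, 59, 36),
    "Fe": rng.uniform(51, 80, 36),
})

# Correlation of oil content with each mineral, computed separately per group.
for group, sub in data.groupby("group"):
    corr = sub[["oil", "K", "Zn", "Fe"]].corr()["oil"].drop("oil")
    print(group, corr.round(2).to_dict())
```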

Keywords: safflower, oil, quality, mineral content

Procedia PDF Downloads 256
2264 Remote Sensing Application on Snow Products and Analyzing Disaster-Forming Environments in Xinjiang, China

Authors: Gulijianati Abake, Ryutaro Tateishi

Abstract:

Snow is a special kind of underlying surface: it has high reflectivity, low thermal conductivity, and a snowmelt hydrological effect. Every year, frequent snow disasters in Xinjiang cause considerable economic loss and serious damage to towns and farms, such as livestock casualties, traffic jams, and other damage; therefore, monitoring SWE (snow volume) in Xinjiang has great significance. How this disaster is distributed and which disaster-forming environments are important to its occurrence are the most pressing questions in disaster risk assessment and salvage material arrangement. The present study aims 1) to monitor SWE accurately using MODIS, AMSR-E, and CMC data; 2) to establish the regularity of snow disaster outbreaks and the important disaster-forming environmental factors, using a spatial autocorrelation analysis method and a canonical correlation analysis method to answer these two questions separately; and 3) to prepare the way for salvage material arrangements for snow disasters.

Keywords: snow water equivalent (snow volume), AMSR-E, CMC snow depth, snow disaster

Procedia PDF Downloads 362
2263 A Hierarchical Method for Multi-Class Probabilistic Classification Vector Machines

Authors: P. Byrnes, F. A. DiazDelaO

Abstract:

The Support Vector Machine (SVM) has become widely recognised as one of the leading algorithms in machine learning for both regression and binary classification. It expresses predictions in terms of a linear combination of kernel functions evaluated at a subset of the training points, referred to as support vectors. Despite its popularity amongst practitioners, SVM has some limitations, the most significant being the generation of point predictions as opposed to predictive distributions. Stemming from this issue, a probabilistic model, namely Probabilistic Classification Vector Machines (PCVM), has been proposed, which respects the original functional form of SVM whilst also providing a predictive distribution. As physical system designs become more complex, an increasing number of classification tasks in industrial applications involve more than two classes. Consequently, this research proposes a framework that allows for the extension of PCVM to a multi-class setting. Additionally, the original PCVM framework relies on the use of type II maximum likelihood to provide estimates for both the kernel hyperparameters and the model evidence. In a high-dimensional multi-class setting, however, this approach has been shown to be ineffective due to poor scaling as the number of classes increases. Accordingly, we propose the application of Markov Chain Monte Carlo (MCMC) based methods to provide a posterior distribution over both parameters and hyperparameters. The proposed framework will be validated against current multi-class classifiers through synthetic and real-life implementations.
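
A toy Metropolis-Hastings sketch of the proposed idea, sampling a posterior over a kernel hyperparameter (a length-scale) instead of point-estimating it by type II maximum likelihood. The log-posterior is a unimodal stand-in, not PCVM's actual likelihood.

```python
# Toy Metropolis-Hastings over a (log) kernel length-scale; the target
# log-posterior is a placeholder, not PCVM's likelihood.
import numpy as np

rng = np.random.default_rng(7)

def log_posterior(log_ls):
    # Placeholder: a unimodal surrogate for log p(data | length-scale) + log prior.
    return -0.5 * ((log_ls - 1.0) / 0.5) ** 2

samples, current = [], 0.0
for _ in range(5000):
    proposal = current + rng.normal(scale=0.3)  # symmetric random-walk proposal
    if np.log(rng.random()) < log_posterior(proposal) - log_posterior(current):
        current = proposal                      # accept; otherwise keep current
    samples.append(current)

ls_posterior = np.exp(samples[1000:])           # discard burn-in
print("posterior mean length-scale:", round(float(ls_posterior.mean()), 2))
```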

Keywords: probabilistic classification vector machines, multi class classification, MCMC, support vector machines

Procedia PDF Downloads 213
2262 Surface Characterization and Femtosecond-Nanosecond Transient Absorption Dynamics of Bioconjugated Gold Nanoparticles: Insight into the Warfarin Drug-Binding Site of Human Serum Albumin

Authors: Osama K. Abou-Zied, Saba A. Sulaiman

Abstract:

We studied the spectroscopy of 25-nm-diameter gold nanoparticles (AuNPs) coated with human serum albumin (HSA) as a model drug carrier. The morphology and coating of the AuNPs were examined using transmission electron microscopy and dynamic light scattering. Resonance energy transfer from the sole tryptophan of HSA (Trp214) to the AuNPs was observed, in which the fluorescence quenching of Trp214 is dominated by a static mechanism. Using fluorescein (FL) to probe the warfarin drug-binding site in HSA revealed the unchanged nature of the binding cavity on the surface of the AuNPs, indicating the stability of the protein structure on the metal surface. The transient absorption results for the surface plasmon resonance (SPR) band of the AuNPs show three ultrafast dynamics involved in the relaxation process after excitation at 460 nm. The three decay components were assigned to electron-electron (~400 fs), electron-phonon (~2.0 ps), and phonon-phonon (200-250 ps) interactions. These dynamics were not changed upon coating the AuNPs with HSA, which indicates the chemical and physical stability of the AuNPs upon bioconjugation. Binding of FL in HSA did not have any measurable effect on the bleach recovery dynamics of the SPR band, although both FL and the AuNPs were excited at 460 nm. The current study is important for a better understanding of the physical and dynamical properties of protein-coated metal nanoparticles, which is expected to help in optimizing their properties for critical applications in nanomedicine.

Keywords: gold nanoparticles, human serum albumin, fluorescein, femtosecond transient absorption

Procedia PDF Downloads 319
2261 A Neural Network Control for Voltage Balancing in Three-Phase Electric Power System

Authors: Dana M. Ragab, Jasim A. Ghaeb

Abstract:

The three-phase power system suffers from several challenging problems, e.g., voltage unbalance conditions at the load side. Voltage unbalance usually degrades the power quality of the electric power system. Several techniques can be considered for load balancing, including load reconfiguration, the static synchronous compensator, and the static reactive power compensator. In this work, an efficient neural network is designed to control the unbalanced condition in the Aqaba-Qatrana-South Amman (AQSA) electric power system; it is designed for a highly enhanced response time of the reactive compensator for voltage balancing. The neural network is developed to determine the appropriate set of firing angles required for the thyristor-controlled reactor to balance the three load voltages accurately and quickly. The parameters of the AQSA power system are considered in the laboratory model, and several test cases have been conducted to test and validate the capabilities of the proposed technique. The results have shown a high performance of the proposed Neural Network Control (NNC) technique in correcting the voltage unbalance conditions at the three-phase load, in terms of accuracy and response time.

Keywords: three-phase power system, reactive power control, voltage unbalance factor, neural network, power quality

Procedia PDF Downloads 179
2260 Artificial Intelligence-Driven Social Work

Authors: Avi Shrivastava

Abstract:

Our world continues to grapple with many social issues. Economic growth and scientific advancements have not completely eradicated poverty, homelessness, discrimination and bias, gender inequality, health issues, mental illness, addiction, and other social problems. So, how do we improve the human condition in a world driven by advanced technology? The answer is simple: we will have to leverage technology to address some of the most important social challenges of the day. AI, or artificial intelligence, has emerged as a critical tool in the battle against issues that deprive marginalized and disadvantaged groups of the right to enjoy the benefits that a society offers. Social work professionals can transform their practice by harnessing it. The lack of reliable data is one of the reasons why many social work projects fail. Social work professionals continue to rely on expensive and time-consuming primary data collection methods, such as observation, surveys, questionnaires, and interviews, instead of tapping into AI-based technology to generate useful, real-time data and necessary insights. By leveraging AI's data-mining ability, we can gain a deeper understanding of how to solve complex social problems and change people's lives. We can do the right work for the right people at the right time. For example, AI can enable social work professionals to focus their humanitarian efforts on some of the world's poorest regions, where there is extreme poverty. An interdisciplinary team of Stanford scientists (Marshall Burke, Stefano Ermon, David Lobell, Michael Xie, and Neal Jean) used AI to spot global poverty zones; identifying such zones is a key step in the fight against poverty. The scientists combined daytime and nighttime satellite imagery with machine learning algorithms to predict poverty in Nigeria, Uganda, Tanzania, Rwanda, and Malawi. In an article published by Stanford News, "Stanford researchers use dark of night and machine learning," Ermon explained that they provided the machine-learning system, an application of AI, with the high-resolution satellite images and asked it to predict poverty in the African region: "The system essentially learned how to solve the problem by comparing those two sets of images [daytime and nighttime]." This is one example of how AI can be used by social work professionals to reach the regions that need their aid the most. It can also help identify sources of inequality and conflict, which could reduce inequalities, according to the 2020 Nature study "The role of artificial intelligence in achieving the Sustainable Development Goals." That report also notes that AI can help achieve 79 percent of the United Nations' (UN) Sustainable Development Goals (SDGs). AI is impacting our everyday lives in multiple amazing ways, yet some people do not know much about it. If someone is not familiar with this technology, they may be reluctant to use it to solve social issues. So, before we talk more about the use of AI to accomplish social work objectives, let's put the spotlight on how AI and social work can complement each other.

Keywords: social work, artificial intelligence, AI based social work, machine learning, technology

Procedia PDF Downloads 89