Search results for: ultra-high precision grinding
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1054

334 Importance of Developing a Decision Support System for Diagnosis of Glaucoma

Authors: Murat Durucu

Abstract:

Glaucoma is a cause of irreversible blindness; early diagnosis and appropriate intervention can preserve patients' vision for longer. This study addresses the importance of developing a decision support system for glaucoma diagnosis. Glaucoma occurs when elevated pressure within the eye damages the optic nerve and deteriorates vision, and the disease progresses through different levels of severity up to blindness. Diagnosis at an early stage allows a chance for therapies that slow the progression of the disease. In recent years, imaging technologies such as Heidelberg Retinal Tomography (HRT), Stereoscopic Disc Photography (SDP), and Optical Coherence Tomography (OCT) have been used for the diagnosis of glaucoma. Owing to its better accuracy and faster imaging, OCT has become the method most commonly used by experts. Despite the precision and speed of OCT and HRT imaging, diagnosis of glaucoma remains difficult and error-prone, especially in the early stages, and it is hard to obtain objective results from a doctor's assessment alone. It therefore seems very important to develop an objective decision support system that diagnoses and grades glaucoma for patients. By using OCT images and pattern recognition systems, it is possible to develop a support system that helps doctors make their decisions on glaucoma. In this study, we develop such an evaluation and support system for doctors' use. Pattern-recognition-based computer software would help doctors make an objective evaluation of their patients. After the development and evaluation phases of the software, the system is planned to be deployed for use by doctors in different hospitals.

Keywords: decision support system, glaucoma, image processing, pattern recognition

Procedia PDF Downloads 286
333 Determining of the Performance of Data Mining Algorithm Determining the Influential Factors and Prediction of Ischemic Stroke: A Comparative Study in the Southeast of Iran

Authors: Y. Mehdipour, S. Ebrahimi, A. Jahanpour, F. Seyedzaei, B. Sabayan, A. Karimi, H. Amirifard

Abstract:

Ischemic stroke is a common cause of disability and mortality; it is the fourth leading cause of death worldwide, and the third according to some sources. Only one third of patients with ischemic stroke recover fully; one third are left with permanent disability, and one third die. The use of predictive models to predict stroke therefore has a vital role in reducing the complications and costs related to this disease. The aim of this study was to identify the effective factors and predict ischemic stroke with the help of data mining (DM) methods. This was a descriptive-analytic study of 213 patients referred to Ali ibn Abi Talib (AS) Hospital in Zahedan. The data collection tool was a checklist whose validity and reliability had been confirmed. Decision tree DM algorithms were used for modeling, and data analysis was performed with SPSS 19 and SPSS Modeler 14.2. The comparison of algorithms showed that the CHAID algorithm, with 95.7% accuracy, had the best performance. Moreover, based on the resulting model, factors such as anemia, diabetes mellitus, hyperlipidemia, transient ischemic attacks, coronary artery disease, and atherosclerosis are the most influential factors in stroke. Decision tree algorithms, especially CHAID, have acceptable precision and predictive ability for determining the factors affecting ischemic stroke. Predictive models built with this algorithm can therefore play a significant role in decreasing the mortality and disability caused by ischemic stroke.
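
As a rough illustration of the modeling step described above (CHAID itself is not available in scikit-learn, so a CART decision tree stands in, and the binary risk-factor data below is synthetic):

```python
# Sketch only: a CART stand-in for the CHAID modeling step (toy data).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(213, 6))   # anemia, diabetes, hyperlipidemia,
                                        # TIA, CAD, atherosclerosis (toy)
y = rng.integers(0, 2, size=213)        # stroke outcome (toy)

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
acc = cross_val_score(tree, X, y, cv=10, scoring="accuracy").mean()
print(f"cross-validated accuracy: {acc:.3f}")
```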

Keywords: data mining, ischemic stroke, decision tree, Bayesian network

Procedia PDF Downloads 164
332 Performance Analysis and Multi-Objective Optimization of a Kalina Cycle for Low-Temperature Applications

Authors: Sadegh Sadeghi, Negar Shabani

Abstract:

From a thermal point of view, zeotropic mixtures are likely to be more efficient than azeotropic fluids in low-temperature thermodynamic cycles due to their suitable boiling characteristics. In this study, the performance of a low-temperature Kalina cycle with an R717/water working fluid, as used in several existing power plants, is mathematically investigated. To analyze the behavior of the cycle, mass conservation, energy conservation, and exergy balance equations are presented. Given the similarity in molar mass of R717 (17.03 g/mol) and water (18.01 g/mol), there is no need to alter the size of Kalina system components such as the turbine and pump. To optimize the cycle's energy and exergy efficiencies simultaneously, a constrained multi-objective optimization is carried out using an Artificial Bee Colony algorithm. The main motivation for using this algorithm lies in its robustness, reliability, remarkable precision, and high-speed convergence rate on complicated constrained multi-objective problems. Convergence rates of the algorithm for calculating the optimal energy and exergy efficiencies are presented. Subsequently, due to the importance of the exergy concept in Kalina cycles, the exergy destruction occurring in each component is computed. Finally, the impacts of pressure, temperature, mass fraction, and mass flow rate on the energy and exergy efficiencies are studied in detail.
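
For reference, the two objectives are presumably the standard first- and second-law efficiencies (textbook definitions, not taken from the paper itself):

```latex
\eta_{\text{energy}} = \frac{\dot{W}_{\text{net}}}{\dot{Q}_{\text{in}}},
\qquad
\eta_{\text{exergy}} = \frac{\dot{W}_{\text{net}}}{\dot{E}x_{\text{in}}}
```

where W_net is the net power output, Q_in the heat input, and Ex_in the exergy supplied to the cycle.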

Keywords: artificial bee colony algorithm, binary zeotropic mixture, constrained multi-objective optimization, energy efficiency, exergy efficiency, Kalina cycle

Procedia PDF Downloads 148
331 Optimization of Fused Deposition Modeling 3D Printing Process via Preprocess Calibration Routine Using Low-Cost Thermal Sensing

Authors: Raz Flieshman, Adam Michael Altenbuchner, Jörg Krüger

Abstract:

This paper presents an approach to optimizing the Fused Deposition Modeling (FDM) 3D printing process through a preprocess calibration routine of printing parameters. The core of this method involves the use of a low-cost thermal sensor capable of measuring temperatures within the range of -20 to 500 degrees Celsius for detailed process observation. The calibration process is conducted by printing a predetermined path while varying the process parameters through machine instructions (g-code). This enables the extraction of critical thermal, dimensional, and surface properties along the printed path. The calibration routine utilizes computer vision models to extract features and metrics from the thermal images, including temperature distribution, layer adhesion quality, surface roughness, and dimensional accuracy and consistency. These extracted properties are then analyzed to optimize the process parameters to achieve the desired qualities of the printed material. A significant benefit of this calibration method is its potential to create printing parameter profiles for new polymer and composite materials, thereby enhancing the versatility and application range of FDM 3D printing. The proposed method demonstrates significant potential in enhancing the precision and reliability of FDM 3D printing, making it a valuable contribution to the field of additive manufacturing.
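
A minimal sketch of one such extracted metric, assuming the thermal camera yields a 2D temperature array and the printed track is simply the hottest region (all values below are synthetic):

```python
# Toy extraction of a single calibration metric: mean temperature along the
# printed track, segmented from a synthetic thermal frame by thresholding.
import numpy as np

frame = np.full((120, 160), 25.0)                    # ambient, deg C
frame[60, 20:140] = np.linspace(210.0, 195.0, 120)   # a cooling printed track

track = frame > 100.0                                # crude hot-pixel mask
print("mean track temperature (degC):", round(frame[track].mean(), 1))
```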

Keywords: FDM 3D printing, preprocess calibration, thermal sensor, process optimization, additive manufacturing, computer vision, material profiles

Procedia PDF Downloads 23
330 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier

Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh

Abstract:

This study investigates the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Utilizing advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancements over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic estimates.
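
A minimal sketch of the feature-fusion idea, assuming TF-IDF features from the report text are concatenated with image-derived features before the Random Forest (the data below is toy, not the Indiana University set):

```python
# Sketch: merge report-text features with image-derived features, then
# classify with a Random Forest (fabricated miniature example).
import numpy as np
from scipy.sparse import hstack
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

reports = ["no acute cardiopulmonary disease", "left lower lobe opacity",
           "clear lungs", "possible pneumonia, urgent review"]
image_feats = np.random.default_rng(0).normal(size=(4, 8))  # toy CNN features
labels = [0, 1, 0, 1]

text_feats = TfidfVectorizer().fit_transform(reports)
X = hstack([text_feats, image_feats])          # feature-level fusion

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print("training accuracy:", rf.score(X, labels))
```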

Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems

Procedia PDF Downloads 32
329 Embedded System of Signal Processing on FPGA: Underwater Application Architecture

Authors: Abdelkader Elhanaoui, Mhamed Hadji, Rachid Skouri, Said Agounad

Abstract:

The purpose of this paper is to study the phenomenon of acoustic scattering by using a new method. Signal processing (Fast Fourier Transform (FFT), Inverse Fast Fourier Transform (iFFT), and Bessel functions) is widely applied to obtain information with high precision, but it is usually implemented on general-purpose processors, which are not efficient for signal processing. Our interest was therefore focused on FPGAs (Field-Programmable Gate Arrays): the computational complexity is first minimized in a single-processor architecture, then the algorithm is accelerated on the FPGA to meet real-time and energy-efficiency requirements. We implemented the acoustic backscattered signal processing model on the Altera DE1-SoC board and compared it to the Odroid XU4. By comparison, the computing latency of the Odroid XU4 and the FPGA is 60 seconds and 3 seconds, respectively. The detailed SoC FPGA-based system computes acoustic spectra up to 20 times faster than the Odroid XU4 implementation, and the FPGA-based implementation of the processing algorithms is realized with an absolute error of about 10⁻³. This study underlines the increasing importance of embedded systems in underwater acoustics, especially in non-destructive testing, where it is possible to obtain information related to the detection and characterization of submerged shells. We thus achieved good experimental results in terms of real-time operation and energy efficiency.
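
A CPU reference of the FFT/iFFT chain like the one ported to the FPGA can be sketched in a few lines (the echo below is synthetic, not sonar data, and the sample rate is an assumption):

```python
# CPU reference for the FFT -> spectrum -> iFFT chain described above.
import numpy as np

fs = 1e6                                   # assumed sample rate, Hz
t = np.arange(4096) / fs
echo = np.sin(2 * np.pi * 150e3 * t) * np.exp(-2000.0 * t)  # toy echo

spectrum = np.fft.rfft(echo)               # forward FFT
magnitude = np.abs(spectrum)               # backscattered spectrum
recovered = np.fft.irfft(spectrum, n=echo.size)  # inverse transform

print("max round-trip error:", np.abs(echo - recovered).max())
```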

Keywords: DE1 FPGA, acoustic scattering, form function, signal processing, non-destructive testing

Procedia PDF Downloads 71
328 Using Deep Learning for the Detection of Faulty RJ45 Connectors on a Radio Base Station

Authors: Djamel Fawzi Hadj Sadok, Marrone Silvério Melo Dantas, Pedro Henrique Dreyer, Gabriel Fonseca Reis de Souza, Daniel Bezerra, Ricardo Souza, Silvia Lins, Judith Kelner

Abstract:

A radio base station (RBS), part of the radio access network, is a particular type of equipment that supports the connection between a wide range of cellular user devices and an operator's network access infrastructure. Nowadays, most RBS maintenance is carried out manually, making it a time-consuming and costly task. A suitable candidate for RBS maintenance automation is the repair of faulty links between devices caused by missing or unplugged connectors. This paper proposes and compares two deep learning solutions to identify attached RJ45 connectors on network ports. We call the solution based on object detection connector detection, and the one based on object classification connector classification. Connector detection achieves an accuracy of 0.934 and a mean average precision of 0.903; connector classification achieves a maximum accuracy of 0.981 and an AUC of 0.989. Although connector detection was outperformed in this study, this should not be viewed as a general result, as connector detection is more flexible in scenarios where there is no precise information about the environment and the possible devices, whereas connector classification requires that information to be well defined.

Keywords: radio base station, maintenance, classification, detection, deep learning, automation

Procedia PDF Downloads 193
327 Development and Validation of High-Performance Liquid Chromatography Method for the Determination and Pharmacokinetic Study of Linagliptin in Rat Plasma

Authors: Hoda Mahgoub, Abeer Hanafy

Abstract:

Linagliptin (LNG) belongs to the dipeptidyl peptidase-4 (DPP-4) inhibitor class. DPP-4 inhibitors represent a new therapeutic approach for the treatment of type 2 diabetes in adults. The aim of this work was to develop and validate an accurate and reproducible HPLC method for the determination of LNG with high sensitivity in rat plasma. The method involved separation of both LNG and pindolol (internal standard) at ambient temperature on a Zorbax Eclipse XDB C18 column with a mobile phase composed of methanol and 0.1% formic acid (75:25 v/v, pH 4.1) at a flow rate of 1.0 mL.min-1. UV detection was performed at 254 nm. The method was validated in compliance with ICH guidelines and found to be linear in the range of 5–1000 ng.mL-1. The limit of quantification (LOQ) was found to be 5 ng.mL-1 based on 100 µL of plasma. The variations for intra- and inter-assay precision were less than 10%, and the accuracy values ranged between 93.3% and 102.5%. The extraction recovery (R%) was more than 83%. The method involved a single extraction step on a very small plasma volume (100 µL). The assay was successfully applied to an in-vivo pharmacokinetic study of LNG in rats administered a single oral dose of 10 mg.kg-1 LNG. The maximum concentration (Cmax) was found to be 927.5 ± 23.9 ng.mL-1. The area under the plasma concentration-time curve (AUC0-72) was 18285.02 ± 605.76 h.ng.mL-1. In conclusion, the good accuracy and low LOQ of the bioanalytical HPLC method make it suitable for monitoring the full pharmacokinetic profile of LNG in rats. The main advantages of the method are its sensitivity, small sample volume, single-step extraction procedure, and short analysis time.
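
For orientation, the reported Cmax and AUC values follow from the concentration-time profile by standard non-compartmental analysis; a sketch with made-up numbers (not the study's data):

```python
# Non-compartmental PK metrics from a fabricated concentration-time curve.
import numpy as np

t = np.array([0, 0.5, 1, 2, 4, 8, 24, 48, 72])            # time, h
c = np.array([0, 410, 780, 930, 850, 600, 220, 60, 15])   # conc, ng/mL

cmax, tmax = c.max(), t[c.argmax()]
auc_0_72 = np.trapz(c, t)                  # linear trapezoidal rule

print(f"Cmax = {cmax} ng/mL at t = {tmax} h; AUC(0-72) = {auc_0_72:.0f} h*ng/mL")
```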

Keywords: HPLC, linagliptin, pharmacokinetic study, rat plasma

Procedia PDF Downloads 235
326 The Effect of a 12 Week Rhythmic Movement Intervention on Selected Biomotor Abilities on Academy Rugby Players

Authors: Jocelyn Solomons, Kraak

Abstract:

Rhythmic movement, also referred to as "dance", involves the execution of different motor skills as well as the integration and sequencing of actions between limbs, timing, and spatial precision. The aim of this study was to investigate and compare the effect of a 16-week rhythmic movement intervention on the flexibility, dynamic balance, agility, power, and local muscular endurance of academy rugby players in the Western Cape, according to positional groups. Players (N = 54; age 18.66 ± 0.81 years; height 1.76 ± 0.69 cm; weight 76.77 ± 10.69 kg) were randomly divided into a treatment-control [TCA] (n = 28) and a control-treatment [CTB] (n = 26) group. In this crossover experimental design, the interaction effect between treatment order and treatment time for the TCA and CTB groups was determined. Results indicated statistically significant improvements (p < 0.05) in agility 2 (p = 0.06), power 2 (p = 0.05), local muscular endurance 1 (p = 0.01) and 3 (p = 0.01), and dynamic balance (p < 0.01). Likewise, forwards and backs also showed statistically significant improvements (p < 0.05) per positional group. Therefore, a rhythmic movement intervention has the potential to improve rugby-specific bio-motor skills and, furthermore, to improve position-specific skills if designed with positional groups in mind. Future studies should investigate not only the effect of rhythmic movement on specific rugby bio-motor skills but also its potential as an alternative training method during the off-season (or detraining phases) or as a recovery method.

Keywords: agility, dance, dynamic balance, flexibility, local muscular endurance, power, training

Procedia PDF Downloads 56
325 A Power Management System for Indoor Micro-Drones in GPS-Denied Environments

Authors: Yendo Hu, Xu-Yu Wu, Dylan Oh

Abstract:

GPS-denied drones open the possibility of indoor applications, including dynamic aerial surveillance, inspection, safety enforcement, and discovery. Indoor swarming further enhances these applications in accuracy, robustness, operational time, and coverage. For micro-drones, power management becomes a critical issue given the battery payload restriction. This paper proposes an application-enabling battery replacement solution that extends the micro-drone's active phase without human intervention. First, a framework to quantify the effectiveness of a power management solution for a drone fleet is proposed: the operation-to-non-operation ratio (ONR) gives a quantitative benchmark for measuring the effectiveness of a power management solution. Second, a survey was carried out to evaluate the ONR performance of the various existing solutions. Third, through analysis, this paper proposes a solution tailored to the indoor micro-drone and suitable for swarming applications. The proposed automated battery replacement solution, along with a modified micro-drone architecture, was implemented together with the associated micro-drone. Fourth, the system was tested and compared with the various solutions within the industry. Results show that the proposed solution achieves an ONR value of 31, a one-fold improvement over the best alternative option. The cost analysis shows a manufacturing cost of $25, which makes this approach viable for cost-sensitive markets (e.g., consumer). Further challenges remain in the areas of drone design for automated battery replacement, landing pad/drone production, high-precision landing control, and ONR improvement.
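
The paper does not spell out the formula, but on a natural reading the ONR over a service cycle is simply

```latex
\mathrm{ONR} \;=\; \frac{T_{\text{operation}}}{T_{\text{non-operation}}}
```

so an ONR of 31 would mean the drone spends 31 units of time flying for every unit spent grounded for battery service.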

Keywords: micro-drone, battery swap, battery replacement, battery recharge, landing pad, power management

Procedia PDF Downloads 102
324 A Validated High-Performance Liquid Chromatography-UV Method for Determination of Malondialdehyde-Application to Study in Chronic Ciprofloxacin Treated Rats

Authors: Anil P. Dewani, Ravindra L. Bakal, Anil V. Chandewar

Abstract:

The present work demonstrates the applicability of high-performance liquid chromatography (HPLC) with UV detection for the in-vivo determination of malondialdehyde, as the malondialdehyde-thiobarbituric acid (MDA-TBA) complex, in rats. The HPLC-UV method for MDA-TBA was run in isocratic mode on a reverse-phase C18 column (250 mm × 4.6 mm) at a flow rate of 1.0 mL min−1, followed by UV detection at 278 nm. The chromatographic conditions were optimized by varying the concentration and pH and then the percentage of organic phase; the optimal mobile phase consisted of a mixture of water (0.2% triethylamine, pH adjusted to 2.3 with ortho-phosphoric acid) and acetonitrile (80:20 % v/v). The retention time of the MDA-TBA complex was 3.7 min. The developed method was sensitive, with limits of detection and quantification (LOD and LOQ) for the MDA-TBA complex, calculated from the standard deviation of the response and the slope of the calibration curve, of 110 ng/mL and 363 ng/mL, respectively. The method was linear for MDA spiked in plasma and subjected to derivatization at concentrations ranging from 100 to 1000 ng/mL. The precision of the developed method, measured as relative standard deviation, was 1.6–5.0% for intra-day and 1.9–3.6% for inter-day studies. The HPLC method was applied to monitor MDA levels in rats subjected to chronic treatment with ciprofloxacin (CFL) (5 mg/kg/day) for 21 days, and the results were compared with findings in control-group rats. Mean peak areas of the two study groups were compared with an unpaired Student's t-test. The p-value was < 0.001, indicating significant results and suggesting increased MDA levels in rats subjected to 21 days of chronic CFL treatment.
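
The LOD/LOQ figures quoted above follow the standard ICH sensitivity formulas based on the standard deviation of the response (sigma) and the slope of the calibration curve (S):

```latex
\mathrm{LOD} = \frac{3.3\,\sigma}{S}, \qquad \mathrm{LOQ} = \frac{10\,\sigma}{S}
```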

Keywords: MDA, TBA, ciprofloxacin, HPLC-UV

Procedia PDF Downloads 314
323 Machine Learning Classification of Fused Sentinel-1 and Sentinel-2 Image Data Towards Mapping Fruit Plantations in Highly Heterogenous Landscapes

Authors: Yingisani Chabalala, Elhadi Adam, Khalid Adem Ali

Abstract:

Mapping smallholder fruit plantations using optical data is challenging due to morphological landscape heterogeneity and crop types with overlapping spectral signatures. Furthermore, cloud cover limits the use of optical sensing, especially in subtropical climates where it is persistent. This research assessed the effectiveness of Sentinel-1 (S1) and Sentinel-2 (S2) data for mapping fruit trees and co-existing land-use types, using support vector machine (SVM) and random forest (RF) classifiers independently. These classifiers were also applied to fused data from the two sensors. Feature ranks were extracted using the RF mean decrease in accuracy (MDA) and forward variable selection (FVS) to identify optimal spectral windows for classifying fruit trees. Based on RF MDA and FVS, the SVM classifier applied to the fused satellite data achieved relatively high classification accuracy, with an overall accuracy (OA) of 91.6% and a kappa coefficient of 0.91. Applied independently to S1, S2, the selected S2 variables, and the S1S2 fusion, SVM produced OA = 27.64% (kappa = 0.13), OA = 87% (kappa = 0.87), OA = 69.33% (kappa = 0.69), and OA = 87.01% (kappa = 0.87), respectively. Results also indicated that the optimal spectral bands for fruit tree mapping are green (B3) and SWIR_2 (B10) for S2, and the vertical-horizontal (VH) polarization band for S1. Including the textural metrics from the VV channel improved the discrimination of crops and co-existing land-use types. The fusion approach proved robust and well suited for accurate smallholder fruit plantation mapping.
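
A minimal sketch of the pixel-level fusion step with an SVM (all arrays are random stand-ins for the S1 backscatter and S2 reflectance features):

```python
# Sketch: stack S1 (SAR) and S2 (optical) features per pixel, then classify.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
s1 = rng.normal(size=(500, 2))        # e.g., VV/VH backscatter (toy)
s2 = rng.normal(size=(500, 10))       # e.g., S2 band reflectances (toy)
y = rng.integers(0, 4, size=500)      # fruit-tree / land-use classes (toy)

X = np.hstack([s1, s2])               # feature-level S1S2 fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("overall accuracy:", clf.score(X_te, y_te))
```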

Keywords: smallholder agriculture, fruit trees, data fusion, precision agriculture

Procedia PDF Downloads 41
322 Defining Death and Dying in Relation to Information Technology and Advances in Biomedicine

Authors: Evangelos Koumparoudis

Abstract:

The definition of death is a deep philosophical question, and no single meaning can be ascribed to it. This essay focuses on the ontological, epistemological, and ethical aspects of death and dying in view of technological progress in information technology and biomedicine. It starts with the ad hoc 1968 Harvard committee, which proposed irreversible coma as the criterion for the definition of death, and then turns to the debate between the whole-brain death formula, which emphasizes the integrated functioning of the organism, and the higher-brain formula, which takes consciousness and personality as the essential human characteristics. It continues with the contribution of information technology to personalized and precision medicine and to anti-aging measures aimed at life prolongation. It also touches on the possibility of creating human-machine hybrids and how this raises ontological and ethical issues concerning the "cyborgization" of human beings and a conception of the organism and personhood based on a post/transhumanist essence; furthermore, it asks whether sentient AI capable of autonomous decision-making, which might even surpass human intelligence (singularity, superintelligence), deserves moral or legal personhood. Finally, there is the question of whether death and dying should be redefined at a transcendent level, a question reinforced by already-existing technologies of virtual afterlife and the possibility of uploading human minds. In the last section, I refer to the current (and future) applications of nanomedicine in diagnostics, therapeutics, implants, and tissue engineering, as well as the aspiration to "immortality" through cryonics. The definition of death is reformulated, since the elimination of aging and disease may be realized and the criterion of irreversibility may be challenged.

Keywords: death, posthumanism, infomedicine, nanomedicine, cryonics

Procedia PDF Downloads 58
321 Local Interpretable Model-agnostic Explanations (LIME) Approach to Email Spam Detection

Authors: Rohini Hariharan, Yazhini R., Blessy Maria Mathew

Abstract:

Detecting email spam is a very important task in the era of digital technology, one that needs effective ways of curbing unwanted messages. This paper presents an approach aimed at making email spam categorization algorithms transparent, reliable, and more trustworthy by incorporating Local Interpretable Model-agnostic Explanations (LIME). Our technique provides interpretable explanations for specific classifications of emails, helping users understand the model's decision-making process. In this study, we developed a complete pipeline that incorporates LIME into the spam classification framework and allows the creation of simplified, interpretable models tailored to individual emails. LIME identifies influential terms, pointing out the key elements that drive classification results and thus reducing the opacity inherent in conventional machine learning models. Additionally, we suggest a visualization scheme for displaying keywords that improves users' understanding of categorization decisions. We test our method on a diverse email dataset and compare its performance with various baseline models, such as Gaussian Naive Bayes, Multinomial Naive Bayes, Bernoulli Naive Bayes, Support Vector Classifier, K-Nearest Neighbors, Decision Tree, and Logistic Regression. Our testing results show that our model surpasses all other models, achieving an accuracy of 96.59% and a precision of 99.12%.
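
A minimal sketch of the LIME step on a toy spam classifier (the corpus below is fabricated; the study itself uses a full email dataset):

```python
# Sketch: explain a single spam prediction with LIME's text explainer.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["win a free prize now", "meeting agenda attached",
         "free money, click here", "lunch tomorrow?"]
labels = [1, 0, 1, 0]                        # 1 = spam (toy corpus)

pipe = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipe.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["ham", "spam"])
exp = explainer.explain_instance("claim your free prize",
                                 pipe.predict_proba, num_features=3)
print(exp.as_list())                         # influential (term, weight) pairs
```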

Keywords: text classification, LIME (local interpretable model-agnostic explanations), stemming, tokenization, logistic regression

Procedia PDF Downloads 39
320 Effect of Genuine Missing Data Imputation on Prediction of Urinary Incontinence

Authors: Suzan Arslanturk, Mohammad-Reza Siadat, Theophilus Ogunyemi, Ananias Diokno

Abstract:

Missing data is a common challenge in statistical analyses of most clinical survey datasets. A variety of methods have been developed to enable the analysis of survey data containing missing values, of which imputation is the most commonly used. However, in order to minimize the bias introduced by imputation, one must choose the right imputation technique and apply it to the correct type of missing data. In this paper, we identify different types of missing values: missing data due to skip pattern (SPMD), undetermined missing data (UMD), and genuine missing data (GMD), and we apply rough set imputation to only the GMD portion of the missing data. We used rough set imputation to evaluate the effect of such imputation on prediction by generating several simulation datasets based on an existing epidemiological dataset (MESA). To measure how well each dataset lends itself to the prediction model (logistic regression), we used p-values from the Wald test. To evaluate the accuracy of the prediction, we considered the width of the 95% confidence interval for the probability of incontinence. Both imputed and non-imputed simulation datasets were fit to the prediction model, and both turned out to be significant (p-value < 0.05). However, the Wald score shows a better fit for the imputed than for the non-imputed datasets (28.7 vs. 23.4). The average confidence interval width decreased by 10.4% when the imputed dataset was used, meaning higher precision. The results show that using the rough set method for missing data imputation on GMD improves the predictive capability of the logistic regression. Further studies are required to generalize this conclusion to other clinical survey datasets.

Keywords: rough set, imputation, clinical survey data simulation, genuine missing data, predictive index

Procedia PDF Downloads 160
319 An Enhanced Approach in Validating Analytical Methods Using Tolerance-Based Design of Experiments (DoE)

Authors: Gule Teri

Abstract:

The effective validation of analytical methods forms a crucial component of pharmaceutical manufacturing. However, traditional validation techniques can fail to fully account for inherent variation within datasets, which may result in inconsistent outcomes. This shortfall in validation accuracy is particularly noticeable when quantifying low concentrations of active pharmaceutical ingredients (APIs), excipients, or impurities, putting at risk the reliability of the results and, subsequently, the safety and effectiveness of the pharmaceutical products. In response to this challenge, we introduce an enhanced, tolerance-based Design of Experiments (DoE) approach for the validation of analytical methods. This approach measures variability explicitly with reference to tolerance or design margins, enhancing the precision and trustworthiness of the results. It provides a systematic, statistically grounded validation technique and offers an essential tool for industry professionals aiming to guarantee the accuracy of their measurements, particularly for low-concentration components. By incorporating this method, pharmaceutical manufacturers can substantially advance their validation processes and thereby improve the overall quality and safety of their products. This paper details the development, application, and advantages of the tolerance-based DoE approach and demonstrates its effectiveness using High-Performance Liquid Chromatography (HPLC) data for verification. It also discusses the potential implications and future applications of the method in enhancing pharmaceutical manufacturing practices and outcomes.

Keywords: tolerance-based design, design of experiments, analytical method validation, quality control, biopharmaceutical manufacturing

Procedia PDF Downloads 70
318 Corrosion Response of Friction Stir Processed Mg-Zn-Zr-RE Alloy

Authors: Vasanth C. Shunmugasamy, Bilal Mansoor

Abstract:

Magnesium alloys are increasingly being considered for structural systems across different industrial sectors, including precision components of biomedical devices, owing to their high specific strength, stiffness, and biodegradability. However, Mg alloys exhibit high corrosion rates that restrict their application as biomaterials; for safe use as a biomaterial, it is essential to control their corrosion rate. Mg alloy corrosion is influenced by several factors, such as grain size, precipitates, and texture. In Mg alloys, microgalvanic coupling between the α-Mg matrix and secondary precipitates can exist, which results in an increased corrosion rate. The present research addresses this challenge by engineering the microstructure of a biodegradable Mg–Zn–RE–Zr alloy through friction stir processing (FSP), a severe plastic deformation process. The FSP-processed Mg alloy showed improved corrosion resistance and mechanical properties, with refined grains, a strong basal texture, and broken, uniformly distributed secondary precipitates in the stir zone. The Mg alloy base material exposed to an in vitro corrosion medium showed microgalvanic coupling between precipitates and matrix, resulting in an unstable passive layer, whereas the FS-processed alloy showed uniform corrosion owing to the formation of a stable surface film. This stable surface film is attributed to the refined grains, preferred texture, and distribution of precipitates. The results show promising potential for this Mg alloy to be developed as a biomaterial.

Keywords: biomaterials, severe plastic deformation, magnesium alloys, corrosion

Procedia PDF Downloads 25
317 Count of Trees in East Africa with Deep Learning

Authors: Nubwimana Rachel, Mugabowindekwe Maurice

Abstract:

Trees play a crucial role in maintaining biodiversity and providing various ecological services. Traditional methods of counting trees are time-consuming, and there is a need for more efficient techniques; deep learning makes it feasible to identify the multi-scale elements hidden in aerial imagery. This research focuses on the application of deep learning techniques for automated tree detection and counting in both forest and non-forest areas using satellite imagery. The objective is to identify the most effective model for automated tree counting. We used different deep learning models such as YOLOv7, SSD, and U-Net, along with Generative Adversarial Networks to generate synthetic training samples and other augmentation techniques, including Random Resized Crop, AutoAugment, and linear contrast enhancement. These models were trained and fine-tuned using satellite imagery to identify and count trees. The performance of the models was assessed through multiple trials; after training and fine-tuning, U-Net demonstrated the best performance, with a validation loss of 0.1211, validation accuracy of 0.9509, and validation precision of 0.9799. This research showcases the success of deep learning in accurate tree counting through remote sensing, particularly with the U-Net model. It represents a significant contribution to the field by offering an efficient and precise alternative to conventional tree-counting methods.
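
Downstream of a segmentation model like U-Net, the counting step can be as simple as labeling connected components in the binarized canopy mask; a sketch with a synthetic mask:

```python
# Sketch: count trees as connected components of a thresholded U-Net mask.
import numpy as np
from scipy import ndimage

pred = np.zeros((64, 64))            # stand-in for a U-Net probability map
pred[5:15, 5:15] = 0.9               # fake canopy 1
pred[30:40, 42:55] = 0.8             # fake canopy 2

mask = pred > 0.5                    # binarize the segmentation
labeled, n_trees = ndimage.label(mask)
print("tree count:", n_trees)        # -> 2
```

In dense canopies this simple scheme undercounts touching crowns, which is why instance-aware detection or watershed post-processing is often layered on top.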

Keywords: remote sensing, deep learning, tree counting, image segmentation, object detection, visualization

Procedia PDF Downloads 54
316 Advancements in Laser Welding Process: A Comprehensive Model for Predictive Geometrical, Metallurgical, and Mechanical Characteristics

Authors: Seyedeh Fatemeh Nabavi, Hamid Dalir, Anooshiravan Farshidianfar

Abstract:

Laser welding is pivotal in modern manufacturing, offering unmatched precision, speed, and efficiency. Its versatility in minimizing heat-affected zones, seamlessly joining dissimilar materials, and working with various metals makes it indispensable for crafting intricate automotive components, and its integration into automated systems ensures the consistent delivery of high-quality welds, enhancing overall production efficiency. Noteworthy are the safety benefits of laser welding, including reduced fumes and consumable materials, which align with industry standards and environmental sustainability goals. As the automotive sector increasingly demands advanced materials and stringent safety and quality standards, laser welding emerges as a cornerstone technology. This work presents a comprehensive model, encompassing thermal-dynamic and characteristic sub-models, that accurately predicts the geometrical, metallurgical, and mechanical aspects of the laser beam welding process. Notably, Model 2 shows exceptional accuracy, achieving remarkably low error rates in predicting primary and secondary dendrite arm spacing (PDAS and SDAS). These findings underscore the model's reliability and effectiveness, providing valuable insights and predictive capabilities for optimizing welding processes and ensuring superior productivity, efficiency, and quality in the automotive industry.

Keywords: laser welding process, geometrical characteristics, mechanical characteristics, metallurgical characteristics, comprehensive model, thermal dynamic

Procedia PDF Downloads 40
315 Application of the Finite Window Method to a Time-Dependent Convection-Diffusion Equation

Authors: Raoul Ouambo Tobou, Alexis Kuitche, Marcel Edoun

Abstract:

The FWM (Finite Window Method) is a new numerical meshfree technique for solving problems defined either in terms of PDEs (Partial Differential Equations) or by a set of conservation/equilibrium laws. The principle behind the FWM is that each element of the domain interacts with its neighbors and always tries to adapt so as to stay in equilibrium with them. This leads to a very simple and robust problem-solving scheme, well suited to transfer problems. In this work, we have applied the FWM to an unsteady scalar convection-diffusion equation. Despite its apparent simplicity, the convection-diffusion problem is well known to be challenging to solve numerically, especially when convection is highly dominant. This has led researchers to adopt the scalar convection-diffusion equation as a benchmark for analyzing and deriving the conditions or artifacts needed to numerically solve problems where convection and diffusion occur simultaneously. We show here that the standard FWM can solve convection-diffusion equations in a robust manner, with no adjustments (upwinding or the addition of artificial diffusion) required to obtain good results, even for high Peclet numbers and coarse space and time steps. A comparison was performed between the FWM scheme, a first-order implicit Finite Volume scheme (upwind scheme), and a third-order implicit Finite Volume scheme (QUICK scheme). The comparison showed that, for equal space and time grid spacing, the FWM yields much better precision than the Finite Volume schemes, at similar computational cost and conditioning number.
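
For reference, the benchmark problem is the standard unsteady scalar convection-diffusion equation, whose numerical difficulty is governed by the Peclet number:

```latex
\frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = \Gamma\,\nabla^{2}\phi,
\qquad
\mathrm{Pe} = \frac{\lVert\mathbf{u}\rVert\,L}{\Gamma}
```

where phi is the transported scalar, u the velocity field, Γ the diffusion coefficient, and L a characteristic length; convection dominates for large Pe.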

Keywords: finite window method, convection-diffusion, numerical technique, convergence

Procedia PDF Downloads 325
314 Dynamic Corrosion Prevention through Magneto-Responsive Nanostructure with Controllable Hydrophobicity

Authors: Anne McCarthy, Anna Kim, Yin Song, Kyoo Jo, Donald Cropek, Sungmin Hong

Abstract:

Corrosion prevention remains an indispensable concern across a spectrum of industries, demanding inventive and adaptable methodologies to tackle the ever-evolving obstacles presented by corrosive surroundings. This abstract introduces a pioneering approach to corrosion prevention that combines the distinct attributes of magneto-responsive polymers with finely adjustable hydrophobicity inspired by the structure of cicada wings, effectively deterring bacterial proliferation and biofilm formation. The proposed strategy entails the creation of an innovative array of magneto-responsive nanostructures endowed with the capacity to dynamically modulate their hydrophobic characteristics. This dynamic control over hydrophobicity facilitates the active repulsion of water and corrosive agents on demand, while the cyclic motion generated by magnetic activation prevents biofilm formation and promotes biofilm rejection. The synergistic interplay between the magneto-active nanostructures and hydrophobicity manipulation thus establishes a versatile defensive mechanism against diverse corrosive agents. This study introduces a novel method for corrosion prevention that harnesses the advantages of magneto-active nanostructures and the precision of hydrophobicity adjustment, providing water repellency and effective biofilm removal, and offering a promising solution to corrosion-related challenges. We believe the combined effect will contribute significantly to extending asset lifespans, improving safety, and reducing maintenance costs in the face of corrosion threats.

Keywords: magneto-active material, nanoimprinting, corrosion prevention, hydrophobicity

Procedia PDF Downloads 55
313 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values

Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi

Abstract:

A major challenge in medical studies, especially longitudinal ones, is the problem of missing measurements, which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's disease studies have focused on delineating Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN), which is essential for developing effective early treatment methods. To address these challenges, this paper explores the potential of the eXtreme Gradient Boosting (XGBoost) algorithm for handling missing values in multiclass classification. We seek a generalized classification scheme in which all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study, and in the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes AD, CN, EMCI, and LMCI. Using a 10-fold cross-validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62%, and a recall of 80.51%, supporting the more natural and promising multiclass classification.
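
A minimal sketch of this setup (synthetic features with roughly 28% of values masked out; XGBoost routes missing values natively at each tree split):

```python
# Sketch: 4-class (CN/EMCI/LMCI/AD) classification with native NaN handling.
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1631, 20))               # toy stand-in features
X[rng.random(X.shape) < 0.28] = np.nan        # ~28% missing, as in the study
y = rng.integers(0, 4, size=1631)             # toy labels for the 4 classes

clf = XGBClassifier(objective="multi:softprob", n_estimators=200,
                    max_depth=4, eval_metric="mlogloss")
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print("10-fold CV accuracy:", scores.mean())
```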

Keywords: eXtreme gradient boosting, missing data, Alzheimer's disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest

Procedia PDF Downloads 178
312 AgriInnoConnect Pro System Using Iot and Firebase Console

Authors: Amit Barde, Dipali Khatave, Vaishali Savale, Atharva Chavan, Sapna Wagaj, Aditya Jilla

Abstract:

AgriInnoConnect Pro is an advanced agricultural automation system designed to enhance irrigation efficiency and overall farm management through IoT technology. Built with MIT App Inventor, Telegram, the Arduino IDE, and the Firebase Console, it provides a user-friendly interface for farmers. Key hardware includes soil moisture sensors, DHT11 sensors, a 12 V motor, a solenoid valve, a step-down transformer, smart fencing, and AC switches. The system operates in automatic and manual modes. In automatic mode, the ESP32 microcontroller monitors soil moisture and autonomously controls irrigation to optimize water usage. In manual mode, users control the irrigation motor via a mobile app. Telegram bots enable remote operation of the solenoid valve and the electric fencing, enhancing farm security. Additionally, the system upgrades conventional devices to smart ones using AC switches, broadening its automation capabilities. AgriInnoConnect Pro aims to improve farm productivity and resource management, addressing the critical need for sustainable water conservation and providing a comprehensive solution for modern farm management. The integration of smart technologies in AgriInnoConnect Pro ensures precision farming practices, promoting efficient resource allocation and sustainable agricultural development.
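
The two control modes reduce to a few lines of logic; a hardware-free sketch (the threshold value is an assumption, and on the ESP32 the reads and writes would go through ADC pins and relays):

```python
# Logic-level sketch of the pump control described above (no hardware I/O).
DRY_THRESHOLD = 35       # % soil moisture below which to irrigate (assumed)

def pump_should_run(moisture_pct, mode, manual_cmd=False):
    """Decide the pump relay state for one control-loop tick."""
    if mode == "manual":
        return manual_cmd                 # app / Telegram command decides
    return moisture_pct < DRY_THRESHOLD   # automatic mode: threshold rule

print(pump_should_run(28, "auto"))        # -> True (soil too dry)
print(pump_should_run(28, "manual"))      # -> False (no manual command)
```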

Keywords: agricultural automation, IoT, soil moisture sensor, ESP32, MIT app inventor, telegram bot, smart farming, remote control, firebase console

Procedia PDF Downloads 22
311 Study on the Process of Detumbling Space Target by Laser

Authors: Zhang Pinliang, Chen Chuan, Song Guangming, Wu Qiang, Gong Zizheng, Li Ming

Abstract:

The active removal of space debris and asteroid defense are important issues in human space activities. Both need a detumbling process, since almost all space debris and asteroids are in a rotating state, and it is hard and dangerous to capture or remove a target with a relatively high tumbling rate, so a method to reduce the angular rate is needed first. Laser ablation is an efficient way to tackle this detumbling problem, as it is a contactless technique that can work at a safe distance. In existing research, a laser rotational control strategy based on estimating the instantaneous angular velocity of the target has been presented, but its calculation of the control torque produced by the laser, which is very important in a detumbling operation, is not accurate enough: the method used is only suitable for flat or regularly shaped targets, and it neglects the influence of irregular shape and spot size. In this paper, based on a triangulated reconstruction of the target surface, we propose a new method to calculate the impulse imparted to an irregularly shaped target under both covered irradiation and spot irradiation by the laser, and we verify its accuracy by theoretical calculation and an impulse-measurement experiment. We then use it to study the process of detumbling a cylinder and an asteroid by laser. The results show that the new method is universally practical and has high precision; it would take more than 13.9 hours to stop the rotation of Bennu with 1E+05 kJ of laser pulse energy; and the speed of the detumbling process depends on the distance between the spot and the centroid of the target, for which an optimal value can be found in every particular case.
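
The per-pulse impulse calculation can be caricatured as a sum over the lit triangles of the reconstructed surface; the coupling coefficient and geometry below are invented for illustration only:

```python
# Toy sum of ablation impulse over lit triangles of a reconstructed surface.
import numpy as np

Cm = 5e-5                 # momentum coupling coefficient, N*s/J (assumed)
pulse_energy = 10.0       # J per pulse, spread over the lit triangles

# Unit outward normals of three lit triangles and their share of the spot.
normals = np.array([[0.0, 0.0, 1.0],
                    [0.1, 0.0, 0.995],
                    [0.0, 0.1, 0.995]])
energy_frac = np.array([0.5, 0.3, 0.2])

# Ablation thrust acts along each facet's normal (sign conventions aside).
impulse = Cm * pulse_energy * (normals * energy_frac[:, None]).sum(axis=0)
print("net impulse vector (N*s):", impulse)
```

Torque about the centroid then follows from the cross product of each facet's lever arm with its impulse contribution, which is where spot placement enters the detumbling rate.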

Keywords: detumbling, laser ablation drive, space target, space debris remove

Procedia PDF Downloads 75
310 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is an important field within biometric technology, and more and more researchers have used EEG signals as a data source for biometrics. However, EEG-based biometrics also have some disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distance from each data point in a cluster to all other points within the same cluster and to all data points in the closest cluster is determined. Silhouettes thus provide a measure of how well a data point was classified when it was assigned to a cluster, and of the separation between clusters. This renders silhouettes well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used to assess the cluster quality of the k-means clustering algorithm and to compare performance across the EEG datasets. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. Results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were differences between electrodes for personal authentication (p < 0.01); and (3) there is no significant difference in authentication performance among the feature sets (except feature PE). In conclusion, the combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
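
A minimal sketch of the scoring step, with synthetic stand-ins for the four-entropy feature vectors:

```python
# Sketch: rate k-means cluster quality on entropy features via silhouettes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
features = np.vstack([rng.normal(loc=m, scale=0.5, size=(50, 4))
                      for m in (0.0, 2.0, 4.0)])   # 3 toy "subjects", 4 entropies

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
print("silhouette score:", round(silhouette_score(features, km.labels_), 3))
```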

Keywords: personal authentication, K-mean clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 272
309 Real-Time Generative Architecture for Mesh and Texture

Authors: Xi Liu, Fan Yuan

Abstract:

In the evolving landscape of physics-based machine learning (PBML), particularly within fluid dynamics and its applications in electromechanical engineering, robot vision, and robot learning, achieving precision and alignment with researchers' specific needs presents a formidable challenge. In response, this work proposes a methodology that integrates neural transformation with a modified smoothed particle hydrodynamics model for generating transformed 3D fluid simulations. This approach is useful for nanoscale science, where the unique and complex behaviors of viscoelastic media demand accurate neurally-transformed simulations for materials understanding and manipulation. In electromechanical engineering, the method enhances the design and functionality of fluid-operated systems, particularly microfluidic devices, contributing to advancements in nanomaterial design, drug delivery systems, and more. The proposed approach also aligns with the principles of PBML, offering advantages such as multi-fluid stylization and consistent particle attribute transfer, a capability valuable in fields where the interaction of multiple fluid components is significant. Moreover, the application of neurally-transformed hydrodynamical models extends to manufacturing processes, such as the production of microelectromechanical systems, enhancing efficiency and cost-effectiveness. The system's ability to perform neural transfer on 3D fluid scenes using a deep learning algorithm alongside physical models adds a further layer of flexibility, allowing researchers to tailor simulations to specific needs across scientific and engineering disciplines.

Keywords: physics-based machine learning, robot vision, robot learning, hydrodynamics

Procedia PDF Downloads 60
308 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The use of magnetocardiography signals to assess cardiac electrical function is a new technology developed in recent years. The magnetocardiography (MCG) signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG), but it is buried in noise and difficult to extract, which is a critical issue for cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is proposed for denoising the MCG signal. The approach transforms the denoising problem into a minimization problem, and the majorization-minimization algorithm is applied to solve it iteratively. However, the traditional TV regularization method tends to cause a staircase effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement has three main parts. First, higher-order TV is applied to reduce the staircase effect, with the corresponding second-derivative matrix substituted for the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined from the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to eliminate noise while preserving the signal's peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
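
In the usual 1D formulation, the improvement described above amounts to swapping the first-difference operator in the TV penalty for a second-difference one:

```latex
\hat{x} \;=\; \arg\min_{x}\; \tfrac{1}{2}\,\lVert y - x \rVert_2^2
\;+\; \lambda\,\lVert D_2 x \rVert_1,
\qquad (D_2 x)_n = x_{n-1} - 2x_n + x_{n+1}
```

with y the noisy MCG record, lambda the regularization weight (made adaptive near detected peaks in the proposed method), and D2 the second-order difference matrix.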

Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation

Procedia PDF Downloads 144
307 FDX1, a Cuproptosis-Related Gene, Identified as a Potential Target for Human Ovarian Aging

Authors: Li-Te Lin, Chia-Jung Li, Kuan-Hao Tsui

Abstract:

Cuproptosis, a newly identified cell death mechanism, has attracted attention for its association with various diseases. However, the genetic interplay between cuproptosis and ovarian aging remains largely unexplored. This study aims to address this gap by analyzing datasets related to ovarian aging and cuproptosis. Spatial transcriptome analyses were conducted in the ovaries of both young and aged female mice to elucidate the role of FDX1. Comprehensive bioinformatics analyses, facilitated by R software, identified FDX1 as a potential cuproptosis-related gene with implications for ovarian aging. Clinical infertility biopsies were examined to validate these findings, showing consistent results in elderly infertile patients. Furthermore, pharmacogenomic analyses of ovarian cell lines explored the intricate association between FDX1 expression levels and sensitivity to specific small molecule drugs. Spatial transcriptome analyses revealed a significant reduction in FDX1 expression in aging ovaries, supported by consistent findings in biopsies from elderly infertile patients. Pharmacogenomic investigations indicated that modulating FDX1 could influence drug responses in ovarian-related therapies. This study pioneers the identification of FDX1 as a cuproptosis-related gene linked to ovarian aging. These findings not only contribute to understanding the mechanisms of ovarian aging but also position FDX1 as a potential diagnostic biomarker and therapeutic target. Further research may establish FDX1's pivotal role in advancing precision medicine and therapies for ovarian-related conditions.

Keywords: cuproptosis, FDX1, ovarian aging, biomarker

Procedia PDF Downloads 28
306 Participatory Testing of Precision Fertilizer Management Technologies in Mid-Hills of Nepal

Authors: Kedar Nath Nepal, Dyutiman Choudhary, Naba Raj Pandit, Yam Gahire

Abstract:

Crop fertilizer recommendations are outdated, as they are based on response trials conducted over half a century ago. Further, these recommendations were derived from response trials conducted over large geographical areas, ignoring the large spatial variability in the indigenous nutrient-supplying capacity of soils that is typical of most smallholder systems. Applying fertilizer according to such blanket recommendations in fields with varying native nutrient supply leads to under-application in some places and over-application in others, reducing nutrient use efficiency (NUE) and profitability and increasing the environmental risks associated with the loss of unutilized nutrients through emissions or leaching. Opportunities exist to further increase yield and profitability through significant gains in fertilizer use efficiency as affordable and precise application technologies are commercialized. We conducted participatory trials in maize (Zea mays), cauliflower (Brassica oleracea var. botrytis), and tomato (Solanum lycopersicum) in the mid-hills of Nepal to evaluate the efficacy of Urea Deep Placement (UDP) and Polymer-Coated Urea (PCU). UDP briquettes contain 46% N, each briquette weighing 2.7 g, while PCU contains 44% N. Both PCU and urea briquettes applied at a reduced rate (100 kg N/ha) during planting produced yields similar (p > 0.05) to those obtained with regular urea (200 kg N/ha). These fertilizers also reduced N fertilizer use by 35-50% relative to the government's blanket recommendations. Further, PCU and urea briquettes increased farmers' net income by USD 60 to 80.

Keywords: high efficiency fertilizers, urea deep placement, urea briquette, polymer coated urea, zea mays, brassica, lycopersicum, Nepal

Procedia PDF Downloads 165
305 Opto-Electronic Properties and Structural Phase Transition of Filled-Tetrahedral NaZnAs

Authors: R. Khenata, T. Djied, R. Ahmed, H. Baltache, S. Bin-Omran, A. Bouhemadou

Abstract:

We predict the structural, phase-transition, and opto-electronic properties of the filled-tetrahedral (Nowotny-Juza) NaZnAs compound in this study. Calculations are carried out by employing the full-potential (FP) linearized augmented plane wave (LAPW) plus local orbitals (lo) scheme developed within the framework of density functional theory (DFT). The exchange-correlation energy/potential (EXC/VXC) functional is treated using the Perdew-Burke-Ernzerhof (PBE) parameterization of the generalized gradient approximation (GGA); in addition, the Tran-Blaha (TB) modified Becke-Johnson (mBJ) potential is incorporated to obtain better precision for the optoelectronic properties. Geometry optimization is carried out to obtain reliable results for the total energy as well as the other structural parameters of each phase of the NaZnAs compound. The order of the structural transitions as a function of pressure is found to be: Cu2Sb type → β → α phase. Our calculated electronic band structures for all structural phases, at the level of both PBE-GGA and the mBJ potential, indicate that NaZnAs is a direct (Γ–Γ) band gap semiconductor; however, compared to PBE-GGA, the mBJ potential yields higher values of the fundamental band gap. Regarding the optical properties, calculations of the real and imaginary parts of the dielectric function, refractive index, reflectivity coefficient, absorption coefficient, and energy loss-function spectra are performed over a photon energy range from 0.0 to 30.0 eV, with the incident radiation polarized parallel to both the [100] and [001] crystalline directions.

Keywords: NaZnAs, FP-LAPW+lo, structural properties, phase transition, electronic band-structure, optical properties

Procedia PDF Downloads 421