Search results for: mean average precision
5600 Taguchi-Based Optimization of Surface Roughness and Dimensional Accuracy in Wire EDM Process with S7 Heat Treated Steel
Authors: Joseph C. Chen, Joshua Cox
Abstract:
This research focuses on the use of the Taguchi method to reduce surface roughness and improve the dimensional accuracy of parts machined by Wire Electrical Discharge Machining (EDM) from S7 heat-treated steel. Owing to its high impact toughness, the material is a candidate for a wide variety of tooling applications that require high dimensional precision and a specified surface roughness. This paper demonstrates that the Taguchi Parameter Design methodology can successfully optimize both dimensional accuracy and surface roughness by investigating seven controllable wire-EDM parameters: pulse on time (ON), pulse off time (OFF), servo voltage (SV), voltage (V), servo feed (SF), wire tension (WT), and wire speed (WS). The temperature of the water in the wire EDM process is treated as the noise factor in this research. Experimental design and analysis based on an L18 Taguchi orthogonal array are conducted. The results show that the Taguchi-based approach enables the wire EDM process to produce (1) high-precision parts with an average dimension of 0.6601 inches against a target of 0.6600 inches, and (2) a surface roughness of 1.7322 microns, significantly improved from 2.8160 microns. Keywords: Taguchi Parameter Design, surface roughness, Wire EDM, dimensional accuracy
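For readers unfamiliar with Taguchi Parameter Design, the Python sketch below is a hypothetical illustration (not code from the study) of the two signal-to-noise ratios implied by the abstract: nominal-the-best for the 0.6600-inch target dimension and smaller-the-better for surface roughness. The replicate values are invented for illustration only.

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi S/N ratio when lower is better (e.g., surface roughness)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_nominal_the_best(y):
    """Taguchi S/N ratio when a target value is best (e.g., a 0.6600 in dimension)."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(np.mean(y) ** 2 / np.var(y, ddof=1))

# Invented replicate measurements for one L18 run (two noise-factor levels).
roughness_um = [1.74, 1.71]        # microns
dimension_in = [0.66012, 0.66008]  # inches

print(sn_smaller_the_better(roughness_um))
print(sn_nominal_the_best(dimension_in))
```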
Procedia PDF Downloads 374
5599 Ultra-High Precision Diamond Turning of Infrared Lenses
Authors: Khaled Abou-El-Hossein
Abstract:
The presentation will address the features of two IR convex lenses manufactured on an ultra-high precision machining centre based on single-point diamond turning. The lenses are made from silicon and germanium with a radius of curvature of 500 mm. Because of the brittle nature of silicon and germanium, the machining parameters were selected so that a ductile cutting regime was achieved. The cutting speed was 800 rpm, while the feed rate and depth of cut were 20 mm/min and 20 um, respectively. Although both materials have a mono-crystalline microstructure and are quite similar in terms of optical properties, machining silicon presented more difficulties in terms of form accuracy than machining germanium. The P-V error of the silicon profile was 0.222 um, while it was only 0.055 um for the germanium lens. This could be attributed to the accelerated wear that takes place on the tool edge when turning mono-crystalline silicon. We are currently exploring other ranges of the machining parameters to determine the optimal settings that yield satisfactory form accuracy when fabricating silicon lenses. Keywords: diamond turning, optical surfaces, precision machining, surface roughness
Procedia PDF Downloads 319
5598 YOLO-Based Object Detection for the Automatic Classification of Intestinal Organoids
Authors: Luana Conte, Giorgio De Nunzio, Giuseppe Raso, Donato Cascio
Abstract:
The intestinal epithelium serves as a pivotal model for studying stem cell biology and diseases such as colorectal cancer. Intestinal epithelial organoids, which replicate many in vivo features of the intestinal epithelium, are increasingly used as research models. However, manual classification of organoids is labor-intensive and prone to subjectivity, limiting scalability. In this study, we developed an automated object-detection algorithm to classify intestinal organoids in transmitted-light microscopy images. Our approach utilizes the YOLOv10 medium model (YOLOv10m), a state-of-the-art object-detection algorithm, to predict and classify objects within labeled bounding boxes. The model was fine-tuned on a publicly available dataset containing 840 manually annotated images with 23,066 total annotations, averaging 28.2 annotations per image (median: 21; range: 1–137). It was trained to identify four categories: cysts, early organoids, late organoids, and spheroids, using a 90:10 train-validation split over 150 epochs. Model performance was assessed using mean average precision (mAP), precision, and recall metrics. The mAP, a standard metric ranging from 0 to 1 (with 1 indicating perfect agreement with manual labeling), was calculated at a 50% overlap threshold (mAP@0.5). Optimal performance was achieved at epoch 80, with an mAP of 0.85, precision of 0.78, and recall of 0.80 on the validation dataset. Class-specific mAP values were highest for cysts (0.87), followed by late organoids (0.83), early organoids (0.76), and spheroids (0.68). Additionally, the model demonstrated the ability to measure organoid sizes and classify them with accuracy comparable to expert scientists, while operating significantly faster. This automated pipeline represents a robust tool for large-scale, high-throughput analysis of intestinal organoids, paving the way for more efficient research in organoid biology and related fields. Keywords: intestinal organoids, object detection, YOLOv10, transmitted-light microscopy
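As a rough illustration of the mAP@0.5 metric described above, the following Python sketch (assumed, not the authors' code) matches predicted boxes to ground-truth boxes at an IoU threshold of 0.5 and computes average precision for a single class as the area under the precision-recall curve; detection frameworks such as the YOLO tooling report this automatically, and mAP averages it over classes.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(preds, gts, iou_thr=0.5):
    """preds: list of (confidence, box); gts: list of boxes -- one class."""
    preds = sorted(preds, key=lambda p: -p[0])        # highest confidence first
    matched, tp, fp = set(), [], []
    for score, box in preds:
        best_j, best_iou = -1, 0.0
        for j, gt in enumerate(gts):
            o = iou(box, gt)
            if o > best_iou and j not in matched:
                best_j, best_iou = j, o
        if best_iou >= iou_thr:
            matched.add(best_j); tp.append(1); fp.append(0)
        else:
            tp.append(0); fp.append(1)
    tp, fp = np.cumsum(tp), np.cumsum(fp)
    recall = tp / max(len(gts), 1)
    precision = tp / np.maximum(tp + fp, 1e-9)
    # rectangle approximation of the area under the precision-recall curve
    return float(np.sum(np.diff(np.concatenate(([0.0], recall))) * precision))
```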
Procedia PDF Downloads 10
5597 Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation
Authors: Jéssica Ferreira Rodrigues, Júlio Silvio De Sousa Bueno Filho, Vanessa Rios De Souza, Ana Carla Marques Pinheiro
Abstract:
This study aimed to evaluate the implications of block size and testing order on the efficiency and precision of preference estimation for Dulce de leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments. Precision was defined as the inverse of the variance of the estimates of treatment means (or effects). The experiment was originally designed to test 16 treatments as a series of 8 Sudoku 16x16 designs, 4 randomized independently and 4 others in the reverse order, to yield balance in testing order. Linear mixed models were fitted to the whole experiment, with 112 testers and all their grades, as well as to their partially balanced subgroups, namely: a) experiment with the four initial EU; b) experiment with EU 5 to 8; c) experiment with EU 9 to 12; and d) experiment with EU 13 to 16. Responses were recorded on a nine-point hedonic scale, and a mixed linear model was assumed with random tester and treatment effects and a fixed test-order effect. Analysis with a cumulative random-effects probit link model was very similar, with essentially no different conclusions, so for simplicity we present the results under the Gaussian assumption. The R-CRAN library lme4 and its function lmer (Fit Linear Mixed-Effects Models) were used for the mixed models, and the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with the function clmm (Cumulative Link Mixed Model), were used to check the Bayesian analysis of threshold models and cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating acceptance. However, providing a large number of samples can help to improve sample discrimination. Keywords: acceptance, block size, mixed linear model, testing order
Procedia PDF Downloads 324
5596 The Correlation between Education, Food Intake, Exercise, and Medication Obedience with the Average of Blood Sugar in Indonesia
Authors: Aisyah Rahmatul Laily
Abstract:
The Indonesian Ministry of Health is increasing its awareness of non-communicable diseases. Of the top ten causes of death, two are non-communicable diseases. Diabetes Mellitus is one of these two non-communicable diseases, with the number of patients increasing from year to year. Against this background, this research was conducted to determine the correlation between education, food intake, exercise, and medication obedience and the average blood sugar level. The researchers used an observational, cross-sectional design. The sample used in this research comprised 50 patients at Puskesmas Gamping I Yogyakarta who had suffered from Diabetes Mellitus over a long period. The researchers performed anamnesis using a questionnaire to collect the data and then analyzed the data with the Chi-Square test to determine the correlation between each variable. The dependent variable in this research is the average blood sugar level, whereas the independent variables are education, food intake, exercise, and medication obedience. The results show a relation between education and average blood sugar level (p=0.029), a relation between food intake and average blood sugar level (p=0.009), and a relation between exercise and average blood sugar level (p=0.023). There is also a relation between medication obedience and average blood sugar level (p=0.002). The conclusion is that positive correlations exist between education and average blood sugar level, between food intake and average blood sugar level, and between medication obedience and average blood sugar level. Keywords: average of blood sugar, education, exercise, food intake, medication obedience
Procedia PDF Downloads 278
5595 An Electrocardiography Deep Learning Model to Detect Atrial Fibrillation on Clinical Application
Authors: Jui-Chien Hsieh
Abstract:
Background: 12-lead electrocardiography (ECG) is one of the most frequently used tools in clinical practice to detect atrial fibrillation (AF), which may lead to life-threatening stroke. In this study, AF detection by the clinically used 12-lead ECG device had a positive predictive value (PPV) of only 0.73~0.77. Objective: There is great demand for a new algorithm that improves the precision of AF detection using 12-lead ECG. Leveraging progress in artificial intelligence (AI), we developed an ECG deep learning model able to recognize AF patterns and reduce false-positive errors. Methods: (1) 570 12-lead ECG reports whose computer interpretation by the ECG device was AF were collected as the training dataset. The reports were interpreted by two senior cardiologists, who confirmed that the precision of AF detection by the ECG device was 0.73. (2) 88 12-lead ECG reports whose computer interpretation by the ECG device was AF were used as the test dataset. Cardiologists confirmed that 68 of the 88 reports were AF and the others were not; the precision of AF detection by the ECG device was about 0.77. (3) A parallel 4-layer one-dimensional convolutional neural network (CNN) was developed to identify AF based on limb-lead and chest-lead ECGs. Results: The model performed better on AF detection than the traditional computer interpretation of the ECG device in the 88 test samples, with 0.94 PPV, 0.98 sensitivity, and 0.80 specificity. Conclusions: Compared to the clinical ECG device, this AI ECG model raises the precision of AF detection from 0.77 to 0.94 and can have an impact on clinical applications. Keywords: 12-lead ECG, atrial fibrillation, deep learning, convolutional neural network
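The PPV, sensitivity, and specificity figures reported above follow directly from a binary confusion matrix; the short Python sketch below is an assumed illustration with invented counts (roughly consistent with 88 test reports, 68 of them AF), not the study's data.

```python
def binary_metrics(tp, fp, fn, tn):
    """Positive predictive value, sensitivity, and specificity from confusion-matrix counts."""
    ppv = tp / (tp + fp)            # precision of the AF calls
    sensitivity = tp / (tp + fn)    # recall of true AF cases
    specificity = tn / (tn + fp)    # how well non-AF cases are rejected
    return ppv, sensitivity, specificity

# Invented counts: 67 true positives, 4 false positives, 1 false negative, 16 true negatives.
print(binary_metrics(tp=67, fp=4, fn=1, tn=16))  # ~0.94, ~0.99, 0.80
```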
Procedia PDF Downloads 115
5594 Dynamic Compensation for Environmental Temperature Variation in the Coolant Refrigeration Cycle as a Means of Increasing Machine-Tool Precision
Authors: Robbie C. Murchison, Ibrahim Küçükdemiral, Andrew Cowell
Abstract:
Thermal effects are the largest source of dimensional error in precision machining, and a major proportion is caused by ambient temperature variation. The use of coolant is a primary means of mitigating these effects, but there has been limited work on coolant temperature control. This research critically explored whether CNC-machine coolant refrigeration systems adapted to actively compensate for ambient temperature variation could increase machining accuracy. Accuracy data were collected from operators’ checklists for a CNC 5-axis mill and statistically reduced to bias and precision metrics for observations of one day over a sample period of 27 days. Temperature data were collected using three USB dataloggers in ambient air, the chiller inflow, and the chiller outflow. The accuracy and temperature data were analysed using Pearson correlation, then the thermodynamics of the system were described using system identification with MATLAB. It was found that 75% of thermal error is reflected in the hot coolant temperature but that this is negligibly dependent on ambient temperature. The effect of the coolant refrigeration process on hot coolant outflow temperature was also found to be negligible. Therefore, the evidence indicated that it would not be beneficial to adapt coolant chillers to compensate for ambient temperature variation. However, it is concluded that hot coolant outflow temperature is a robust and accessible source of thermal error data which could be used for prevention strategy evaluation or as the basis of other thermal error strategies.Keywords: CNC manufacturing, machine-tool, precision machining, thermal error
Procedia PDF Downloads 91
5593 Sentiment Analysis of Fake Health News Using Naive Bayes Classification Models
Authors: Danielle Shackley, Yetunde Folajimi
Abstract:
As more people turn to the internet seeking health-related information, there is more risk of finding false, inaccurate, or dangerous information. Sentiment analysis is a natural language processing technique that assigns polarity scores to text, ranging from positive, neutral, and negative. In this research, we evaluate the weight of a sentiment analysis feature added to fake health news classification models. The dataset consists of existing reliably labeled health article headlines that were supplemented with health information collected about COVID-19 from social media sources. We started with data preprocessing and tested out various vectorization methods such as Count and TFIDF vectorization. We implemented 3 Naive Bayes classifier models, including Bernoulli, Multinomial, and Complement. To test the weight of the sentiment analysis feature on the dataset, we created benchmark Naive Bayes classification models without sentiment analysis, and those same models were reproduced, and the feature was added. We evaluated using the precision and accuracy scores. The Bernoulli initial model performed with 90% precision and 75.2% accuracy, while the model supplemented with sentiment labels performed with 90.4% precision and stayed constant at 75.2% accuracy. Our results show that the addition of sentiment analysis did not improve model precision by a wide margin; while there was no evidence of improvement in accuracy, we had a 1.9% improvement margin of the precision score with the Complement model. Future expansion of this work could include replicating the experiment process and substituting the Naive Bayes for a deep learning neural network model.Keywords: sentiment analysis, Naive Bayes model, natural language processing, topic analysis, fake health news classification model
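A minimal sketch of the approach described above, assuming scikit-learn and an externally supplied sentiment polarity label per headline (the toy headlines, labels, and sentiment codes are invented): the benchmark model uses count features only, and the second model appends the sentiment label as an extra non-negative feature column.

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import ComplementNB
from sklearn.metrics import precision_score, accuracy_score

headlines = ["new cure for covid found", "vaccine trial shows promise"]  # toy data
labels = [1, 0]          # 1 = fake, 0 = reliable
sentiment = [2, 1]       # e.g. 0 = negative, 1 = neutral, 2 = positive

vec = CountVectorizer()
X_counts = vec.fit_transform(headlines)

# Benchmark: counts only.
baseline = ComplementNB().fit(X_counts, labels)

# With sentiment: append the polarity code as an extra feature column.
X_sent = hstack([X_counts, csr_matrix(np.array(sentiment).reshape(-1, 1))])
with_sentiment = ComplementNB().fit(X_sent, labels)

for name, model, X in [("baseline", baseline, X_counts),
                       ("with sentiment", with_sentiment, X_sent)]:
    pred = model.predict(X)
    print(name, precision_score(labels, pred), accuracy_score(labels, pred))
```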
Procedia PDF Downloads 100
5592 Optimal Design for SARMA(P,Q)L Process of EWMA Control Chart
Authors: Yupaporn Areepong
Abstract:
The main goal of this paper is to study Statistical Process Control (SPC) with an Exponentially Weighted Moving Average (EWMA) control chart when observations are serially correlated. The key characteristic of a control chart is the Average Run Length (ARL), the average number of samples taken before an action signal is given. Ideally, the ARL of an in-control process, denoted ARL0, should be sufficiently large, whereas it should be small when the process is out of control; this quantity is denoted ARL1 and is also called the Average Delay Time, or the mean time to a true alarm. We derive explicit formulas of the ARL for the EWMA control chart applied to Seasonal Autoregressive and Moving Average (SARMA) processes with exponential white noise. The ARL results obtained from the explicit formulas and from the integral equation are in good agreement. In particular, these formulas for evaluating ARL0 and ARL1 make it possible to obtain a set of optimal parameters, which depend on the smoothing parameter (λ) and the width of the control limit (H), for designing an EWMA chart with minimum ARL1. Keywords: average run length, optimal parameters, exponentially weighted moving average (EWMA), control chart
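The paper derives explicit ARL formulas; as a rough cross-check of the concept, the hedged Python sketch below estimates ARL0 by Monte Carlo for a simple one-sided EWMA chart on i.i.d. exponential white noise (a simplification of the SARMA setting, with invented λ and H values).

```python
import numpy as np

def ewma_run_length(lam, h, mu0=1.0, n_max=100_000, rng=None):
    """Samples observed until the EWMA statistic of exponential(mu0) noise exceeds the limit h."""
    rng = rng or np.random.default_rng()
    z = mu0                                   # start the EWMA at the in-control mean
    for t in range(1, n_max + 1):
        x = rng.exponential(mu0)
        z = lam * x + (1.0 - lam) * z
        if z > h:                             # one-sided upper control limit
            return t
    return n_max

def estimate_arl(lam=0.1, h=1.5, reps=2000):
    rng = np.random.default_rng(0)
    return np.mean([ewma_run_length(lam, h, rng=rng) for _ in range(reps)])

print(estimate_arl())   # in-control ARL0 estimate for the assumed (λ, H)
```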
Procedia PDF Downloads 562
5591 Comparison of Classical Computer Vision vs. Convolutional Neural Networks Approaches for Weed Mapping in Aerial Images
Authors: Paulo Cesar Pereira Junior, Alexandre Monteiro, Rafael da Luz Ribeiro, Antonio Carlos Sobieranski, Aldo von Wangenheim
Abstract:
In this paper, we present a comparison between convolutional neural networks and classical computer vision approaches for the specific precision agriculture problem of weed mapping in aerial images of sugarcane fields. A systematic literature review was conducted to find which computer vision methods are being used for this specific problem. The most cited methods were implemented, as well as four convolutional neural network models. All implemented approaches were tested on the same dataset, and their results were analyzed quantitatively and qualitatively. The obtained results were compared against a ground truth produced by a human expert for validation. The results indicate that the convolutional neural networks achieve better precision and generalize better than the classical models. Keywords: convolutional neural networks, deep learning, digital image processing, precision agriculture, semantic segmentation, unmanned aerial vehicles
Procedia PDF Downloads 263
5590 Kinematic Analysis of the Calf Raise Test Using a Mobile iOS Application: Validation of the Calf Raise Application
Authors: Ma. Roxanne Fernandez, Josie Athens, Balsalobre-Fernandez, Masayoshi Kubo, Kim Hébert-Losier
Abstract:
Objectives: The calf raise test (CRT) is used in rehabilitation and sports medicine to evaluate calf muscle function. For testing, individuals stand on one leg and go up on their toes and back down to volitional fatigue. The newly developed Calf Raise application (CRapp) for iOS uses computer-vision algorithms enabling objective measurement of CRT outcomes. We aimed to validate the CRapp by examining its concurrent validity and agreement levels against laboratory-based equipment and establishing its intra- and inter-rater reliability. Methods: CRT outcomes (i.e., repetitions, positive work, total height, peak height, fatigue index, and peak power) were assessed in thirteen healthy individuals (6 males, 7 females) on three occasions and both legs using the CRapp, 3D motion capture, and force plate technologies simultaneously. Data were extracted from two markers: one placed immediately below the lateral malleolus and another on the heel. Concurrent validity and agreement measures were determined using intraclass correlation coefficients (ICC₃,ₖ), typical errors expressed as coefficient of variations (CV), and Bland-Altman methods to assess biases and precision. Reliability was assessed using ICC3,1 and CV values. Results: Validity of CRapp outcomes was good to excellent across measures for both markers (mean ICC ≥0.878), with precision plots showing good agreement and precision. CV ranged from 0% (repetitions) to 33.3% (fatigue index) and were, on average better for the lateral malleolus marker. Additionally, inter- and intra-rater reliability were excellent (mean ICC ≥0.949, CV ≤5.6%). Conclusion: These results confirm the CRapp is valid and reliable within and between users for measuring CRT outcomes in healthy adults. The CRapp provides a tool to objectivise CRT outcomes in research and practice, aligning with recent advances in mobile technologies and their increased use in healthcare.Keywords: calf raise test, mobile application, validity, reliability
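For readers who want to reproduce agreement statistics of the kind described above on their own data, a minimal Python sketch is shown below, assuming the pingouin library; the long-format column names and values are invented, and the original analysis may have used different software.

```python
import pandas as pd
import pingouin as pg

# Long-format toy data: each participant's peak height measured by two methods.
df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "method": ["CRapp", "mocap"] * 6,
    "peak_height_cm": [11.2, 11.0, 9.8, 10.1, 12.4, 12.6,
                       10.7, 10.5, 11.9, 12.0, 9.2, 9.4],
})

icc = pg.intraclass_corr(data=df, targets="participant",
                         raters="method", ratings="peak_height_cm")
print(icc[["Type", "ICC", "CI95%"]])   # the ICC3k row corresponds to ICC(3,k)
```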
Procedia PDF Downloads 167
5589 Influence of Human Resource Management Practices on Agricultural Employees’ Behavior
Authors: B. G. Abiona, O. E. Fapojuwo, T. Akinlawon
Abstract:
This study assessed the influence of human resource management practices on agricultural employees' behavior. Data were collected from 75 randomly selected respondents using a well-structured questionnaire. The mean age of the employees was 43.2 years. The major human resource management practices that influence employees' behavior were: in-service training is given to employees on a regular basis (average value of x=3.44), management rewards employees who are committed to their job (average value of x=3.41), and rewards are designed to encourage wide participation and activity (average value of x=3.41). The major employee behaviors included: managers and employees want to create better job performance (average value of x=3.13), and administrators provide praise and recognition for effective performance and show appreciation for special effort (average value of x=3.05). The major factors affecting employees' behavior were: inadequate training (average value of x=2.93), inadequate local and international training (average value of x=2.87), and inadequate grants for training programmes (average value of x=2.81). A significant relationship was found between the respondents' gender (χ2 = 37.204, P<0.05), educational qualification (χ2 = 59.093, P<0.05), income (r=0.122, P<0.05), and human resource management practices (r=0.573, P<0.05) and employees' behavior. Management should encourage employees who are committed to their job through awards and recognition. Keywords: human resources management, agricultural employees, behaviour, research institutes, Nigeria
Procedia PDF Downloads 256
5588 Rock Property Calculation for Determine Hydrocarbon Zone Based on Petrophysical Principal and Sequence Stratigraphic Correlation in Blok M
Authors: Muhammad Tarmidzi, Reza M. G. Gani, Andri Luthfi
Abstract:
The purpose of this study is to identify rock zones containing hydrocarbons by calculating rock properties, including shale volume, total porosity, effective porosity, and water saturation. The rock properties are identified from the GR log, resistivity log, neutron log, and density log. Zoning is based on sequence stratigraphic markers, namely the sequence boundary (SB), transgressive surface (TS), and flooding surface (FS), obtained by correlating ten well logs in block “M”. The sequence stratigraphic correlation yields eight zones: two LST zones, three TST zones, and three HST zones. The rock property calculation in each zone shows two LST zones containing hydrocarbons. The LST-1 zone has an average shale volume (Vsh) of 25%, average total porosity (PHIT) of 14%, average effective porosity (PHIE) of 11%, and average water saturation of 0.83. The LST-2 zone has an average shale volume (Vsh) of 19%, average total porosity (PHIT) of 21%, average effective porosity (PHIE) of 17%, and average water saturation of 0.82. Keywords: hydrocarbon zone, petrophysics, rock property, sequence stratigraphy
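For context, the rock properties named above are commonly computed with standard petrophysical relations; the Python sketch below is a generic illustration of those textbook formulas (a linear gamma-ray shale-volume index, shale-corrected effective porosity, and Archie's water saturation with assumed a, m, n constants), not the workflow actually used in block “M”, and all log readings are invented.

```python
def shale_volume(gr, gr_clean, gr_shale):
    """Linear gamma-ray index: Vsh = (GR - GRclean) / (GRshale - GRclean), clipped to [0, 1]."""
    return min(max((gr - gr_clean) / (gr_shale - gr_clean), 0.0), 1.0)

def effective_porosity(phit, vsh):
    """Simple shale correction: PHIE = PHIT * (1 - Vsh)."""
    return phit * (1.0 - vsh)

def archie_sw(rw, rt, phie, a=1.0, m=2.0, n=2.0):
    """Archie's equation: Sw = ((a * Rw) / (phi^m * Rt)) ** (1/n)."""
    return ((a * rw) / (phie ** m * rt)) ** (1.0 / n)

# Invented log readings for a single depth sample.
vsh = shale_volume(gr=75.0, gr_clean=30.0, gr_shale=150.0)
phie = effective_porosity(phit=0.14, vsh=vsh)
sw = archie_sw(rw=0.05, rt=20.0, phie=phie)
print(round(vsh, 2), round(phie, 3), round(sw, 2))
```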
Procedia PDF Downloads 330
5587 Content-Based Mammograms Retrieval Based on Breast Density Criteria Using Bidimensional Empirical Mode Decomposition
Authors: Sourour Khouaja, Hejer Jlassi, Nadia Feddaoui, Kamel Hamrouni
Abstract:
Most medical images, and especially mammograms, are now stored in large databases. Retrieving a desired image is of great importance for finding the diagnoses of similar previous cases. Our method is designed to assist radiologists in retrieving mammographic images containing breasts with a density aspect similar to that seen on the query mammogram. This is a challenge given the importance of density criteria in cancer prediction and their effect on segmentation. We used Bidimensional Empirical Mode Decomposition (BEMD) to characterize image content and the Euclidean distance to measure similarity between images. Through experiments on the MIAS mammography image database, we confirm that the results are promising. Performance was evaluated using precision and recall curves comparing query and retrieved images. The recall-precision computation demonstrated the effectiveness of applying CBIR to large mammographic image databases. We found a precision of 91.2% with a recall of 86.8%. Keywords: BEMD, breast density, content-based, image retrieval, mammography
Procedia PDF Downloads 236
5586 Comparative Evaluation of EBT3 Film Dosimetry Using Flatbed Scanner, Densitometer and Spectrophotometer Methods and Its Applications in Radiotherapy
Authors: K. Khaerunnisa, D. Ryangga, S. A. Pawiro
Abstract:
Over the past few decades, film dosimetry has become a tool used in various radiotherapy modalities, either for clinical quality assurance (QA) or for dose verification. The response of the film to irradiation is usually expressed as optical density (OD) or net optical density (netOD). Because the film's response to radiation is not linear, the use of film as a dosimeter must go through a calibration process. This study aimed to compare the calibration-curve response functions of several measurement methods: a flatbed scanner, a point densitometer, and a spectrophotometer. For each response function, a radiochromic film calibration curve was generated and assessed in terms of accuracy, precision, and sensitivity. With the film scanner, netOD is obtained by measuring the change in optical density of the film before and after irradiation, using ImageJ to extract the pixel value of the film from the red channel of the three RGB channels; with the point densitometer, it is obtained from the change in OD before and after irradiation; and with the spectrophotometer, from the change in absorbance before and after irradiation. The results showed that the three calibration methods gave netOD readings with a dose precision below 3% at an uncertainty level of 1σ (one sigma). The sensitivity of the three methods follows the same trend in response to radiation but differs in magnitude. The accuracy of the three methods is below 3% for doses above 100 cGy and 200 cGy, but for doses below 100 cGy it exceeds 3% for the point densitometer and the spectrophotometer. When the three methods are used for clinical implementation, the study shows accuracy and precision below 2% for the scanner and the spectrophotometer, and above 3% for the point densitometer. Keywords: calibration methods, film dosimetry EBT3, flatbed scanner, densitometer, spectrophotometer
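The netOD quantity referred to above has a simple closed form; the sketch below is an assumed Python illustration (not the authors' script) of how it would typically be computed from red-channel pixel values exported from a flatbed scan, together with the matching absorbance-based form for a spectrophotometer reading. The pixel and absorbance values are invented.

```python
import numpy as np

def net_od_from_pixels(pv_before, pv_after, pv_background=0.0):
    """netOD from red-channel pixel values of the film before/after irradiation."""
    return np.log10((pv_before - pv_background) / (pv_after - pv_background))

def net_od_from_absorbance(a_before, a_after):
    """For a spectrophotometer, netOD is the change in absorbance (OD)."""
    return a_after - a_before

# Invented 16-bit red-channel means for one film piece, and invented absorbances.
print(net_od_from_pixels(pv_before=41000.0, pv_after=26500.0))
print(net_od_from_absorbance(a_before=0.21, a_after=0.40))
```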
Procedia PDF Downloads 136
5585 Electrocardiogram-Based Heartbeat Classification Using Convolutional Neural Networks
Authors: Jacqueline Rose T. Alipo-on, Francesca Isabelle F. Escobar, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar Al Dahoul
Abstract:
Electrocardiogram (ECG) signal analysis and processing are crucial in the diagnosis of cardiovascular diseases, which are considered one of the leading causes of mortality worldwide. However, the traditional rule-based analysis of large volumes of ECG data is time-consuming, labor-intensive, and prone to human errors. With the advancement of the programming paradigm, algorithms such as machine learning have been increasingly used to perform an analysis of ECG signals. In this paper, various deep learning algorithms were adapted to classify five classes of heartbeat types. The dataset used in this work is the synthetic MIT-BIH Arrhythmia dataset produced from generative adversarial networks (GANs). Various deep learning models such as ResNet-50 convolutional neural network (CNN), 1-D CNN, and long short-term memory (LSTM) were evaluated and compared. ResNet-50 was found to outperform other models in terms of recall and F1 score using a five-fold average score of 98.88% and 98.87%, respectively. 1-D CNN, on the other hand, was found to have the highest average precision of 98.93%.Keywords: heartbeat classification, convolutional neural network, electrocardiogram signals, generative adversarial networks, long short-term memory, ResNet-50
Procedia PDF Downloads 131
5584 Analysis and Performance of European Geostationary Navigation Overlay Service System in North of Algeria for GPS Single Point Positioning
Authors: Tabti Lahouaria, Kahlouche Salem, Benadda Belkacem, Beldjilali Bilal
Abstract:
The European Geostationary Navigation Overlay Service (EGNOS) provides an augmentation signal for GPS (Global Positioning System) single point positioning. EGNOS currently provides correction data and integrity information on the GPS L1 (1575.42 MHz) frequency band. The main objective of the system is to provide better real-time positioning precision than GPS alone, and it is intended to be used with single-frequency code observations. EGNOS offers open service (OS) navigation performance in terms of precision and availability, but this performance gradually degrades when moving away from the service area: the service becomes less and less available as the user moves away from the EGNOS coverage. The improvement in the position solution is investigated using two collocated dual-frequency GPS receivers at a site where no EGNOS Ranging and Integrity Monitoring Station (RIMS) exists. One set of pseudo-ranges was kept as stand-alone GPS and the other was corrected by EGNOS to estimate the planimetric and altimetric precision on different dates. Positioning precision improved significantly in the EGNOS-corrected solution. The performance of the EGNOS system in the north of Algeria is also investigated in terms of integrity. The results show that the horizontal protection level (HPL) is below 18.25 meters (95%) and the vertical protection level (VPL) is below 42.22 meters (95%). These results indicate good integrity information transmitted by EGNOS for the APV-I service. The service is thus compliant with the aviation requirements for Approaches with Vertical Guidance (APV-I), which are characterised by a 40 m horizontal alarm limit (HAL) and a 50 m vertical alarm limit (VAL). Keywords: EGNOS, GPS, positioning, integrity, protection level
Procedia PDF Downloads 227
5583 Economics of Precision Mechanization in Wine and Table Grape Production
Authors: Dean A. McCorkle, Ed W. Hellman, Rebekka M. Dudensing, Dan D. Hanselka
Abstract:
The motivation for this study centers on the labor- and cost-intensive nature of wine and table grape production in the U.S., and the potential opportunities for precision mechanization using robotics to augment those production tasks that are labor-intensive. The objectives of this study are to evaluate the economic viability of grape production in five U.S. states under current operating conditions, identify common production challenges and tasks that could be augmented with new technology, and quantify a maximum price for new technology that growers would be able to pay. Wine and table grape production is primed for precision mechanization technology as it faces a variety of production and labor issues. Methodology: Using a grower panel process, this project includes the development of a representative wine grape vineyard in five states and a representative table grape vineyard in California. The panels provided production, budget, and financial-related information that are typical for vineyards in their area. Labor costs for various production tasks are of particular interest. Using the data from the representative budget, 10-year projected financial statements have been developed for the representative vineyard and evaluated using a stochastic simulation model approach. Labor costs for selected vineyard production tasks were evaluated for the potential of new precision mechanization technology being developed. These tasks were selected based on a variety of factors, including input from the panel members, and the extent to which the development of new technology was deemed to be feasible. The net present value (NPV) of the labor cost over seven years for each production task was derived. This allowed for the calculation of a maximum price for new technology whereby the NPV of labor costs would equal the NPV of purchasing, owning, and operating new technology. Expected Results: The results from the stochastic model will show the projected financial health of each representative vineyard over the 2015-2024 timeframe. Investigators have developed a preliminary list of production tasks that have the potential for precision mechanization. For each task, the labor requirements, labor costs, and the maximum price for new technology will be presented and discussed. Together, these results will allow technology developers to focus and prioritize their research and development efforts for wine and table grape vineyards, and suggest opportunities to strengthen vineyard profitability and long-term viability using precision mechanization.Keywords: net present value, robotic technology, stochastic simulation, wine and table grapes
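The study's break-even logic rests on a standard net present value comparison; the Python sketch below (with an invented discount rate and invented labor and operating cash flows, not the panel data) illustrates how a maximum technology price can be derived by equating the NPV of seven years of labor cost with the NPV of owning and operating the new technology.

```python
def npv(rate, cash_flows):
    """Net present value of cash flows occurring at the end of years 1..n."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Invented figures: annual labor cost for one vineyard task over seven years,
# and annual operating cost of a hypothetical robotic alternative.
rate = 0.06
labor = [12000.0] * 7
operating = [2000.0] * 7

# Maximum purchase price so that buying + operating the technology
# costs no more (in NPV terms) than continuing to pay for labor.
max_price = npv(rate, labor) - npv(rate, operating)
print(round(max_price, 2))
```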
Procedia PDF Downloads 263
5582 Performance Evaluation of the CareSTART S1 Analyzer for Quantitative Point-Of-Care Measurement of Glucose-6-Phosphate Dehydrogenase Activity
Authors: Haiyoung Jung, Mi Joung Leem, Sun Hwa Lee
Abstract:
Background & Objective: Glucose-6-phosphate dehydrogenase (G6PD) deficiency is a genetic abnormality that results in an inadequate amount of G6PD, leading to increased susceptibility of red blood cells to reactive oxygen species and hemolysis. The present study aimed to evaluate the careSTART™ S1 analyzer for measuring the G6PD activity to hemoglobin (Hb) ratio. Methods: Precision for G6PD activity and hemoglobin measurement was evaluated using control materials with two levels on five repeated runs per day for five days. The analytic performance of the careSTART™ S1 analyzer was compared with spectrophotometry in 40 patient samples. Reference ranges suggested by the manufacturer were validated in 20 healthy males and females each. Results: The careSTART™ S1 analyzer demonstrated precision of 6.0% for low-level (14~45 U/dL) and 2.7% for high-level (60~90 U/dL) control in G6PD activity, and 1.4% in hemoglobin (7.9~16.3 u/g Hb). A comparison study of the G6PD to Hb ratio between the careSTART™ S1 analyzer and spectrophotometry showed an average difference of 29.1% with a positive bias of the careSTART™ S1 analyzer. All normal samples from the healthy population were validated for the suggested reference range for males (≥2.19 U/g Hb) and females (≥5.83 U/g Hb). Conclusion: The careSTART™ S1 analyzer demonstrated good analytical performance and can replace the current spectrophotometric measurement of G6PD enzyme activity. In the aspect of the management of clinical laboratories, it can be a reasonable option as a point-of-care analyzer with minimal handling of samples and reagents, in addition to the automatic calculation of the ratio of measured G6PD activity and Hb concentration, to minimize any clerical errors involved with manual calculation. Keywords: POCT, G6PD, performance evaluation, careSTART
Procedia PDF Downloads 66
5581 Identification of Classes of Bilinear Time Series Models
Authors: Anthony Usoro
Abstract:
In this paper, two classes of bilinear time series models are obtained under certain conditions from the general bilinear autoregressive moving average model. The Bilinear Autoregressive (BAR) and Bilinear Moving Average (BMA) models have been identified. From the general bilinear model, the BAR and BMA models are shown to exist for q = Q = 0 (implying j = 0) and for p = P = 0 (implying i = 0), respectively. These models are found useful for modelling much economic and financial data. Keywords: autoregressive model, bilinear autoregressive model, bilinear moving average model, moving average model
Procedia PDF Downloads 411
5580 Sea-Land Segmentation Method Based on the Transformer with Enhanced Edge Supervision
Authors: Lianzhong Zhang, Chao Huang
Abstract:
Sea-land segmentation is a basic step in many tasks such as sea surface monitoring and ship detection. The existing sea-land segmentation algorithms have poor segmentation accuracy, and the parameter adjustments are cumbersome and difficult to meet actual needs. Also, the current sea-land segmentation adopts traditional deep learning models that use Convolutional Neural Networks (CNN). At present, the transformer architecture has achieved great success in the field of natural images, but its application in the field of radar images is less studied. Therefore, this paper proposes a sea-land segmentation method based on the transformer architecture to strengthen edge supervision. It uses a self-attention mechanism with a gating strategy to better learn relative position bias. Meanwhile, an additional edge supervision branch is introduced. The decoder stage allows the feature information of the two branches to interact, thereby improving the edge precision of the sea-land segmentation. Based on the Gaofen-3 satellite image dataset, the experimental results show that the method proposed in this paper can effectively improve the accuracy of sea-land segmentation, especially the accuracy of sea-land edges. The mean IoU (Intersection over Union), edge precision, overall precision, and F1 scores respectively reach 96.36%, 84.54%, 99.74%, and 98.05%, which are superior to those of the mainstream segmentation models and have high practical application values.Keywords: SAR, sea-land segmentation, deep learning, transformer
Procedia PDF Downloads 185
5579 Establishment of Precision System for Underground Facilities Based on 3D Absolute Positioning Technology
Authors: Yonggu Jang, Jisong Ryu, Woosik Lee
Abstract:
The study aims to address the limitations of existing underground facility exploration equipment in terms of exploration depth range, relative depth measurement, data processing time, and human-centered ground penetrating radar image interpretation. The study proposed the use of 3D absolute positioning technology to develop a precision underground facility exploration system. The aim of this study is to establish a precise exploration system for underground facilities based on 3D absolute positioning technology, which can accurately survey up to a depth of 5m and measure the 3D absolute location of precise underground facilities. The study developed software and hardware technologies to build the precision exploration system. The software technologies developed include absolute positioning technology, ground surface location synchronization technology of GPR exploration equipment, GPR exploration image AI interpretation technology, and integrated underground space map-based composite data processing technology. The hardware systems developed include a vehicle-type exploration system and a cart-type exploration system. The data was collected using the developed exploration system, which employs 3D absolute positioning technology. The GPR exploration images were analyzed using AI technology, and the three-dimensional location information of the explored precise underground facilities was compared to the integrated underground space map. The study successfully developed a precision underground facility exploration system based on 3D absolute positioning technology. The developed exploration system can accurately survey up to a depth of 5m and measure the 3D absolute location of precise underground facilities. The system comprises software technologies that build a 3D precise DEM, synchronize the GPR sensor's ground surface 3D location coordinates, automatically analyze and detect underground facility information in GPR exploration images and improve accuracy through comparative analysis of the three-dimensional location information, and hardware systems, including a vehicle-type exploration system and a cart-type exploration system. The study's findings and technological advancements are essential for underground safety management in Korea. The proposed precision exploration system significantly contributes to establishing precise location information of underground facility information, which is crucial for underground safety management and improves the accuracy and efficiency of exploration. The study addressed the limitations of existing equipment in exploring underground facilities, proposed 3D absolute positioning technology-based precision exploration system, developed software and hardware systems for the exploration system, and contributed to underground safety management by providing precise location information. The developed precision underground facility exploration system based on 3D absolute positioning technology has the potential to provide accurate and efficient exploration of underground facilities up to a depth of 5m. The system's technological advancements contribute to the establishment of precise location information of underground facility information, which is essential for underground safety management in Korea.Keywords: 3D absolute positioning, AI interpretation of GPR exploration images, complex data processing, integrated underground space maps, precision exploration system for underground facilities
Procedia PDF Downloads 64
5578 An Open-Source Guidance System for an Autonomous Planter Robot in Precision Agriculture
Authors: Nardjes Hamini, Mohamed Bachir Yagoubi
Abstract:
Precision agriculture has revolutionized farming by enabling farmers to monitor their crops remotely in real-time. By utilizing technologies such as sensors, farmers can detect the state of growth, hydration levels, and nutritional status and even identify diseases affecting their crops. With this information, farmers can make informed decisions regarding irrigation, fertilization, and pesticide application. Automated agricultural tasks, such as plowing, seeding, planting, and harvesting, are carried out by autonomous robots and have helped reduce costs and increase production. Despite the advantages of precision agriculture, its high cost makes it inaccessible to small and medium-sized farms. To address this issue, this paper presents an open-source guidance system for an autonomous planter robot. The system is composed of a Raspberry Pi-type nanocomputer equipped with Wi-Fi, a GPS module, a gyroscope, and a power supply module. The accompanying application allows users to enter and calibrate maps with at least four coordinates, enabling the localized contour of the parcel to be captured. The application comprises several modules, such as the mission entry module, which traces the planting trajectory and points, and the action plan entry module, which creates an ordered list of pre-established tasks such as loading, following the plan, returning to the garage, and entering sleep mode. A remote control module enables users to control the robot manually, visualize its location on the map, and use a real-time camera. Wi-Fi coverage is provided by an outdoor access point, covering a 2km circle. This open-source system offers a low-cost alternative for small and medium-sized farms, enabling them to benefit from the advantages of precision agriculture.Keywords: autonomous robot, guidance system, low-cost, medium farms, open-source system, planter robot, precision agriculture, real-time monitoring, remote control, small farms
Procedia PDF Downloads 112
5577 A Comparative Analysis of Grade Weighted Average and Comprehensive Examination Result of Non Board Passers and Board Passers
Authors: Rob Gesley Capistrano, Jasper James Isaac, Rose Mae Moralda, Therese Anne Peleo, Danica Rillo, Maria Virginia Santillian
Abstract:
Academic background, specifically the Grade Weighted Average and the result of the Comprehensive Examination, is one valuable indicator of individual ability. The general objective of this study is to determine whether there is a significant difference between the Grade Weighted Average and the Comprehensive Examination result of Psychometrician board passers and non-board passers. The respondents of this study were composed of board passers and non-board passers selected through purposive sampling. An independent-samples t-test was used to compare the Grade Weighted Average and the Comprehensive Examination result of board passers and non-board passers. The study concluded that the Grade Weighted Average of board passers and non-board passers shows no significant difference, although the averages showed minimal variation. The Comprehensive Examination result of board passers and non-board passers, however, revealed a significant difference. The comprehensive examination tests an individual's overall knowledge, and more proficient examinees are likely to obtain higher scores. The result of the comprehensive examination had an impact on the passing performance in the board examination. Keywords: board passers, comprehensive examination result, grade weighted average, non board passers
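A minimal Python sketch of the independent-samples t-test used in the comparison above, assuming SciPy; the scores are invented placeholders, not the study's data.

```python
from scipy import stats

# Hypothetical comprehensive examination scores for the two groups.
board_passers = [82, 88, 79, 91, 85, 84, 90]
non_board_passers = [70, 75, 68, 77, 72, 74, 69]

t_stat, p_value = stats.ttest_ind(board_passers, non_board_passers, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # p < 0.05 would indicate a significant difference
```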
Procedia PDF Downloads 192
5576 Electromagnetically-Vibrated Solid-Phase Microextraction for Organic Compounds
Authors: Soo Hyung Park, Seong Beom Kim, Wontae Lee, Jin Chul Joo, Jungmin Lee, Jongsoo Choi
Abstract:
A newly-developed electromagnetically vibrated solid-phase microextraction (SPME) device for extracting nonpolar organic compounds from aqueous matrices was evaluated in terms of sorption equilibrium time, precision, and detection level relative to three other more conventional extraction techniques involving SPME, viz., static, magnetic stirring, and fiber insertion/retraction. Electromagnetic vibration at 300~420 cycles/s was found to be the most efficient extraction technique in terms of reducing sorption equilibrium time and enhancing both precision and linearity. The increased efficiency for electromagnetic vibration was attributed to a greater reduction in the thickness of the stagnant-water layer that facilitated more rapid mass transport from the aqueous matrix to the SPME fiber. Electromagnetic vibration less than 500 cycles/s also did not detrimentally impact the sustainability of the extracting performance of the SPME fiber. Therefore, electromagnetically vibrated SPME may be a more powerful tool for rapid sampling and solvent-free sample preparation relative to other more conventional extraction techniques used with SPME.Keywords: electromagnetic vibration, organic compounds, precision, solid-phase microextraction (SPME), sorption equilibrium time
Procedia PDF Downloads 256
5575 Introducing Global Navigation Satellite System Capabilities into IoT Field-Sensing Infrastructures for Advanced Precision Agriculture Services
Authors: Savvas Rogotis, Nikolaos Kalatzis, Stergios Dimou-Sakellariou, Nikolaos Marianos
Abstract:
As precision holds the key for the introduction of distinct benefits in agriculture (e.g., energy savings, reduced labor costs, optimal application of inputs, improved products, and yields), it steadily becomes evident that new initiatives should focus on rendering Precision Agriculture (PA) more accessible to the average farmer. PA leverages on technologies such as the Internet of Things (IoT), earth observation, robotics and positioning systems (e.g., the Global Navigation Satellite System – GNSS - as well as individual positioning systems like GPS, Glonass, Galileo) that allow: from simple data georeferencing to optimal navigation of agricultural machinery to even more complex tasks like Variable Rate Applications. An identified customer pain point is that, from one hand, typical triangulation-based positioning systems are not accurate enough (with errors up to several meters), while on the other hand, high precision positioning systems reaching centimeter-level accuracy, are very costly (up to thousands of euros). Within this paper, a Ground-Based Augmentation System (GBAS) is introduced, that can be adapted to any existing IoT field-sensing station infrastructure. The latter should cover a minimum set of requirements, and in particular, each station should operate as a fixed, obstruction-free towards the sky, energy supplying unit. Station augmentation will allow them to function in pairs with GNSS rovers following the differential GNSS base-rover paradigm. This constitutes a key innovation element for the proposed solution that encompasses differential GNSS capabilities into an IoT field-sensing infrastructure. Integrating this kind of information supports the provision of several additional PA beneficial services such as spatial mapping, route planning, and automatic field navigation of unmanned vehicles (UVs). Right at the heart of the designed system, there is a high-end GNSS toolkit with base-rover variants and Real-Time Kinematic (RTK) capabilities. The GNSS toolkit had to tackle all availability, performance, interfacing, and energy-related challenges that are faced for a real-time, low-power, and reliable in the field operation. Specifically, in terms of performance, preliminary findings exhibit a high rover positioning precision that can even reach less than 10-centimeters. As this precision is propagated to the full dataset collection, it enables tractors, UVs, Android-powered devices, and measuring units to deal with challenging real-world scenarios. The system is validated with the help of Gaiatrons, a mature network of agro-climatic telemetry stations with presence all over Greece and beyond ( > 60.000ha of agricultural land covered) that constitutes part of “gaiasense” (www.gaiasense.gr) smart farming (SF) solution. Gaiatrons constantly monitor atmospheric and soil parameters, thus, providing exact fit to operational requirements asked from modern SF infrastructures. Gaiatrons are ultra-low-cost, compact, and energy-autonomous stations with a modular design that enables the integration of advanced GNSS base station capabilities on top of them. A set of demanding pilot demonstrations has been initiated in Stimagka, Greece, an area with a diverse geomorphological landscape where grape cultivation is particularly popular. 
The pilot demonstrations are in the course of validating the preliminary system findings in the intended environment, tackling all technical challenges, and effectively highlighting the added value offered by the system in action. Keywords: GNSS, GBAS, precision agriculture, RTK, smart farming
Procedia PDF Downloads 117
5574 Investigation of Various Physical and Physiological Properties of Ethiopian Elite Men Distances Runners
Authors: Getaye Fisseha Gelaw
Abstract:
The purpose of this study was to investigate the key physical and physiological characteristics of 16 elite male Ethiopian national team distance runners, who have an average age of 28.1±4.3 years, height of 175.0±5.6 cm, weight of 59.1±3.9 kg, BMI of 19.6±1.5, and training age of 10.1±5.1 years. The average weekly distance is 196.3±13.8 km, the average 10,000 m time is 27:14±0.5 min:sec, the average half marathon time is 59:30±0.6 min:sec, and the average marathon time is 2 hr 03 min 39 sec±0.02. In addition, the average Cooper test (12-minute run test) distance is 4525.4±139.7 meters, and the average VO2max is 90.8±3.1 mL/kg/min. All athletes are high-profile and compete at the international level; according to the World Athletics ranking system in 2021, 56.3% of the 16 participants held platinum label status, while the remaining 43.7% held gold label status. The athletes completed an incremental treadmill test for the assessment of VO2peak, submaximal running, and lactate threshold, as well as a test during which they ran continuously at 21 km/h. The laboratory-determined VO2peak was 91.4±1.7 mL/kg/min, with an anaerobic threshold (AT) of 74.2±1.6 mL/kg/min, i.e., 81% of VO2max. The speed at the AT is 15.9±0.6 km/h at a treadmill gradient of 4.0%. The respiratory compensation point (RCP) was reached at 88.7±1.1 mL/kg/min, i.e., 97% of VO2max; at the RCP, the speed is 17.6±0.4 km/h at a gradient of 5.5%. At maximum effort, the speed is 19.5±1.5 km/h at a gradient of 6.0%. The data also suggest that elite Ethiopian distance runners have considerably higher VO2max values than those found in earlier research. Keywords: long-distance running, Ethiopians, VO2 max, world athletics, anthropometric
Procedia PDF Downloads 131
5573 A New Bound on the Average Information Ratio of Perfect Secret-Sharing Schemes for Access Structures Based on Bipartite Graphs of Larger Girth
Authors: Hui-Chuan Lu
Abstract:
In a perfect secret-sharing scheme, a dealer distributes a secret among a set of participants in such a way that only qualified subsets of participants can recover the secret and the joint share of the participants in any unqualified subset is statistically independent of the secret. The access structure of the scheme refers to the collection of all qualified subsets. In a graph-based access structures, each vertex of a graph G represents a participant and each edge of G represents a minimal qualified subset. The average information ratio of a perfect secret-sharing scheme realizing a given access structure is the ratio of the average length of the shares given to the participants to the length of the secret. The infimum of the average information ratio of all possible perfect secret-sharing schemes realizing an access structure is called the optimal average information ratio of that access structure. We study the optimal average information ratio of the access structures based on bipartite graphs. Based on some previous results, we give a bound on the optimal average information ratio for all bipartite graphs of girth at least six. This bound is the best possible for some classes of bipartite graphs using our approach.Keywords: secret-sharing scheme, average information ratio, star covering, deduction, core cluster
Procedia PDF Downloads 364
5572 Achievable Average Secrecy Rates over Bank of Parallel Independent Fading Channels with Friendly Jamming
Authors: Munnujahan Ara
Abstract:
In this paper, we investigate the effect of friendly jamming power allocation strategies on the achievable average secrecy rate over a bank of parallel fading wiretap channels. We investigate the achievable average secrecy rate in parallel fading wiretap channels subject to Rayleigh and Rician fading. The achievable average secrecy rate, due to the presence of a line-of-sight component in the jammer channel is also evaluated. Moreover, we study the detrimental effect of correlation across the parallel sub-channels, and evaluate the corresponding decrease in the achievable average secrecy rate for the various fading configurations. We also investigate the tradeoff between the transmission power and the jamming power for a fixed total power budget. Our results, which are applicable to current orthogonal frequency division multiplexing (OFDM) communications systems, shed further light on the achievable average secrecy rates over a bank of parallel fading channels in the presence of friendly jammers.Keywords: fading parallel channels, wire-tap channel, OFDM, secrecy capacity, power allocation
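To make the quantity under study concrete, the hedged Python sketch below estimates the average secrecy rate of a single Rayleigh-fading wiretap sub-channel by Monte Carlo, splitting a fixed power budget between transmission and friendly jamming that is assumed to degrade only the eavesdropper. All parameter values and modeling choices here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def avg_secrecy_rate(p_tx, p_jam, gain_main=1.0, gain_eve=1.0, noise=1.0,
                     n=200_000, seed=0):
    """Monte Carlo estimate of E[max(0, log2(1+SNR_main) - log2(1+SINR_eve))] under Rayleigh fading."""
    rng = np.random.default_rng(seed)
    h_main = rng.exponential(gain_main, n)   # |h|^2 of the legitimate channel
    h_eve = rng.exponential(gain_eve, n)     # |g|^2 of the eavesdropper channel
    h_jam = rng.exponential(1.0, n)          # jammer-to-eavesdropper channel
    snr_main = p_tx * h_main / noise         # friendly jamming assumed nulled at the receiver
    sinr_eve = p_tx * h_eve / (noise + p_jam * h_jam)
    return np.mean(np.maximum(0.0, np.log2(1 + snr_main) - np.log2(1 + sinr_eve)))

# Trade-off for a fixed total power budget split between data and jamming.
total = 10.0
for alpha in (1.0, 0.8, 0.6, 0.4):
    print(alpha, round(avg_secrecy_rate(alpha * total, (1 - alpha) * total), 3))
```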
Procedia PDF Downloads 514
5571 Investigation into Black Oxide Coating of 410 Grade Surgical Stainless Steel Using Alkaline Bath Treatment
Authors: K. K. Saju, A. R. Reghuraj
Abstract:
The high reflectance of surgical instruments under bright light hinders visual clarity during laparoscopic surgical procedures, leading to loss of precision and device control and creating strain and undesired difficulties for surgeons. The majority of surgical instruments are made of surgical-grade steel. Instruments with a non-reflective surface can enhance visual clarity during precision surgeries. A black oxide conversion coating has been successfully developed on 410 grade surgical stainless steel. The characteristics of the developed coating suggest the application of this technique for producing 410 grade surgical instruments with minimal reflectance. Keywords: conversion coatings, 410 stainless steel, black oxide, reflectance
Procedia PDF Downloads 458