Search results for: results validation
36726 Time-Domain Nuclear Magnetic Resonance as a Potential Analytical Tool to Assess Thermisation in Ewe's Milk
Authors: Alessandra Pardu, Elena Curti, Marco Caredda, Alessio Dedola, Margherita Addis, Massimo Pes, Antonio Pirisi, Tonina Roggio, Sergio Uzzau, Roberto Anedda
Abstract:
Some of the artisanal cheese products of European countries certified as PDO (Protected Designation of Origin) are made from raw milk. To recognise potential frauds (e.g. pasteurisation or thermisation of milk intended for raw milk cheese production), the alkaline phosphatase (ALP) assay is currently applied only for pasteurisation, although it is known to have notable limitations for assessing the ALP enzymatic state in non-bovine milk. Frauds considerably impact customers and certifying institutions, sometimes resulting in damage to the product image and potential economic losses for cheesemakers. Robust, validated, and unambiguous analytical methods are therefore needed to allow food control and safety authorities to recognise a potential fraud. In an attempt to develop a new reliable method to overcome this issue, Time-Domain Nuclear Magnetic Resonance (TD-NMR) spectroscopy was applied in the work described here. Daily fresh milk was analysed raw (680.00 µL in each 10-mm NMR glass tube) at least in triplicate. Thermally treated samples were also produced by placing each NMR tube of fresh raw milk in water pre-heated at temperatures from 68 °C up to 72 °C for up to 3 min, with continuous agitation, and quench-cooling it to 25 °C in a water-and-ice bath. Raw and thermally treated samples were analysed in terms of 1H T2 transverse relaxation times with a CPMG sequence (recycle delay: 6 s; interpulse spacing: 0.05 ms; 8000 data points), and quasi-continuous distributions of T2 relaxation times were obtained by CONTIN analysis. In line with previous data collected by high-field NMR techniques, a decrease in the spin-spin relaxation constant T2 of the predominant 1H population was detected in heat-treated milk as compared to raw milk. The decrease of the T2 parameter is consistent with changes in chemical exchange and diffusive phenomena, likely associated with changes in milk protein (i.e. whey proteins and casein) arrangement promoted by heat treatment. Furthermore, experimental data suggest that molecular alterations are strictly dependent on the specific heat-treatment conditions (temperature/time). Such molecular variations in milk, which are likely transferred to cheese during cheesemaking, highlight the possibility of applying the TD-NMR technique directly to cheese to develop a method for assessing a fraud related to the use of a milk thermal treatment in PDO raw milk cheese. Results suggest that TD-NMR assays might pave the way for the detailed characterisation of heat treatments of milk.
Keywords: cheese fraud, milk, pasteurisation, TD-NMR
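The CPMG/CONTIN step described above can be sketched as a regularized non-negative least-squares inversion of the echo-train decay. The grid, regularization weight, and helper name below are illustrative assumptions, not the authors' implementation, and the ridge term is a simplified stand-in for CONTIN's smoothness constraint.

```python
import numpy as np
from scipy.optimize import nnls

def t2_distribution(t, signal, t2_grid, alpha=0.01):
    """Quasi-continuous T2 distribution from a CPMG decay via
    Tikhonov-regularized NNLS (a simplified stand-in for CONTIN)."""
    # Kernel: column j is a mono-exponential decay exp(-t / T2_j)
    K = np.exp(-t[:, None] / t2_grid[None, :])
    # Append alpha * I rows so large amplitudes are penalized
    A = np.vstack([K, alpha * np.eye(len(t2_grid))])
    b = np.concatenate([signal, np.zeros(len(t2_grid))])
    amplitudes, _ = nnls(A, b)
    return amplitudes

# Synthetic echo train mimicking the acquisition: 8000 points, 0.05 ms spacing
t = np.arange(1, 8001) * 0.05e-3
signal = np.exp(-t / 0.100)            # dominant 1H pool with T2 = 100 ms
t2_grid = np.logspace(-3, 0, 64)       # candidate T2 values, 1 ms .. 1 s
amps = t2_distribution(t, signal, t2_grid)
t2_peak = t2_grid[np.argmax(amps)]     # should recover ~100 ms
```

A heat-induced shift of the dominant peak to shorter T2, as reported above, would appear directly in the recovered distribution.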
Procedia PDF Downloads 243
36725 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent
Authors: Faidon Kyriakou, William Dempster, David Nash
Abstract:
Abdominal Aortic Aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device Anaconda™ (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The Anaconda™ device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that despite its column stiffness is flexible enough to be used in very tortuous geometries. For the purposes of this study, a FE model of the device was built in Abaqus® (version 6.13-2) with a combination of beam, shell and surface elements; this choice of building blocks was made to keep the computational cost to a minimum. The validation of the numerical model was performed by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. Subsequently, the CAD model was 3D printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images on the frontal and sagittal planes of the experiment allowed the comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set as a limit commonly used by clinicians when working with simulations.
The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model allows confidence that the final position of the stent graft, when deployed in vivo, can be predicted with significant accuracy. Moreover, the numerical model ran in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure which combines thin scaffolding and fabric has been demonstrated to be feasible. Furthermore, the capability of predicting the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.
Keywords: AAA, efficiency, finite element analysis, stent deployment
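The overlap comparison against the 5 mm clinical bound can be expressed as a small helper. The array layout, ring coordinates, and function name below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ring_agreement(exp_pts, sim_pts, bound_mm=5.0):
    """Per-ring offsets between experimental deployment and FE prediction.
    Inputs are N x 2 arrays of (longitudinal, transverse) positions in mm."""
    d = np.abs(np.asarray(exp_pts) - np.asarray(sim_pts))
    return {
        "mean_mm": d.mean(axis=0),                 # mean offset per direction
        "max_mm": d.max(axis=0),                   # max offset per direction
        "within_bound": bool((d <= bound_mm).all()),
    }

# Hypothetical ring centres (mm) read off overlapped frontal-plane images
experimental = [[0.0, 0.0], [10.2, 1.1], [20.5, 2.3]]
simulated = [[0.4, 0.2], [10.0, 1.6], [21.8, 2.0]]
report = ring_agreement(experimental, simulated)
```

With all offsets under the 5 mm limit, `report["within_bound"]` confirms the acceptance criterion described above.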
Procedia PDF Downloads 193
36724 Computational Feasibility Study of a Torsional Wave Transducer for Tissue Stiffness Monitoring
Authors: Rafael Muñoz, Juan Melchor, Alicia Valera, Laura Peralta, Guillermo Rus
Abstract:
A torsional piezoelectric ultrasonic transducer design is proposed to measure shear moduli in soft tissue with direct access availability, using the shear wave elastography technique. The measurement of shear moduli of tissues is a challenging problem, mainly derived from a) the difficulty of isolating a pure shear wave, given the interference of multiple waves of different types (P, S, even guided) emitted by the transducers and reflected at geometric boundaries, and b) the highly attenuating nature of soft tissue. An immediate application overcoming these drawbacks is the measurement of changes in cervix stiffness to estimate the gestational age at delivery. The design has been optimized using a finite element model (FEM) and a semi-analytical estimator of the probability of detection (POD) to determine a suitable geometry, materials and generated waves. The technique is based on the measurement of the time of flight between emitter and receiver, from which the shear wave velocity is inferred. Current research is centered on prototype testing and validation. The geometric optimization of the transducer was able to annihilate the compressional wave emission, generating a quite pure torsional shear wave. Currently, the mechanical and electromagnetic coupling between emitter and receiver signals is the research focus. Conclusions: The design overcomes the main problems described. The almost pure torsional shear wave, along with the short time of flight, avoids the possibility of multiple wave interference. The short propagation distance reduces the effect of attenuation and allows the emission of very low energies, ensuring good biological safety for human use.
Keywords: cervix ripening, preterm birth, shear modulus, shear wave elastography, soft tissue, torsional wave
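The time-of-flight inference reduces to a short calculation: shear wave speed from the emitter-receiver distance and delay, then shear modulus via G = ρv² under a linear-elastic, near-incompressible-tissue assumption. The numbers below are illustrative, not measurements from the study.

```python
def shear_wave_speed(distance_m, time_of_flight_s):
    """Shear wave speed inferred from the emitter-receiver time of flight."""
    return distance_m / time_of_flight_s

def shear_modulus(distance_m, time_of_flight_s, density_kg_m3=1000.0):
    """G = rho * v**2, assuming linear elasticity and tissue density ~ water."""
    v = shear_wave_speed(distance_m, time_of_flight_s)
    return density_kg_m3 * v ** 2

# e.g. a 10 mm propagation path and a 2 ms delay -> v = 5 m/s, G = 25 kPa
g_pa = shear_modulus(0.010, 0.002)
```

Stiffening or softening of the cervix would show up directly as a change in the inferred time of flight and hence in G.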
Procedia PDF Downloads 347
36723 Real-Time Monitoring of Complex Multiphase Behavior in a High Pressure and High Temperature Microfluidic Chip
Authors: Renée M. Ripken, Johannes G. E. Gardeniers, Séverine Le Gac
Abstract:
Controlling the multiphase behavior of aqueous biomass mixtures is essential when working in the biomass conversion industry. Here, the vapor/liquid equilibria (VLE) of ethylene glycol, glycerol, and xylitol were studied for temperatures between 25 and 200 °C and pressures of 1 to 10 bar. These experiments were performed in a microfluidic platform, which exhibits excellent heat transfer properties, so that equilibrium is reached fast. Firstly, the saturated vapor pressure as a function of the temperature and the substrate mole fraction was calculated using AspenPlus with a Redlich-Kwong-Soave Boston-Mathias (RKS-BM) model. Secondly, we developed a high-pressure and high-temperature microfluidic set-up for experimental validation. Furthermore, we studied the multiphase flow pattern that occurs after the saturation temperature is reached. A glass-silicon microfluidic device containing a 0.4 or 0.2 m long meandering channel with a depth of 250 μm and a width of 250 or 500 μm was fabricated using standard microfabrication techniques. This device was placed in a dedicated chip-holder, which includes a ceramic heater on the silicon side. The temperature was controlled and monitored by three K-type thermocouples: two were located between the heater and the silicon substrate, one to set the temperature and one to measure it, and the third was placed in a 300 μm wide and 450 μm deep groove on the glass side to determine the heat loss over the silicon. An adjustable back-pressure regulator and a pressure meter were added to control and evaluate the pressure during the experiment. Aqueous biomass solutions (10 wt%) were pumped at a flow rate of 10 μL/min using a syringe pump, and the temperature was slowly increased until the theoretical saturation temperature for the pre-set pressure was reached. First, and surprisingly, a significant difference was observed between the theoretical saturation temperatures and the experimental results.
The experimental values were tens of degrees higher than the calculated ones and, in some cases, saturation could not be achieved. This discrepancy can be explained in different ways. Firstly, the pressure in the microchannel is locally higher due to both the thermal expansion of the liquid and the Laplace pressure that has to be overcome before a gas bubble can be formed. Secondly, superheating effects are likely to be present. Next, once saturation was reached, the flow pattern of the gas/liquid multiphase system was recorded. In our device, the point of nucleation can be controlled by taking advantage of the pressure drop across the channel and the accurate control of the temperature. Specifically, a higher temperature resulted in nucleation further upstream in the channel. As the void fraction increases downstream, the flow regime changes along the channel from bubbly flow to Taylor flow and later to annular flow. All three flow regimes were observed simultaneously. The findings of this study are key for the development and optimization of a microreactor for hydrogen production from biomass.
Keywords: biomass conversion, high pressure and high temperature microfluidics, multiphase, phase diagrams, superheating
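The Laplace-pressure argument above can be quantified with a one-line estimate. The surface tension value and confinement radius below are illustrative assumptions; a nucleating vapour embryo has a far smaller radius than the channel, so the overpressure it must overcome is correspondingly larger.

```python
def laplace_pressure_pa(surface_tension_n_per_m, bubble_radius_m):
    """Capillary overpressure of a spherical vapour bubble: dP = 2*sigma / r."""
    return 2.0 * surface_tension_n_per_m / bubble_radius_m

# Water near 150 C (sigma ~ 0.049 N/m), bubble spanning the 250-um-deep
# channel, i.e. radius of order 125 um
dp_channel = laplace_pressure_pa(0.049, 125e-6)   # ~ 784 Pa for a full-size bubble
# A sub-micron nucleation embryo faces a pressure ~100x larger
dp_embryo = laplace_pressure_pa(0.049, 1e-6)
```

This is why the locally required pressure, and hence the observed saturation temperature, exceeds the bulk-equilibrium prediction.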
Procedia PDF Downloads 218
36722 Caring for Children with Intellectual Disabilities in Malawi: Parental Psychological Experiences and Needs
Authors: Charles Masulani Mwale
Abstract:
Background: It is argued that 85% of children with disabilities live in resource-poor countries where few disability services are available. A majority of these children, including their parents, suffer considerably as a result of the disability and its associated stigmatization, leading to a marginalized life. These parents also experience more stress and mental health problems, such as depression, compared with families of typically developing children. There is little research from Africa addressing these issues, especially among parents of intellectually disabled children. WHO encourages research on the impact that children with disabilities have on their families, and on appropriate training and support for the families so that they can promote the child's development and well-being. This study investigated parenting experiences, mechanisms of coping with these challenges, and psychosocial needs while caring for children with intellectual disabilities in both rural and urban settings of Lilongwe and Mzuzu. Methods: This is part of a larger mixed-methods study aimed at developing a contextualized psychosocial intervention for parents of intellectually disabled children. 16 focus group discussions and four in-depth interviews were conducted with parents in the catchment areas of St John of God and Children of Blessings in Mzuzu and Lilongwe cities, respectively. Ethical clearance was obtained from COMREC. Data were stored in NVivo software for easy retrieval and management. All interviews were tape-recorded, transcribed, and translated into English. Note-taking was performed during all observations. Triangulation of data from the interviews, notes, and observations was performed for validation and reliability. Results: Caring for intellectually disabled children comes with a number of challenges.
Parents experience stigma and discrimination; fear for the child's future; feel self-blame and guilt; are coerced by neighbors to kill the disabled child; and fear violence by and to the child. Their needs include respite relief, improved access to disability services, education on disability management, and financial support. For their emotional stability, parents cope by sharing with others and turning to God, while others use poor coping mechanisms such as alcohol use. Discussion and Recommendation: Apart from neighbors' coercion to end the child's life, the findings of this study are similar to those of studies done in other countries, such as Kenya and Pakistan. It is recommended that parents be educated on disability, its causes, and its management to allay fears of the unknown. Community education is also crucial to promote community inclusiveness and correct prevailing myths associated with disability. Disability institutions ought to intensify individual as well as group counseling services for these parents. Further studies need to be done to design culturally appropriate and specific psychosocial interventions for the parents to promote their psychological resilience.
Keywords: psychological distress, intellectual disability, psychosocial interventions, mental health, psychological resilience, children
Procedia PDF Downloads 446
36721 Identifying Biomarker Response Patterns to Vitamin D Supplementation in Type 2 Diabetes Using K-means Clustering: A Meta-Analytic Approach to Glycemic and Lipid Profile Modulation
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Background and Aims: This meta-analysis aimed to evaluate the effect of vitamin D supplementation on key metabolic and cardiovascular parameters, such as glycated hemoglobin (HbA1C), fasting blood sugar (FBS), low-density lipoprotein (LDL), high-density lipoprotein (HDL), systolic blood pressure (SBP), and total vitamin D levels in patients with Type 2 diabetes mellitus (T2DM). Methods: A systematic search was performed across databases, including PubMed, Scopus, Embase, Web of Science, Cochrane Library, and ClinicalTrials.gov, from January 1990 to January 2024. A total of 4,177 relevant studies were initially identified. Using an unsupervised K-means clustering algorithm, publications were grouped based on common text features. Maximum entropy classification was then applied to filter studies that matched a pre-identified training set of 139 potentially relevant articles. These selected studies were manually screened for relevance. A parallel manual selection of all initially searched studies was conducted for validation. The final inclusion of studies was based on full-text evaluation, quality assessment, and meta-regression models using random effects. Sensitivity analysis and publication bias assessments were also performed to ensure robustness. Results: The unsupervised K-means clustering algorithm grouped the patients based on their responses to vitamin D supplementation, using key biomarkers such as HbA1C, FBS, LDL, HDL, SBP, and total vitamin D levels. Two primary clusters emerged: one representing patients who experienced significant improvements in these markers and another showing minimal or no change. Patients in the cluster associated with significant improvement exhibited lower HbA1C, FBS, and LDL levels after vitamin D supplementation, while HDL and total vitamin D levels increased. The analysis showed that vitamin D supplementation was particularly effective in reducing HbA1C, FBS, and LDL within this cluster. 
Furthermore, BMI, weight gain, and disease duration were identified as factors that influenced cluster assignment, with patients having lower BMI and shorter disease duration being more likely to belong to the improvement cluster. Conclusion: The findings of this machine learning-assisted meta-analysis confirm that vitamin D supplementation can significantly improve glycemic control and reduce the risk of cardiovascular complications in T2DM patients. The use of automated screening techniques streamlined the process, ensuring the comprehensive evaluation of a large body of evidence while maintaining the validity of traditional manual review processes.
Keywords: HbA1C, T2DM, SBP, FBS
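The clustering of patients by biomarker response can be illustrated with a minimal Lloyd's-algorithm implementation on synthetic response deltas (changes in HbA1C, FBS, LDL, HDL). The data, feature order, and helper are illustrative sketches, not the study's pipeline.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means (Lloyd's algorithm) grouping patients by their
    post-supplementation biomarker change vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each patient to the nearest centroid
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2), axis=1)
        # Recompute centroids, keeping the old one if a cluster empties
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Synthetic deltas [HbA1C %, FBS mg/dL, LDL mg/dL, HDL mg/dL]: an "improvement"
# group and a "minimal change" group, as in the two clusters described above
rng = np.random.default_rng(1)
improved = rng.normal([-1.0, -15.0, -10.0, 4.0], 0.2, size=(20, 4))
unchanged = rng.normal([0.0, 0.0, 0.0, 0.0], 0.2, size=(20, 4))
X = np.vstack([improved, unchanged])
labels, centers = kmeans(X, k=2)
```

On well-separated responses the two recovered clusters coincide with the improvement and no-change groups.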
Procedia PDF Downloads 17
36720 Nucleotide Based Validation of the Endangered Plant Diospyros mespiliformis (Ebenaceae) by Evaluating Short Sequence Region of Plastid rbcL Gene
Authors: Abdullah Alaklabi, Ibrahim A. Arif, Sameera O. Bafeel, Ahmad H. Alfarhan, Anis Ahamed, Jacob Thomas, Mohammad A. Bakir
Abstract:
Diospyros mespiliformis (Hochst. ex A.DC.; Ebenaceae) is a large deciduous medicinal plant. This plant species is currently listed as endangered in Saudi Arabia. Molecular identification of this plant species based on short sequence regions (571 and 664 bp) of the plastid rbcL (ribulose-1,5-bisphosphate carboxylase) gene was investigated in this study. The endangered plant specimens were collected from Al-Baha, Saudi Arabia (GPS coordinates: 19.8543987, 41.3059349). The phylogenetic tree inferred from the rbcL gene sequences showed that this species is very closely related to D. brandisiana. A close relationship was also observed among D. bejaudii, D. philippinensis and D. releyi (≥99.7% sequence homology). The partial rbcL gene sequence region (571 bp) amplified by the primer pair rbcLaF-rbcLaR failed to discriminate D. mespiliformis from the closely related species D. brandisiana. In contrast, the primer pair rbcL1F-rbcL724R yielded a longer amplicon, discriminated the species from D. brandisiana, and revealed nucleotide variations at 3 different sites (645G>T; 663A>C; 710C>G). Although D. mespiliformis (EU980712) and D. brandisiana (EU980656) are very closely related species (99.4%), the studied specimen showed 100% sequence homology with D. mespiliformis and 99.6% with D. brandisiana. The present findings show that the short sequence region (664 bp) of the plastid rbcL gene, amplified by the primer pair rbcL1F-rbcL724R, can be used for authenticating samples of D. mespiliformis and may help in the authentic identification and management of this medicinally valuable endangered plant species.
Keywords: Diospyros mespiliformis, endangered plant, identification, partial rbcL
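The homology percentages and variant notation above (e.g. 645G>T) reduce to simple per-site comparisons over aligned sequences. The toy sequences below are illustrative, not the GenBank records cited in the abstract.

```python
def percent_identity(a, b):
    """Percent identity of two aligned, equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

def variant_sites(ref, qry):
    """1-based differing positions in ref>alt notation, e.g. '645G>T'."""
    return [f"{i + 1}{r}>{q}"
            for i, (r, q) in enumerate(zip(ref, qry)) if r != q]

ref = "ATGGTCACCG"   # toy reference fragment
qry = "ATGTTCACCC"   # toy query fragment with two substitutions
sites = variant_sites(ref, qry)     # positions 4 and 10 differ
ident = percent_identity(ref, qry)  # 8 of 10 sites match -> 80.0
```

Applied to the full 664 bp amplicon, the same comparison yields the three diagnostic sites separating D. mespiliformis from D. brandisiana.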
Procedia PDF Downloads 433
36719 A Finite Element Analysis of Hexagonal Double-Arrowhead Auxetic Structure with Enhanced Energy Absorption Characteristics and Stiffness
Abstract:
Auxetic materials, an emerging class of artificially designed metamaterials, have attracted growing attention due to their promising negative Poisson's ratio behaviors and tunable properties. Conventional auxetic lattice structures, for which the deformation process is governed by a bending-dominated mechanism, face the limitation of poor mechanical performance in many potential engineering applications. Recently, both load-bearing and energy absorption capabilities have become crucial considerations in auxetic structure design. This study reports the finite element analysis of a class of hexagonal double-arrowhead auxetic structures with enhanced stiffness and energy absorption performance. The structure design was developed by extending the traditional double-arrowhead honeycomb to a hexagonal frame; the stretching-dominated deformation mechanism was determined according to Maxwell's stability criterion. The finite element (FE) models of 2D lattice structures, established with a stainless steel material, were analyzed in ABAQUS/Standard to predict the in-plane structural deformation mechanism, failure process, and compressive elastic properties. Based on the computational simulation, a parametric analysis was conducted to investigate the effect of the structural parameters on Poisson's ratio and mechanical properties. Geometrical optimization was then implemented to achieve the optimal Poisson's ratio for maximum specific energy absorption. In addition, the optimized 2D lattice structure was converted into a corresponding 3D geometric configuration using an orthogonal splicing method. The numerical results for the 2D and 3D structures under compressive quasi-static loading conditions were compared separately with the traditional double-arrowhead re-entrant honeycomb in terms of specific Young's moduli, Poisson's ratios, and specific energy absorption.
As a result, the energy absorption capability and stiffness are significantly reinforced over a wide range of Poisson's ratios compared to the traditional double-arrowhead re-entrant honeycomb. The auxetic behavior, energy absorption capability, and yield strength of the proposed structure are adjustable through different combinations of joint angle, strut thickness, and the length-width ratio of the representative unit cell. The numerical predictions in this study suggest the proposed hexagonal double-arrowhead concept could be a suitable candidate for energy absorption applications with a concurrent requirement for load-bearing capacity. For future research, experimental analysis is required to validate the numerical simulation.
Keywords: auxetic, energy absorption capacity, finite element analysis, negative Poisson's ratio, re-entrant hexagonal honeycomb
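The Maxwell stability check used above to classify the deformation mechanism is a simple count of bars and joints. This sketch uses the planar form M = b − 2j + 3; the triangle and square are textbook illustrations, not the paper's unit cell.

```python
def maxwell_number_2d(bars, joints):
    """Maxwell's number for a 2D pin-jointed frame: M = b - 2j + 3.
    M >= 0 suggests stretching-dominated behaviour; M < 0 indicates a
    mechanism, i.e. bending-dominated deformation once joints are rigid."""
    return bars - 2 * joints + 3

tri = maxwell_number_2d(3, 3)   # triangle: M = 0, just-rigid (stretching-dominated)
sq = maxwell_number_2d(4, 4)    # square: M = -1, under-constrained (bending-dominated)
```

Applying the same count to a candidate unit cell tells the designer, before any FE run, whether stiffness will come from strut stretching or strut bending.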
Procedia PDF Downloads 88
36718 Detecting Indigenous Languages: A System for Maya Text Profiling and Machine Learning Classification Techniques
Authors: Alejandro Molina-Villegas, Silvia Fernández-Sabido, Eduardo Mendoza-Vargas, Fátima Miranda-Pestaña
Abstract:
The automatic detection of indigenous languages in digital texts is essential to promote their inclusion in digital media. Underrepresented languages, such as Maya, are often excluded from language detection tools like Google’s language-detection library, LANGDETECT. This study addresses these limitations by developing a hybrid language detection solution that accurately distinguishes Maya (YUA) from Spanish (ES). Two strategies are employed: the first focuses on creating a profile for the Maya language within the LANGDETECT library, while the second involves training a Naive Bayes classification model with two categories, YUA and ES. The process includes comprehensive data preprocessing steps, such as cleaning, normalization, tokenization, and n-gram counting, applied to text samples collected from various sources, including articles from La Jornada Maya, a major newspaper in Mexico and the only media outlet that includes a Maya section. After the training phase, a portion of the data is used to create the YUA profile within LANGDETECT, which achieves an accuracy rate above 95% in identifying the Maya language during testing. Additionally, the Naive Bayes classifier, trained and tested on the same database, achieves an accuracy close to 98% in distinguishing between Maya and Spanish, with further validation through F1 score, recall, and logarithmic scoring, without signs of overfitting. This strategy, which combines the LANGDETECT profile with a Naive Bayes model, highlights an adaptable framework that can be extended to other underrepresented languages in future research. This fills a gap in Natural Language Processing and supports the preservation and revitalization of these languages.
Keywords: indigenous languages, language detection, Maya language, Naive Bayes classifier, natural language processing, low-resource languages
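The Naive Bayes strand of the approach can be sketched as a minimal character-trigram classifier. The toy training phrases below are illustrative stand-ins for the La Jornada Maya corpus, and this sketch omits the LANGDETECT profiling strand entirely.

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Character n-grams with boundary padding, as in the preprocessing step."""
    t = f" {text.lower()} "
    return [t[i:i + n] for i in range(len(t) - n + 1)]

class NgramNB:
    """Minimal character n-gram Naive Bayes for YUA/ES discrimination
    (a sketch, not the study's implementation)."""
    def fit(self, texts, labels):
        self.counts, self.totals, self.vocab = {}, {}, set()
        for txt, y in zip(texts, labels):
            grams = char_ngrams(txt)
            self.counts.setdefault(y, Counter()).update(grams)
            self.totals[y] = self.totals.get(y, 0) + len(grams)
            self.vocab.update(grams)
        return self
    def predict(self, text):
        # Laplace-smoothed log-likelihood per class; highest wins
        scores = {y: sum(math.log((c[g] + 1) / (self.totals[y] + len(self.vocab)))
                         for g in char_ngrams(text))
                  for y, c in self.counts.items()}
        return max(scores, key=scores.get)

yua = ["bix a beel", "ma'alob k'iin", "yaan in bin", "in k'aaba'e Juan"]
es = ["buenos dias como estas", "hola que tal", "me llamo Juan", "vamos a casa"]
model = NgramNB().fit(yua + es, ["YUA"] * 4 + ["ES"] * 4)
pred_yua = model.predict("ma'alob k'iin")
pred_es = model.predict("buenos dias")
```

Character n-grams work well here because Yucatec Maya orthography (glottal-stop apostrophes, doubled vowels) produces trigrams that rarely occur in Spanish.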
Procedia PDF Downloads 18
36717 EQMamba - Method Suggestion for Earthquake Detection and Phase Picking
Authors: Noga Bregman
Abstract:
Accurate and efficient earthquake detection and phase picking are crucial for seismic hazard assessment and emergency response. This study introduces EQMamba, a deep-learning method that combines the strengths of the Earthquake Transformer and the Mamba model for simultaneous earthquake detection and phase picking. EQMamba leverages the computational efficiency of Mamba layers to process longer seismic sequences while maintaining a manageable model size. The proposed architecture integrates convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM) networks, and Mamba blocks. The model employs an encoder composed of convolutional layers and max pooling operations, followed by residual CNN blocks for feature extraction. Mamba blocks are applied to the outputs of BiLSTM blocks, efficiently capturing long-range dependencies in seismic data. Separate decoders are used for earthquake detection, P-wave picking, and S-wave picking. We trained and evaluated EQMamba using a subset of the STEAD dataset, a comprehensive collection of labeled seismic waveforms. The model was trained using a weighted combination of binary cross-entropy loss functions for each task, with the Adam optimizer and a scheduled learning rate. Data augmentation techniques were employed to enhance the model's robustness. Performance comparisons were conducted between EQMamba and the EQTransformer over 20 epochs on this modest-sized STEAD subset. Results demonstrate that EQMamba achieves superior performance, with higher F1 scores and faster convergence compared to EQTransformer. EQMamba reached F1 scores of 0.8 by epoch 5 and maintained higher scores throughout training. The model also exhibited more stable validation performance, indicating good generalization capabilities. While both models showed lower accuracy in phase-picking tasks compared to detection, EQMamba's overall performance suggests significant potential for improving seismic data analysis. 
The rapid convergence and superior F1 scores of EQMamba, even on a modest-sized dataset, indicate promising scalability for larger datasets. This study contributes to the field of earthquake engineering by presenting a computationally efficient and accurate method for simultaneous earthquake detection and phase picking. Future work will focus on incorporating Mamba layers into the P and S pickers and further optimizing the architecture for seismic data specifics. The EQMamba method holds the potential for enhancing real-time earthquake monitoring systems and improving our understanding of seismic events.
Keywords: earthquake, detection, phase picking, s waves, p waves, transformer, deep learning, seismic waves
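The weighted multi-task objective described above (one binary cross-entropy term per decoder head) can be sketched directly. The head names, weights, and values below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def weighted_multitask_bce(preds, targets, weights):
    """Weighted sum of per-head binary cross-entropy losses over the three
    decoder outputs (detection, P-pick, S-pick) -- a sketch of the training
    objective, not the exact EQMamba implementation."""
    total = 0.0
    for head, w in weights.items():
        p = np.clip(np.asarray(preds[head], dtype=float), 1e-7, 1 - 1e-7)
        t = np.asarray(targets[head], dtype=float)
        bce = -np.mean(t * np.log(p) + (1 - t) * np.log(1 - p))
        total += w * float(bce)
    return total

preds = {"det": [0.9, 0.1], "p": [0.8, 0.2], "s": [0.7, 0.3]}
targets = {"det": [1, 0], "p": [1, 0], "s": [1, 0]}
loss = weighted_multitask_bce(preds, targets, {"det": 0.5, "p": 0.25, "s": 0.25})
```

Weighting the detection head more heavily reflects the observation that picking is the harder of the two tasks.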
Procedia PDF Downloads 58
36716 Detection of Mustard Traces in Food by an Official Food Safety Laboratory
Authors: Clara Tramuta, Lucia Decastelli, Elisa Barcucci, Sandra Fragassi, Samantha Lupi, Enrico Arletti, Melissa Bizzarri, Daniela Manila Bianchi
Abstract:
Introduction: Food allergies affect, in the Western world, 2% of adults and up to 8% of children. The protection of allergic consumers is guaranteed, in Europe, by Regulation (EU) No 1169/2011 of the European Parliament, which governs the consumer's right to information and identifies 14 food allergens that must be indicated on the label. Among these, mustard is a popular spice added to enhance the flavour and taste of foods. It is frequently present as an ingredient in spice blends, marinades, salad dressings, sausages, and other products. Hypersensitivity to mustard is a public health problem, since the ingestion of even low amounts can trigger severe allergic reactions. In order to protect the allergic consumer, high-performance methods are required for the detection of allergenic ingredients. Food safety laboratories rely on validated methods that detect hidden allergens in food to ensure the safety and health of allergic consumers. Here we present the test results for the validation and accreditation of a real-time PCR assay (RT-PCR: SPECIALfinder MC Mustard, Generon) for the detection of mustard traces in food. Materials and Methods. The method was tested on five classes of food matrices: bakery and pastry products (chocolate cookies), meats (ragù), ready-to-eat (mixed salad), dairy products (yogurt), and grains and milling products (rice and barley flour). Blank samples were spiked with mustard (Sinapis alba) samples, lyophilized and stored at -18 °C, at a concentration of 1000 ppm. Serial dilutions were then prepared to a final concentration of 0.5 ppm, using the DNA extracted by ION Force FAST (Generon) from the blank samples. The real-time PCR reaction was performed with RT-PCR SPECIALfinder MC Mustard (Generon), using a CFX96 System (Bio-Rad). Results. Real-time PCR showed a limit of detection (LOD) of 0.5 ppm in grains and milling products, ready-to-eat, meats, bakery and pastry products, and dairy products (Ct range 25-34).
To determine the exclusivity parameter of the method, the ragù matrix was contaminated with Prunus dulcis (almond), Arachis hypogaea (peanut), Glycine max (soy), Apium graveolens (celery), Allium cepa (onion), Pisum sativum (pea), Daucus carota (carrot), and Theobroma cacao (cocoa), and no cross-reactions were observed. Discussion. In terms of sensitivity, real-time PCR confirmed, even in complex matrices, a LOD of 0.5 ppm in the five classes of food matrices tested; these values are compatible with the current regulatory situation, which does not establish, at the international level, a quantitative criterion for the allergen considered in this study. The real-time PCR SPECIALfinder kit for the detection of mustard proved to be easy to use and was particularly appreciated for its rapid response times, considering that the amplification and detection phase takes less than 50 minutes. Method accuracy was rated satisfactory for sensitivity (100%) and specificity (100%), and the method was fully validated and accredited. It was found adequate for the needs of the laboratory, as it met the purpose for which it was applied. This study was funded in part within a project of the Italian Ministry of Health (IZS PLV 02/19 RC).
Keywords: allergens, food, mustard, real time PCR
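The reported Ct range relates to template amount roughly exponentially: at perfect amplification efficiency, each extra cycle to threshold implies half the starting template. The helper below is an illustrative aside under that idealized assumption, not part of the SPECIALfinder kit.

```python
def fold_difference(ct_low, ct_high, efficiency=2.0):
    """Approximate template fold-difference implied by two Ct values,
    assuming a constant per-cycle amplification factor (2.0 = perfect doubling)."""
    return efficiency ** (ct_high - ct_low)

# The observed Ct range of 25-34 spans roughly a 2**9 = 512-fold
# difference in amplifiable mustard DNA across matrices and dilutions
fold = fold_difference(25, 34)
```

Real assays amplify slightly below perfect efficiency, so the true fold-difference would be somewhat smaller.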
Procedia PDF Downloads 168
36715 Experimental and Modal Determination of the State-Space Model Parameters of a Uni-Axial Shaker System for Virtual Vibration Testing
Authors: Jonathan Martino, Kristof Harri
Abstract:
In some cases, the increase in computing resources makes simulation methods more affordable. The increase in processing speed also allows real-time analysis, or even faster test analysis, offering a real tool for test prediction and design process optimization. Vibration tests are no exception to this trend. So-called 'Virtual Vibration Testing' offers solutions, among others, to study the influence of specific loads, to better anticipate the boundary conditions between the exciter and the structure under test, and to study the influence of small changes in the structure under test. This article will first present the modeling of a virtual vibration test, with a main focus on the shaker model, and will afterwards present the experimental determination of its parameters. The classical way of modeling a shaker is to consider the shaker as a simple mechanical structure augmented by an electrical circuit that makes the shaker move. The shaker is modeled as a two- or three-degrees-of-freedom lumped-parameter model, while the electrical circuit takes the coil impedance and the dynamic back-electromotive force into account. The establishment of the equations of this model, describing the dynamics of the shaker, is presented in this article and is strongly related to the internal physical quantities of the shaker. Those quantities will be reduced to global parameters, which will be estimated through experiments. Different experiments will be carried out in order to design an easy and practical method for the identification of the shaker parameters, leading to a fully functional shaker model. An experimental modal analysis will also be carried out to extract the modal parameters of the shaker and to combine them with the electrical measurements. Finally, this article will conclude with an experimental validation of the model.
Keywords: lumped parameters model, shaker modeling, shaker parameters, state-space, virtual vibration
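A one-degree-of-freedom version of the described electromechanical model makes the state-space structure explicit. The parameter values below are illustrative assumptions; the article's model adds one or two more mechanical degrees of freedom.

```python
import numpy as np

def shaker_state_space(m, c, k, R, L, Bl):
    """State-space matrices (A, B) of a one-DOF electrodynamic shaker sketch.
    States [x, x_dot, i], input u = coil voltage V:
        m*x'' + c*x' + k*x = Bl*i        (mechanical side, Lorentz force)
        L*i'  + R*i + Bl*x' = V          (electrical side, Bl*x' = back-EMF)
    """
    A = np.array([[0.0, 1.0, 0.0],
                  [-k / m, -c / m, Bl / m],
                  [0.0, -Bl / L, -R / L]])
    B = np.array([0.0, 0.0, 1.0 / L])
    return A, B

# Illustrative global parameters: 1 kg armature, 10 N.s/m damping,
# 1e4 N/m suspension, 2 ohm coil, 1 mH inductance, Bl = 20 N/A
A, B = shaker_state_space(1.0, 10.0, 1.0e4, 2.0, 1.0e-3, 20.0)
# DC steady state under 10 V: solve A @ x_ss = -B * V
x_ss = np.linalg.solve(A, -B * 10.0)   # expect i = V/R = 5 A, x = Bl*i/k = 0.01 m
```

Fitting m, c, k, R, L, and Bl from measured frequency responses is exactly the parameter-identification task the article addresses.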
Procedia PDF Downloads 271
36714 Hyper Parameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification
Authors: Oumaima Khlifati, Khadija Baba
Abstract:
Pavement distress is the main factor responsible for the deterioration of road structure durability, vehicle damage, and reduced driver comfort. Transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. The auscultation of pavement distress has traditionally been based on manual surveys, which are extremely time-consuming, labor-intensive, and require domain expertise. Therefore, automatic distress detection is needed to reduce the cost of manual inspection and avoid more serious damage by implementing the appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification based on a Deep Convolutional Neural Network (DCNN). In this study, the types of pavement distress are classified as transverse or longitudinal cracking, alligator cracking, pothole, and intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested as a multi-label classification task. In addition, to obtain the highest accuracy for our model, we adjust structural hyperparameters such as the number of convolution and max-pooling layers, the number and size of filters, the loss function, the activation functions, and the optimizer, as well as fine-tuning hyperparameters that include batch size and learning rate. The optimization of the model is executed by checking all feasible combinations and selecting the best-performing one. After the model is optimized, performance metrics are calculated, describing the training and validation accuracies, precision, recall, and F1 score.
Keywords: distress pavement, hyperparameters, automatic classification, deep learning
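The "checking all feasible combinations" step described above is an exhaustive grid search over the hyperparameter space. The sketch below shows the mechanics of such a search in plain Python; the grid and the scoring function are placeholders, since a real run would train and validate the DCNN for every combination:

```python
import itertools

# Hypothetical hyperparameter grid; in the paper, each combination would
# be used to train and validate the DCNN, which is computationally costly.
grid = {
    "n_conv_blocks": [2, 3, 4],          # convolution + max-pooling blocks
    "n_filters":     [16, 32, 64],
    "filter_size":   [3, 5],
    "batch_size":    [16, 32],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def validation_score(params):
    """Placeholder for 'train the DCNN and return validation accuracy'."""
    # Toy score that peaks at 3 conv blocks and lr = 1e-3; it only stands
    # in for real training so the search loop has something to rank.
    return -abs(params["n_conv_blocks"] - 3) - abs(params["learning_rate"] - 1e-3)

def exhaustive_search(grid, score):
    """Evaluate every combination and keep the best-performing one."""
    keys = list(grid)
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        s = score(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

best, best_score = exhaustive_search(grid, validation_score)
```

Note that this grid already contains 3·3·2·2·3 = 108 combinations, which is why exhaustive search is only practical for small grids.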
Procedia PDF Downloads 94
36713 Alternating Expectation-Maximization Algorithm for a Bilinear Model in Isoform Quantification from RNA-Seq Data
Authors: Wenjiang Deng, Tian Mou, Yudi Pawitan, Trung Nghia Vu
Abstract:
Estimation of isoform-level gene expression from RNA-seq data depends on simplifying assumptions, such as a uniform read distribution, that are easily violated in real data. Such violations typically lead to biased estimates. Most existing methods provide one or more bias correction steps, based on biological considerations such as GC content, applied to single samples separately. The main problem is that not all biases are known. For example, new technologies such as single-cell RNA-seq (scRNA-seq) may introduce new sources of bias not seen in bulk-cell data. This study introduces a method called XAEM based on a more flexible and robust statistical model. Existing methods are essentially based on a linear model Xβ, where the design matrix X is known and derived based on the simplifying assumptions. In contrast, XAEM considers Xβ as a bilinear model with both X and β unknown. Joint estimation of X and β is made possible by simultaneous analysis of multi-sample RNA-seq data. Compared to existing methods, XAEM automatically performs empirical correction of potentially unknown biases. XAEM implements an alternating expectation-maximization (AEM) algorithm, alternating between estimation of X and β. For speed, XAEM utilizes quasi-mapping for read alignment, leading to a fast algorithm. Overall, XAEM performs favorably compared to other recent advanced methods. For simulated datasets, XAEM obtains higher accuracy for multiple-isoform genes, particularly for paralogs. In a differential-expression analysis of a real scRNA-seq dataset, XAEM achieves substantially greater rediscovery rates in an independent validation set.
Keywords: alternating EM algorithm, bias correction, bilinear model, gene expression, RNA-seq
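The alternating estimation scheme can be illustrated on a deliberately simplified bilinear model. In this toy rank-one version (not the XAEM implementation), Y[i][j] ≈ x[i]·b[j] with both factors unknown; fixing one factor makes the problem linear in the other, so two least-squares updates are alternated, and the inherent scale ambiguity is resolved by normalising x:

```python
def als_rank1(Y, n_iter=50):
    """Alternating least squares for Y[i][j] ≈ x[i] * b[j].
    A toy illustration of alternating estimation in a bilinear model
    with both factors unknown; not the XAEM algorithm itself."""
    n, m = len(Y), len(Y[0])
    x = [1.0] * n
    b = [1.0] * m
    for _ in range(n_iter):
        # Update b with x fixed: b[j] = sum_i Y[i][j] x[i] / sum_i x[i]^2
        sx2 = sum(v * v for v in x)
        b = [sum(Y[i][j] * x[i] for i in range(n)) / sx2 for j in range(m)]
        # Update x with b fixed (symmetric least-squares step).
        sb2 = sum(v * v for v in b)
        x = [sum(Y[i][j] * b[j] for j in range(m)) / sb2 for i in range(n)]
        # Fix the scale: normalise x so max|x| = 1; scale moves into b.
        scale = max(abs(v) for v in x) or 1.0
        x = [v / scale for v in x]
        b = [v * scale for v in b]
    return x, b

# Noise-free example generated from x_true = [1, 2, 3], b_true = [4, 5].
x_true, b_true = [1.0, 2.0, 3.0], [4.0, 5.0]
Y = [[xi * bj for bj in b_true] for xi in x_true]
x_hat, b_hat = als_rank1(Y)
```

On exact rank-one data the product x_hat[i]·b_hat[j] reproduces Y up to floating-point error; only the product is identified, which is exactly the ambiguity the normalisation step resolves.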
Procedia PDF Downloads 143
36712 Linking Information Systems Capabilities for Service Quality: The Role of Customer Connection and Environmental Dynamism
Authors: Teng Teng, Christos Tsinopoulos
Abstract:
The purpose of this research is to explore the link between IS capabilities, customer connection, and quality performance in the service context, with an investigation of the impact of firms' stable and dynamic environments. The application of Information Systems (IS) has had a significant effect on contemporary service operations. Firms invest in IS with the presumption that they will facilitate operations processes so that their performance will improve. Yet, IS resources by themselves are not sufficiently 'unique', and thus it is more useful and theoretically relevant to focus on the processes they affect. One such organisational process, which has attracted a lot of research attention from supply chain management scholars, is the integration of customer connection, where IS-enabled customer connection enhances communication and contact processes; with such integration of customer resources comes greater success for the firm in its ability to develop a good understanding of customer needs and set accurate customer expectations. Nevertheless, prior studies on IS capabilities have focused on either one specific type of technology or operationalised it as a highly aggregated concept. Moreover, although conceptual frameworks have shown that customer integration is valuable in service provision, there is much to learn about the practices of integrating customer resources. In this research, IS capabilities have been broken down into three dimensions based on the framework of Wade and Hulland: IT for supply chain activities (ITSCA), flexible IT infrastructure (ITINF), and IT operations shared knowledge (ITOSK); the focus is on their impact on the operational performance of firms in services. With this background, this paper addresses the following questions: How do IS capabilities affect the integration of customer connection and service quality? What is the relationship between environmental dynamism and the relationship of customer connection and service quality?
A survey of 156 service establishments was conducted, and the data analysed to determine the role of customer connection in mediating the effects of IS capabilities on firms' service quality. Confirmatory factor analysis was used to check convergent validity, and there is a good model fit for the structural model. The moderating effect of environmental dynamism on the relationship between customer connection and service quality was analysed. Results show that ITSCA, ITINF, and ITOSK have a positive influence on the degree of integration of customer connection. In addition, customer connection is positively related to service quality; this relationship is further emphasised when firms work in a dynamic environment. This research takes a step towards quelling concerns about the business value of IS, contributing to the development and validation of the measurement of IS capabilities in the service operations context. Additionally, it adds to the emerging body of literature linking customer connection to the operational performance of service firms. Managers of service firms should consider the strength of the mediating role of customer connection when investing in IT-related technologies and policies. In particular, service firms developing IS capabilities should simultaneously implement processes that encourage supply chain integration.
Keywords: customer connection, environmental dynamism, information systems capabilities, service quality, service supply chain
Procedia PDF Downloads 140
36711 A Method for Multimedia User Interface Design for Mobile Learning
Authors: Shimaa Nagro, Russell Campion
Abstract:
Mobile devices are becoming ever more widely available, with growing functionality, and are increasingly used as an enabling technology to give students access to educational material anytime and anywhere. However, the design of educational material user interfaces for mobile devices is beset by many unresolved research issues, such as those arising from emphasising the information concepts and then mapping this information to appropriate media (modelling information, then mapping media effectively). This report describes a multimedia user interface design method for mobile learning. The method covers the specification of user requirements and information architecture, media selection to represent the information content, design for directing attention to important information, and interaction design to enhance user engagement, based on Human-Computer Interaction (HCI) design strategies. The method will be evaluated by three different case studies to prove that it is suitable for application to different areas: an application to teach major computer networking concepts; an application to deliver a history-based topic (after these case studies have been completed, the method will be revised to remove deficiencies and then used to develop a third case study); and an application to teach mathematical principles. At that point, the method will again be revised into its final format. A usability evaluation will be carried out to measure the usefulness and effectiveness of the method. The investigation will combine qualitative and quantitative methods, including interviews and questionnaires for data collection, and the three case studies for validating the method. The researcher has successfully produced the method at this point, and it is now under validation and testing procedures.
From this point forward in the report, the researcher will refer to the method using the abbreviation MDMLM, which stands for Multimedia Design Mobile Learning Method.
Keywords: human-computer interaction, interface design, mobile learning, education
Procedia PDF Downloads 247
36710 Classifying Affective States in Virtual Reality Environments Using Physiological Signals
Authors: Apostolos Kalatzis, Ashish Teotia, Vishnunarayan Girishan Prabhu, Laura Stanley
Abstract:
Emotions are functional behaviors influenced by thoughts, stimuli, and other factors that induce neurophysiological changes in the human body. Understanding and classifying emotions is challenging, as individuals have varying perceptions of their environments. Therefore, it is crucial that there are publicly available databases and virtual reality (VR) based environments that have been scientifically validated for assessing emotion classification. This study utilized two commercially available VR applications (Guided Meditation VR™ and Richie's Plank Experience™) to induce an acute stress state and a calm state among participants. Subjective and objective measures were collected to create a validated multimodal dataset and classification scheme for affective state classification. Participants' subjective measures included the Self-Assessment Manikin, emotional cards, and a 9-point Visual Analogue Scale for perceived stress, collected using a Virtual Reality Assessment Tool developed by our team. Participants' objective measures included electrocardiogram and respiration data collected from 25 participants (15 M, 10 F, mean = 22.28 ± 4.92). The features extracted from these data included heart rate variability components and respiration rate, both of which were used to train two machine learning models. Subjective responses validated the efficacy of the VR applications in eliciting the two desired affective states; for classifying the affective states, a logistic regression (LR) model and a support vector machine (SVM) with a linear kernel were developed. The LR outperformed the SVM and achieved 93.8%, 96.2%, and 93.8% leave-one-subject-out cross-validation accuracy, precision, and recall, respectively. The VR assessment tool and the data collected in this study are publicly available to other researchers.
Keywords: affective computing, biosignals, machine learning, stress database
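The leave-one-subject-out protocol reported above can be sketched as follows. The split generator is the substance; the nearest-centroid classifier and the six-sample dataset are toy stand-ins for the LR/SVM models and the real physiological features:

```python
def loso_splits(subject_ids):
    """Leave-one-subject-out splits: each fold holds out every sample
    from one subject, so the model is never tested on a subject it saw."""
    subjects = sorted(set(subject_ids))
    for held_out in subjects:
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield train, test

def nearest_centroid_predict(X_train, y_train, x):
    """Toy stand-in for the LR/SVM classifiers in the study."""
    centroids = {}
    for label in set(y_train):
        rows = [X_train[i] for i in range(len(X_train)) if y_train[i] == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda lab: dist(x, centroids[lab]))

# Illustrative data: (HRV feature, respiration feature), label 0=calm, 1=stress.
X = [[0.9, 12], [0.8, 13], [0.3, 22], [0.2, 24], [0.85, 11], [0.25, 23]]
y = [0, 0, 1, 1, 0, 1]
subjects = ["s1", "s1", "s1", "s2", "s2", "s2"]

correct = total = 0
for train, test in loso_splits(subjects):
    Xt = [X[i] for i in train]
    yt = [y[i] for i in train]
    for i in test:
        correct += nearest_centroid_predict(Xt, yt, X[i]) == y[i]
        total += 1
accuracy = correct / total
```

Grouping the splits by subject (rather than by sample, as in ordinary k-fold) is what prevents within-subject correlations in physiological signals from inflating the reported accuracy.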
Procedia PDF Downloads 144
36709 Establishment and Validation of Correlation Equations to Estimate Volumetric Oxygen Mass Transfer Coefficient (KLa) from Process Parameters in Stirred-Tank Bioreactors Using Response Surface Methodology
Authors: Jantakan Jullawateelert, Korakod Haonoo, Sutipong Sananseang, Sarun Torpaiboon, Thanunthon Bowornsakulwong, Lalintip Hocharoen
Abstract:
Process scale-up is essential for biological processes to increase production capacity from bench-scale bioreactors to either pilot or commercial production. Scale-up based on a constant volumetric oxygen mass transfer coefficient (KLa) is most often used, since oxygen supply is one of the key limiting factors for cell growth. However, estimating the KLa of culture vessels operated under different conditions is time-consuming, since KLa is considerably influenced by many factors. To overcome this issue, this study aimed to establish correlation equations between KLa and operating parameters in 0.5 L and 5 L bioreactors equipped with a pitched-blade impeller and a gas sparger. Temperature, gas flow rate, agitation speed, and impeller position were selected as process parameters, and the equations were created using response surface methodology (RSM) based on a central composite design (CCD). In addition, the effects of these parameters on KLa were also investigated. Based on RSM, second-order polynomial models for the 0.5 L and 5 L bioreactors were obtained with acceptable determination coefficients (R²) of 0.9736 and 0.9190, respectively. These models were validated, and experimental values showed differences of less than 10% from the predicted values. Moreover, RSM revealed that gas flow rate is the most significant parameter, while temperature and agitation speed were also found to greatly affect the KLa in both bioreactors. Nevertheless, impeller position was shown to influence KLa only in the 5 L system. To sum up, these modeled correlations can be used to accurately predict KLa within the specified range of process parameters for two different sizes of bioreactors for further scale-up application.
Keywords: response surface methodology, scale-up, stirred-tank bioreactor, volumetric oxygen mass transfer coefficient
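A second-order polynomial response-surface model of the kind fitted here has the general form KLa = b0 + Σ bᵢxᵢ + Σ bᵢᵢxᵢ² + Σ bᵢⱼxᵢxⱼ. The sketch below fits such a model by ordinary least squares on synthetic CCD-style points for two coded factors (not the authors' data) and computes R² from the residuals:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def quadratic_design(x1, x2):
    """Row of the second-order RSM design matrix for two factors."""
    return [1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2]

def fit_rsm(points, y):
    """Least-squares fit via the normal equations, plus R-squared."""
    X = [quadratic_design(p[0], p[1]) for p in points]
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    beta = solve(XtX, Xty)
    y_hat = [sum(b * v for b, v in zip(beta, row)) for row in X]
    y_bar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    return beta, 1.0 - ss_res / ss_tot

# Synthetic CCD-style points in coded units (x1 = gas flow, x2 = agitation);
# the response is generated from a known quadratic, so the fit is exact.
pts = [(-1, -1), (1, -1), (-1, 1), (1, 1), (0, 0),
       (1.41, 0), (-1.41, 0), (0, 1.41), (0, -1.41)]
true = lambda x1, x2: 10 + 3 * x1 + 2 * x2 - 1.5 * x1 * x1 + 0.5 * x1 * x2
y = [true(a, b) for a, b in pts]
beta, r2 = fit_rsm(pts, y)
```

With real noisy KLa measurements, R² would fall below 1, as in the reported values of 0.9736 and 0.9190.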
Procedia PDF Downloads 209
36708 Potential Effects of Climate Change on Streamflow, Based on the Occurrence of Severe Floods in Kelantan, East Coasts of Peninsular Malaysia River Basin
Authors: Muhd. Barzani Gasim, Mohd. Ekhwan Toriman, Mohd. Khairul Amri Kamarudin, Azman Azid, Siti Humaira Haron, Muhammad Hafiz Md. Saad
Abstract:
Malaysia is a country in Southeast Asia that is constantly exposed to flooding and landslides. These disasters have caused problems such as loss of property, loss of life, and discomfort for the people involved. The problem occurs as a result of climate change, which leads to increased streamflow rates through the disruption of regional hydrological cycles. The aim of the study is to determine the hydrologic processes on the east coast of Peninsular Malaysia, especially in the Kelantan Basin, parameterized to account for the spatial and temporal variability of basin characteristics and their responses to climate variability. For hydrological modeling of the basin, the Soil and Water Assessment Tool (SWAT) model is applied using basin characteristics such as relief, soil type, and land use, together with historical daily time series of climate and river flow rates. The interpretation of Landsat land-use maps is applied in this study. By combining the SWAT and climate models, the system predicts an increase in precipitation under future climate scenarios, as well as increases in surface runoff, recharge, and total water yield. As a result, this model has successfully supported the basin analysis, demonstrating good visual agreement of hydrographs and good estimates of the minimum and maximum flows and severe floods observed during the calibration and validation periods.
Keywords: east coasts of Peninsular Malaysia, Kelantan river basin, minimum and maximum flows, severe floods, SWAT model
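SWAT's surface-runoff component is based on the SCS curve-number method, in which a curve number CN (reflecting soil type, land use, and moisture condition) sets the retention parameter, and daily precipitation is converted to runoff. A minimal sketch of that relationship, with a hypothetical CN value:

```python
def scs_runoff(p_mm, cn):
    """SCS curve-number runoff (metric form), the method underlying
    SWAT's daily surface-runoff estimate.
    p_mm: daily precipitation [mm]; cn: curve number (soil/land use)."""
    s = 25400.0 / cn - 254.0          # potential maximum retention [mm]
    ia = 0.2 * s                      # initial abstraction
    if p_mm <= ia:
        return 0.0                    # rainfall fully abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

# Hypothetical event: 50 mm of rain on a catchment with CN = 75.
q = scs_runoff(50.0, 75)
```

A land-use change that raises CN (e.g. urbanisation) increases runoff for the same rainfall, which is how the land-use maps mentioned above feed into the flood predictions.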
Procedia PDF Downloads 262
36707 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System
Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee
Abstract:
This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web; the value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage-ranking process, using a model trained on the MS MARCO dataset of 500K queries, to extract the most relevant text passage and thereby shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date; hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in 2016. Any such dataset proves to be inefficient with respect to questions that have time-varying answers. For illustration, consider the query 'Where will the next Olympics be?'. The gold answer for this query as given in the GNQ dataset is 'Tokyo'. Since the dataset was collected in 2016, and the next Olympics after 2016 were held in Tokyo in 2020, this is absolutely correct. But if the same question is asked in 2022, then the answer is 'Paris, 2024'.
Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric that uses the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted using an analysis-based approach from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the possibility of developing into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation
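The timestamp-based metric can be sketched as follows: gold answers carry validity intervals instead of a single static string, and a prediction scores a match if the gold answer valid at the evaluation timestamp appears among the top-n predictions. All data and field names below are an illustrative mock-up, not the authors' implementation:

```python
from datetime import date

# Gold answers with validity intervals instead of one static string.
gold = {
    "where will the next olympics be held?": [
        {"answer": "tokyo", "valid_from": date(2016, 1, 1), "valid_to": date(2021, 8, 8)},
        {"answer": "paris", "valid_from": date(2021, 8, 9), "valid_to": date(2024, 8, 11)},
    ],
}

def gold_answer_at(question, when):
    """Return the gold answer that is valid at the evaluation timestamp."""
    for entry in gold[question]:
        if entry["valid_from"] <= when <= entry["valid_to"]:
            return entry["answer"]
    return None

def top_n_correct(question, predictions, when, n=3):
    """Score 1 if the time-valid gold answer is among the top-n predictions."""
    target = gold_answer_at(question, when)
    return target is not None and target in [p.lower() for p in predictions[:n]]

# The same predictions are judged differently depending on *when* we evaluate.
preds = ["Paris", "Los Angeles", "Tokyo"]
q = "where will the next olympics be held?"
in_2019 = top_n_correct(q, preds, date(2019, 6, 1), n=1)   # gold then: tokyo
in_2022 = top_n_correct(q, preds, date(2022, 6, 1), n=1)   # gold then: paris
```

With a static gold answer, one of these two evaluations would necessarily be scored wrong; the validity intervals are what make both judgments correct.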
Procedia PDF Downloads 101
36706 Ground Surface Temperature History Prediction Using Long-Short Term Memory Neural Network Architecture
Authors: Venkat S. Somayajula
Abstract:
Ground surface temperature history prediction models play a vital role in determining standards for international nuclear waste management. International standards for borehole-based nuclear waste disposal require paleoclimate cycle predictions on the scale of a million years forward for the site of waste disposal. This research focuses on developing a paleoclimate cycle prediction model using a Bayesian long short-term memory (LSTM) neural architecture operated on accumulated borehole temperature history data. Bayesian models have previously been used for paleoclimate cycle prediction based on a Monte Carlo weighting method, but due to limitations in coupling the model with certain other prediction networks, Bayesian models in the past could not accommodate prediction cycles of over 1000 years. LSTM makes it straightforward to couple the developed model with other prediction networks. The paleoclimate cycle model developed using this process will be trained on existing borehole data and then coupled to surface temperature history prediction networks, which give endpoints for backpropagation of the LSTM network and optimize the prediction cycle for larger prediction time scales. The trained LSTM will be tested on past data for validation and then propagated for forward prediction of temperatures at borehole locations. This research will be beneficial for studies pertaining to nuclear waste management, anthropological cycle predictions, and geophysical features.
Keywords: Bayesian long-short term memory neural network, borehole temperature, ground surface temperature history, paleoclimate cycle
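The LSTM architecture named above is built from cells that carry a cell state c and a hidden state h through input, forget, and output gates. The pure-Python sketch below implements one deterministic cell step over a toy temperature sequence (hypothetical sizes and random weights; the Bayesian treatment and any deep-learning framework are omitted):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM cell step.  W maps gate name -> weight matrix over the
    concatenated input [x; h_prev]; gates: i (input), f (forget),
    o (output), g (candidate cell update)."""
    z = x + h_prev                                   # list concatenation
    def affine(gate):
        return [sum(wr[k] * z[k] for k in range(len(z))) + b[gate][j]
                for j, wr in enumerate(W[gate])]
    i = [sigmoid(v) for v in affine("i")]
    f = [sigmoid(v) for v in affine("f")]
    o = [sigmoid(v) for v in affine("o")]
    g = [math.tanh(v) for v in affine("g")]
    c = [f[j] * c_prev[j] + i[j] * g[j] for j in range(len(c_prev))]
    h = [o[j] * math.tanh(c[j]) for j in range(len(c))]
    return h, c

# Hypothetical sizes: 1 input feature (temperature), hidden size 4;
# random weights stand in for the learned parameters of a trained model.
random.seed(0)
n_in, n_hid = 1, 4
W = {g: [[random.uniform(-0.5, 0.5) for _ in range(n_in + n_hid)]
         for _ in range(n_hid)] for g in "ifog"}
b = {g: [0.0] * n_hid for g in "ifog"}

h = [0.0] * n_hid
c = [0.0] * n_hid
for temp in [10.2, 10.5, 11.1, 10.9]:                # toy borehole sequence
    h, c = lstm_step([temp], h, c, W, b)
```

The gated cell state is what lets the network carry information across long sequences, which is the property that motivates its use for long prediction cycles here.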
Procedia PDF Downloads 130
36705 Vibration and Freeze-Thaw Cycling Tests on Fuel Cells for Automotive Applications
Authors: Gema M. Rodado, Jose M. Olavarrieta
Abstract:
Hydrogen fuel cell technologies have experienced a great boost in the last decades, significantly increasing the production of these devices for both stationary and portable (mainly automotive) applications. This growth is driven by two main factors: environmental pollution and energy shortage. A fuel cell is an electrochemical device that converts chemical energy directly into electricity, using hydrogen and oxygen gases as reactants and producing water and heat as byproducts of the reaction. Fuel cells, specifically those based on Proton Exchange Membrane (PEM) technology, are considered an alternative to internal combustion engines, mainly because of their near-zero emissions, high efficiency, and low operating temperatures (< 373 K). The introduction and use of fuel cells in the automotive market requires the development of standardized and validated procedures to test and evaluate their performance under different environmental conditions, including vibrations and freeze-thaw cycles. Such vibrations and extremely low or high temperatures can affect the physical integrity, or even the proper operation and performance, of a fuel cell stack placed in a vehicle in circulation or in different climatic conditions. The main objective of this work is the development and validation of vibration and freeze-thaw cycling test procedures for fuel cell stacks that can be used in a vehicle, in order to consolidate their safety, performance, and durability. In this context, different experimental tests were carried out at the facilities of the National Hydrogen Centre (CNH2). The experimental equipment used was: a vibration platform (shaker) for vibration test analysis on fuel cells in three axis directions with different vibration profiles; a walk-in climatic chamber to test the starting, operating, and stopping behavior of fuel cells under defined extreme conditions;
and a test station designed and developed by the CNH2 to test and characterize PEM fuel cell stacks up to 10 kWe. A 5 kWe PEM fuel cell stack in off-operation mode was used to carry out two independent experimental procedures. On the one hand, the fuel cell was subjected to a sinusoidal vibration test on the shaker in the three axis directions, defined by acceleration and amplitude in the frequency range of 7 to 200 Hz, for a total of three hours in each direction. On the other hand, the climatic chamber was used to simulate freeze-thaw cycles over a temperature range between 313 K and 243 K, with an average relative humidity of 50% and a recommended ramp-up and ramp-down rate of 1 K/min. The polarization curve and gas leakage rate were determined before and after the vibration and freeze-thaw tests at the fuel cell stack test station in order to evaluate the robustness of the stack. The results were very similar, which indicates that the tests did not affect the fuel cell stack's structure and performance. The proposed procedures were verified and can be used as a starting point for further tests with different fuel cells.
Keywords: climatic chamber, freeze-thaw cycles, PEM fuel cell, shaker, vibration tests
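The freeze-thaw cycling described above amounts to a triangular set-point profile between the two temperature extremes at the stated ramp rate. A small sketch of such a profile generator (dwell times and cycle count are illustrative assumptions, not values from the paper):

```python
def freeze_thaw_profile(t_high=313.0, t_low=243.0, ramp_rate=1.0,
                        dwell_min=30.0, n_cycles=2, step_min=10.0):
    """Generate a triangular freeze-thaw temperature set-point profile.
    ramp_rate in K/min; dwell_min is the hold time at each extreme.
    Dwell time and cycle count are illustrative assumptions."""
    ramp_min = (t_high - t_low) / ramp_rate       # 70 min for 313 K -> 243 K
    profile = []                                  # (time_min, temp_K) pairs
    t = 0.0
    def seg(start_temp, end_temp, duration):
        nonlocal t
        steps = int(duration / step_min)
        for k in range(steps + 1):
            frac = k / steps
            profile.append((t + k * step_min,
                            start_temp + frac * (end_temp - start_temp)))
        t += duration
    for _ in range(n_cycles):
        seg(t_high, t_low, ramp_min)    # ramp down at ramp_rate K/min
        seg(t_low, t_low, dwell_min)    # hold cold
        seg(t_low, t_high, ramp_min)    # ramp up at ramp_rate K/min
        seg(t_high, t_high, dwell_min)  # hold warm
    return profile

profile = freeze_thaw_profile()
temps = [temp for _, temp in profile]
```

A chamber controller would consume this set-point table; the humidity set-point (50% average here) would be scheduled alongside it in the same way.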
Procedia PDF Downloads 117
36704 Person-Centered Thinking as a Fundamental Approach to Improve Quality of Life
Authors: Christiane H. Kellner, Sarah Reker
Abstract:
The UN Convention on the Rights of Persons with Disabilities, which Germany has also ratified, postulates the necessity of user-centred design, especially when it comes to evaluating the individual needs and wishes of all citizens. Therefore, a multidimensional approach is required. Based on this insight, the structure of the town-like centre in Schönbrunn, a large residential complex and service provider for persons with disabilities on the outskirts of Munich, will be remodelled to open up the community to all people as well as to transform social space. This strategy should lead to more equal opportunities and open the way for a much more diverse community. The research project 'Index for participation development and quality of life for persons with disabilities' (TeLe-Index, 2014-2016), which is anchored at the Technische Universität München and at the Franziskuswerk Schönbrunn, supports this transformation process, called 'Vision 2030'. In this context, we have provided academic supervision and support for three projects: the construction of a new school, inclusive housing for children and teenagers with disabilities, and the professionalization of employees using person-centred planning. Since we cannot present all the issues of the umbrella project within the conference framework, we will focus on one sub-project in more depth, namely 'The Person-Centred Think Tank' [Arbeitskreis Personenzentriertes Denken; PZD]. In the context of person-centred thinking (PCT), persons with disabilities are encouraged to (re)gain or retain control of their lives through the development of new choice options and the validation of individual lifestyles. PCT should thus foster and support both participation and quality of life. The project aims to establish PCT as a fundamental approach for both employees and persons with disabilities in the institution through in-house training for the staff and, subsequently, training for users.
Hence, for the academic support and supervision team, the questions arising from this venture can be summed up as follows: (1) has PCT already gained a foothold at the Franziskuswerk Schönbrunn? And (2) how does it affect the interaction with persons with disabilities, and how does it influence the latter's everyday life? In accordance with the holistic approach described above, the target groups for this study are both the staff and the users of the institution. Initially, we planned to implement the group discussion method for both target groups. However, in the course of a pretest with persons with intellectual disabilities, it became clear that this type of interview, with hardly any external structuring, provided only limited feedback. In contrast, when the discussions were moderated, there was more interaction and dialogue between the interlocutors. Therefore, for this target group, we introduced structured group interviews. The insights we have obtained so far enable us to present the interim results of our evaluation. We analysed and evaluated the group interviews and discussions with the help of qualitative content analysis according to Mayring in order to obtain information about users' quality of life. We sorted the statements relating to quality of life obtained during the group interviews into three dimensions: subjective wellbeing, self-determination, and participation. Nevertheless, the majority of statements related to subjective wellbeing and self-determination. Thus, the limited feedback on participation in particular clearly demonstrates that the lives of most users do not take place beyond the confines of the institution. A number of statements highlighted the fact that PCT is anchored in the everyday interactions within the groups. However, the implementation and fostering of PCT on a broader level could not be detected and thus remain further aims of the project.
The additional interviews we have planned should validate the results obtained so far and open up new perspectives.
Keywords: person-centered thinking, research with persons with disabilities, residential complex and service provider, participation, self-determination
Procedia PDF Downloads 323
36703 A Web and Cloud-Based Measurement System Analysis Tool for the Automotive Industry
Authors: C. A. Barros, Ana P. Barroso
Abstract:
Any industrial company needs to determine the amount of variation that exists within its measurement process and guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability, reproducibility, and stability. This issue is critical for automotive industry suppliers, who are required to be certified to the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of the mandatory tools. Frequently, the measurement systems in companies are not connected to the equipment and do not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to assure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all the statistical tests proposed in the MSA-4 reference manual from AIAG, as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations in the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented.
Firstly, a benchmarking analysis of the current competitors and commercial solutions linked to MSA was performed, with respect to the Industry 4.0 paradigm. Next, an analysis of the size of the target market for the MSA tool was carried out. Afterwards, data flow and traceability requirements were analysed in order to implement an IoT data network that interconnects with the equipment, preferably wirelessly. The MSA web solution was designed under UI/UX principles, and an API in the Python language was developed to run the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to assure real-time management of the 'big data'. The main results of this R&D project are: the web and cloud-based MSA tool; the Python API; new algorithms for the market; and the UI/UX style guide of the tool. The proposed MSA tool adds value to the state of the art, as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry has triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical, and food industries are already validating it.
Keywords: automotive industry, Industry 4.0, Internet of Things, IATF 16949:2016, measurement system analysis
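At the heart of any MSA tool is a gauge repeatability-and-reproducibility (R&R) study: repeatability is the trial-to-trial scatter when one operator remeasures one part, and reproducibility is the additional operator-to-operator variation. The sketch below estimates both as simple variance components from a crossed parts-by-operators table of made-up data; it is a simplified illustration, not the full AIAG MSA-4 ANOVA procedure:

```python
from statistics import mean, pvariance

# measurements[part][operator] = list of repeat trials (made-up data)
measurements = {
    "part1": {"opA": [10.1, 10.2, 10.1], "opB": [10.3, 10.4, 10.3]},
    "part2": {"opA": [12.0, 12.1, 12.0], "opB": [12.2, 12.2, 12.3]},
    "part3": {"opA": [9.5, 9.5, 9.6],   "opB": [9.7, 9.8, 9.7]},
}

# Repeatability (equipment variation): pooled within-cell variance,
# i.e. trial-to-trial scatter for a fixed part and operator.
cells = [trials for ops in measurements.values() for trials in ops.values()]
repeatability_var = mean(pvariance(c) for c in cells)

# Reproducibility (appraiser variation): variance of per-operator averages.
# A simplified estimate that ignores the usual small-sample corrections.
operators = sorted(next(iter(measurements.values())))
op_means = [mean(t for ops in measurements.values() for t in ops[op])
            for op in operators]
reproducibility_var = pvariance(op_means)

grr_var = repeatability_var + reproducibility_var   # total gauge R&R variance
```

In the full AIAG procedure, these components are compared against the process tolerance to judge whether the measurement system is acceptable.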
Procedia PDF Downloads 214
36702 Electrochemical Biosensor for the Detection of Botrytis spp. in Temperate Legume Crops
Authors: Marzia Bilkiss, Muhammad J. A. Shiddiky, Mostafa K. Masud, Prabhakaran Sambasivam, Ido Bar, Jeremy Brownlie, Rebecca Ford
Abstract:
Early diagnosis and quantitation of the causal pathogen species would greatly strengthen Integrated Disease Management (IDM), enabling accurate and timely disease control and preventing losses. This could significantly reduce costs to growers and reduce any flow-on impacts on the environment from excessive chemical spraying. The necrotrophic fungal disease botrytis grey mould, caused by Botrytis cinerea and Botrytis fabae, significantly reduces temperate legume yield and grain quality under favourable environmental conditions in Australia and worldwide. Several immunogenic and molecular probe-type protocols have been developed for their diagnosis, but these have varying levels of species specificity, sensitivity, and consequent usefulness within the paddock. To substantially improve speed, accuracy, and sensitivity, advanced nanoparticle-based biosensor approaches have been developed. For this, two sets of primers were designed, one each for Botrytis cinerea and Botrytis fabae, which showed species specificity with an initial sensitivity of two genomic copies/µL in a pure fungal background using multiplexed quantitative PCR. During further validation, quantitative PCR detected 100 spores on artificially infected legume leaves. In parallel, an electrocatalytic assay was developed for both target fungal DNAs using functionalised magnetic nanoparticles. This assay was extremely sensitive, able to detect a single spore within a raw total plant nucleic acid extract background. We believe that the translation of this technology to the field will enable quantitative assessment of pathogen load for future accurate decision support in informed botrytis grey mould management.
Keywords: biosensor, botrytis grey mould, sensitive, species specific
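Quantitation by qPCR of the kind described rests on a standard curve relating quantification cycle (Cq) to log copy number. The sketch below shows the standard least-squares fit and its inversion; function names and the example numbers are illustrative, not the study's data.

```python
def fit_standard_curve(log10_copies, cq):
    """Least-squares fit of Cq = slope * log10(copies) + intercept."""
    n = len(cq)
    mx = sum(log10_copies) / n
    my = sum(cq) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, cq))
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    slope = sxy / sxx
    intercept = my - slope * mx
    efficiency = 10.0 ** (-1.0 / slope) - 1.0   # 1.0 corresponds to 100% efficiency
    return slope, intercept, efficiency

def copies_from_cq(cq, slope, intercept):
    """Invert the standard curve to estimate copy number in an unknown sample."""
    return 10.0 ** ((cq - intercept) / slope)
```

A slope near -3.32 corresponds to the ideal doubling of template each cycle; a claimed sensitivity of two genomic copies/µL implies the curve remains linear down to that dilution.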
Procedia PDF Downloads 174
36701 Classification Using WorldView-2 Imagery of Giant Panda Habitat in Wolong, Sichuan Province, China
Authors: Yunwei Tang, Linhai Jing, Hui Li, Qingjie Liu, Xiuxia Li, Qi Yan, Haifeng Ding
Abstract:
The giant panda (Ailuropoda melanoleuca) is an endangered species living mainly in central China, where bamboos are the main food source of wild giant pandas. Knowledge of the spatial distribution of bamboos is therefore important for identifying giant panda habitat. There have been ongoing studies mapping bamboos and other tree species using remote sensing. WorldView-2 (WV-2) is the first high-resolution commercial satellite with eight Multi-Spectral (MS) bands, and recent studies have demonstrated that WV-2 imagery has high potential for the classification of tree species. Advanced classification techniques are important for utilising high-spatial-resolution imagery. It is generally agreed that object-based image analysis is more desirable than pixel-based analysis for processing high-spatial-resolution remotely sensed data. Classifiers that use spatial information combined with spectral information are known as contextual classifiers, and it is suggested that they can achieve greater accuracy than non-contextual classifiers. Thus, spatial correlation can be incorporated into classifiers to improve classification results. The study area is located at Wuyipeng in Wolong, Sichuan Province. The complex environment makes information extraction difficult, since bamboos are sparsely distributed, mixed with brushes, and covered by other trees. Extensive fieldwork in Wuyipeng was carried out twice. The first campaign, on 11 June 2014, aimed at sampling feature locations for geometric correction and collecting training samples for classification. The second, on 11 September 2014, served to test the classification results. In this study, spectral separability analysis was first performed to select appropriate MS bands for classification. The reflectance analysis also provided information for expanding sample points when only a few were known. 
Then, a spatially weighted object-based k-nearest neighbour (k-NN) classifier was applied to the selected MS bands to identify seven land cover types (bamboo, conifer, broadleaf, mixed forest, brush, bare land, and shadow), accounting for spatial correlation within classes using geostatistical modelling. The spatially weighted k-NN method was compared with three alternatives: the traditional k-NN classifier, the Support Vector Machine (SVM), and the Classification and Regression Tree (CART). Field validation showed that the classification obtained with the spatially weighted k-NN method has the highest overall accuracy (77.61%) and Kappa coefficient (0.729); the producer’s and user’s accuracies reach 81.25% and 95.12% for the bamboo class, respectively, also higher than for the other methods. Photos of tree crowns were taken at sample locations using a fisheye camera so that canopy density could be estimated. Bamboo proved difficult to identify in areas with high canopy density (over 0.70); it could be extracted in areas with medium canopy density (0.2 to 0.7) and in sparse forest (canopy density below 0.2). In summary, this study explores the ability of WV-2 imagery for bamboo extraction in a mountainous region of Sichuan. The study successfully identified the bamboo distribution, providing supporting knowledge for assessing giant panda habitats.
Keywords: bamboo mapping, classification, geostatistics, k-NN, WorldView-2
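The overall accuracy, Kappa coefficient, and producer's/user's accuracies reported above all derive from a confusion matrix. A minimal sketch of the standard computation, assuming rows are reference labels and columns are predictions (the function name and the matrix in the example are illustrative):

```python
def accuracy_metrics(cm):
    """Overall accuracy, Cohen's kappa, and per-class producer's/user's
    accuracy from a confusion matrix (rows = reference, cols = predicted)."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    diag = sum(cm[i][i] for i in range(k))
    row_tot = [sum(row) for row in cm]                         # reference totals
    col_tot = [sum(cm[i][j] for i in range(k)) for j in range(k)]  # predicted totals
    oa = diag / n
    pe = sum(r * c for r, c in zip(row_tot, col_tot)) / n ** 2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    producers = [cm[i][i] / row_tot[i] for i in range(k)]       # omission side
    users = [cm[i][i] / col_tot[i] for i in range(k)]           # commission side
    return oa, kappa, producers, users
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside overall accuracy when class sizes are unbalanced, as with sparsely distributed bamboo.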
Procedia PDF Downloads 313
36700 Two-Dimensional Dynamic Motion Simulations of an F1 Rear Wing Flap
Authors: Chaitanya H. Acharya, Pavan Kumar P., Gopalakrishna Narayana
Abstract:
In the realm of aerodynamics, numerous vehicles incorporate moving components to enhance their performance. For instance, airliners deploy hydraulically operated flaps and ailerons during take-off and landing, while Formula 1 racing cars utilize hydraulic tubes and actuators for various components, including the Drag Reduction System (DRS). The DRS, consisting of a rear wing and adjustable flaps, plays a crucial role in overtaking manoeuvres. The DRS has two positions: the default position with the flaps down, providing high downforce, and the lifted position, which reduces drag, allowing for increased speed and aiding in overtaking. Swift deployment of the DRS during races is essential for overtaking competitors. The fluid flow over the rear wing flap becomes intricate during deployment, involving flow reversal and operational changes, leading to unsteady flow physics that significantly influence aerodynamic characteristics. Understanding the drag and downforce during DRS deployment is crucial for determining race outcomes. While experiments can yield accurate aerodynamic data, they can be expensive and challenging to conduct across varying speeds. Computational Fluid Dynamics (CFD) emerges as a cost-effective alternative for predicting drag and downforce across a range of speeds, especially with the rapid deployment of the DRS. This study employs the finite volume-based solver Ansys Fluent, incorporating dynamic mesh motion and a turbulence model to capture the complex flow phenomena associated with the moving rear wing flap. A dedicated section of the rear wing flap is considered in the present simulations, and the aerodynamics of this section closely resembles that of S1223 aerofoils. Before delving into the simulations of the rear wing-flap aerofoil, the numerical results were validated against experimental data from an NLR flap aerofoil case encompassing different flap angles at two distinct angles of attack. 
For a given angle of attack, lift and drag increase with flap angle. The simulation methodology for the rear wing-flap aerofoil case involves specific time durations before lifting the flap. During this period, drag and downforce values are determined as 330 N and 1800 N, respectively. Following the flap lift, a noteworthy reduction in drag to 55% and in downforce to 17% of these values is observed. This understanding is critical for making instantaneous decisions regarding the deployment of the Drag Reduction System (DRS) at specific speeds, thereby influencing the overall performance of the Formula 1 racing car. Hence, this work emphasizes the utilization of the dynamic mesh motion methodology to predict the aerodynamic characteristics during the deployment of the DRS in a Formula 1 racing car.
Keywords: DRS, CFD, drag, downforce, dynamic mesh motion
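Forces such as the 330 N drag and 1800 N downforce quoted above are commonly non-dimensionalised into coefficients via C = 2F/(ρ v² S) so that results can be compared across speeds. A minimal sketch; the speed (70 m/s) and reference area (0.3 m²) in the example are assumed values for illustration, not figures from the study.

```python
def aero_coefficient(force_n, speed_ms, area_m2, rho=1.225):
    """Non-dimensional force coefficient C = 2F / (rho * v**2 * S).
    rho defaults to sea-level air density in kg/m^3."""
    return 2.0 * force_n / (rho * speed_ms ** 2 * area_m2)

# Example with the abstract's forces but an ASSUMED speed and
# ASSUMED flap-section reference area:
cd = aero_coefficient(330.0, 70.0, 0.3)    # drag coefficient
cl = aero_coefficient(1800.0, 70.0, 0.3)   # downforce coefficient
```

Because both coefficients are dimensionless, a CFD sweep at one speed can be rescaled to estimate forces at another, which is part of what makes CFD cost-effective here.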
Procedia PDF Downloads 95
36699 Integrating Natural Language Processing (NLP) and Machine Learning in Lung Cancer Diagnosis
Authors: Mehrnaz Mostafavi
Abstract:
The assessment and categorization of incidental lung nodules present a considerable challenge in healthcare, often necessitating resource-intensive multiple computed tomography (CT) scans for growth confirmation. This research addresses this issue by introducing a distinct computational approach leveraging radiomics and deep-learning methods. However, understanding local services is essential before implementing these advancements. With diverse tracking methods in place, there is a need for efficient and accurate identification approaches, especially in the context of managing lung nodules alongside pre-existing cancer scenarios. This study explores the integration of text-based algorithms in medical data curation, indicating their efficacy in conjunction with machine learning and deep-learning models for identifying lung nodules. Combining medical images with text data has demonstrated superior data retrieval compared to using each modality independently. While deep learning and text analysis show potential in detecting previously missed nodules, challenges persist, such as increased false positives. The presented research introduces a Structured-Query-Language (SQL) algorithm designed for identifying pulmonary nodules in a tertiary cancer center, externally validated at another hospital. Leveraging natural language processing (NLP) and machine learning, the algorithm categorizes lung nodule reports based on sentence features, aiming to facilitate research and assess clinical pathways. The hypothesis posits that the algorithm can accurately identify lung nodule CT scans and predict concerning nodule features using machine-learning classifiers. Through a retrospective observational study spanning a decade, CT scan reports were collected, and an algorithm was developed to extract and classify data. Results underscore the complexity of lung nodule cohorts in cancer centers, emphasizing the importance of careful evaluation before assuming a metastatic origin. 
The SQL and NLP algorithms demonstrated high accuracy in identifying lung nodule sentences, indicating potential for local service evaluation and research dataset creation. Machine-learning models exhibited strong accuracy in predicting concerning changes in lung nodule scan reports. While limitations include variability in disease group attribution, the potential for correlation rather than causality in clinical findings, and the need for further external validation, the algorithm's accuracy and potential to support clinical decision-making and healthcare automation represent a significant stride in lung nodule management and research.
Keywords: lung cancer diagnosis, structured-query-language (SQL), natural language processing (NLP), machine learning, CT scans
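Classifying report sentences on surface features, as the algorithm does, can be illustrated with a toy rule-based version: flag sentences that mention nodules, extract any stated size, and mark concerning descriptors. This is a deliberately simplified stand-in for the study's SQL/NLP pipeline; the patterns and labels are hypothetical.

```python
import re

NODULE = re.compile(r"\bnodul(e|es|ar)\b", re.I)
SIZE = re.compile(r"(\d+(?:\.\d+)?)\s*(mm|cm)", re.I)
CONCERN = re.compile(r"\b(spiculat|enlarg|increas|irregular|suspicious)\w*", re.I)

def classify_sentence(sentence):
    """Label a report sentence as 'nodule-concerning', 'nodule', or 'other',
    and extract the largest stated size in millimetres (or None)."""
    if not NODULE.search(sentence):
        return "other", None
    sizes = [float(v) * (10.0 if u.lower() == "cm" else 1.0)
             for v, u in SIZE.findall(sentence)]
    label = "nodule-concerning" if CONCERN.search(sentence) else "nodule"
    return label, max(sizes) if sizes else None
```

In practice a machine-learning classifier trained on labelled sentences, as in the study, would replace the hand-written patterns, but the input/output contract is the same.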
Procedia PDF Downloads 103
36698 AI-Driven Solutions for Optimizing Master Data Management
Authors: Srinivas Vangari
Abstract:
In the era of big data, ensuring the accuracy, consistency, and reliability of critical data assets is crucial for data-driven enterprises, and Master Data Management (MDM) plays a central role in this endeavor. This paper investigates the role of Artificial Intelligence (AI) in enhancing MDM, focusing on how AI-driven solutions can automate and optimize various stages of the master data lifecycle. By integrating AI into processes such as data creation, maintenance, enrichment, and usage, organizations can achieve significant improvements in data quality and operational efficiency; the impact is assessed through both quantitative and qualitative analysis. Quantitative analysis is employed to measure the impact of AI on key metrics, including data accuracy, processing speed, and error reduction. For instance, our study demonstrates an 18% improvement in data accuracy and a 75% reduction in duplicate records across multiple systems post-AI implementation. Furthermore, AI’s predictive maintenance capabilities reduced data obsolescence by 22%, as indicated by statistical analyses of data usage patterns over a 12-month period. Complementing this, a qualitative analysis delves into the specific AI-driven strategies that enhance MDM practices, such as automating data entry and validation, which resulted in a 28% decrease in manual errors. Insights from case studies highlight how AI-driven data cleansing processes reduced inconsistencies by 25% and how AI-powered enrichment strategies improved data relevance by 24%, thus boosting decision-making accuracy. The findings demonstrate that AI significantly enhances data quality and integrity, leading to improved enterprise performance through cost reduction, increased compliance, and more accurate, real-time decision-making. 
These insights underscore the value of AI as a critical tool in modern data management strategies, offering a competitive edge to organizations that leverage its capabilities.
Keywords: artificial intelligence, master data management, data governance, data quality
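Duplicate-record reduction of the kind quantified above typically rests on fuzzy matching of normalised records. A minimal pairwise sketch using only the Python standard library; the similarity threshold and record format are illustrative, not the paper's method, and real MDM systems use blocking to avoid the O(n²) comparison shown here.

```python
from difflib import SequenceMatcher

def normalize(record):
    """Lowercase and collapse whitespace before comparison."""
    return " ".join(record.lower().split())

def find_duplicates(records, threshold=0.85):
    """Return (i, j, score) for every pair of records whose normalised
    similarity ratio meets the threshold. O(n^2) pairwise sketch."""
    norm = [normalize(r) for r in records]
    dupes = []
    for i in range(len(norm)):
        for j in range(i + 1, len(norm)):
            score = SequenceMatcher(None, norm[i], norm[j]).ratio()
            if score >= threshold:
                dupes.append((i, j, round(score, 2)))
    return dupes
```

Flagged pairs would then feed a survivorship rule (or an AI model) that decides which record becomes the golden master.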
Procedia PDF Downloads 20
36697 Validity of a Timing System in the Alpine Ski Field: A Magnet-Based Timing System Using the Magnetometer Built into an Inertial Measurement Unit
Authors: Carla Pérez-Chirinos Buxadé, Bruno Fernández-Valdés, Mónica Morral-Yepes, Sílvia Tuyà Viñas, Josep Maria Padullés Riu, Gerard Moras Feliu
Abstract:
Many possible applications of inertial measurement units (IMUs) in the sports field remain to be explored. The aim of this study was to evaluate the validity of a new application of these wearable sensors: a magnet-based timing system (M-BTS) for gate-to-gate timing in an alpine ski slalom, using the magnetometer embedded in an IMU. This was a validation study. The criterion validity of the time measured by the M-BTS was assessed using the 95% error range against the actual time obtained from photocells. The experiment was carried out with first- and second-year junior skiers performing a ski slalom on a training slope. Eight alpine skiers (17.4 ± 0.8 years, 176.4 ± 4.9 cm, 67.7 ± 2.0 kg, 128.8 ± 26.6 slalom FIS points) participated in the study. An IMU device was attached to the skier’s lower back. Skiers performed a 40-gate slalom, from which four gates were assessed. The M-BTS consisted of four bar magnets buried in the snow surface on the inner side of each assessed gate’s turning pole; the magnetometer built into the IMU detected the peak-shaped magnetic field when passing near the magnets at a certain speed. Four magnetic peaks were detected, and the time elapsed between peaks was calculated. Three inter-gate times were obtained for each system, photocells and M-BTS, and the total time was defined as the sum of the inter-gate times. The 95% error interval for the total time was 0.050 s for the ski slalom. The M-BTS is therefore valid for gate-to-gate timing in an alpine ski slalom. Inter-gate times can provide additional data for analyzing a skier’s performance, such as asymmetries between the left and right foot.
Keywords: gate crossing time, inertial measurement unit, timing system, wearable sensor
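The M-BTS timing chain, detecting the magnetic peaks and then differencing their timestamps, can be sketched as follows. This is a minimal illustration using a thresholded local-maximum detector; the abstract does not specify the study's actual signal processing, so the detector and its parameters are assumptions.

```python
def detect_gate_peaks(signal, times, threshold):
    """Timestamps of samples that exceed the threshold and are local maxima
    of the magnetometer magnitude signal."""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] >= threshold
             and signal[i] > signal[i - 1]
             and signal[i] >= signal[i + 1]]
    return [times[i] for i in peaks]

def inter_gate_times(peak_times):
    """Differences between consecutive gate-crossing timestamps."""
    return [t2 - t1 for t1, t2 in zip(peak_times, peak_times[1:])]
```

The total time is then simply `sum(inter_gate_times(peak_times))`, which is the quantity compared against the photocell reference in the validity analysis.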
Procedia PDF Downloads 184