Search results for: Verification and Validation (V&V)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1886

446 Forensic Investigation: The Impact of Biometric-Based Solution in Combatting Mobile Fraud

Authors: Mokopane Charles Marakalala

Abstract:

Research shows that mobile fraud grew exponentially in South Africa during the lockdown caused by the COVID-19 pandemic. According to the South African Banking Risk Information Centre (SABRIC), fraudulent online banking and transactions drove a sharp increase in cybercrime from the beginning of the lockdown, resulting in huge losses to the banking industry in South Africa. While the Financial Intelligence Centre Act, 38 of 2001, regulates financial transactions, it is evident that criminals are using technology to their advantage. Money-laundering ranks among the major crimes, not only in South Africa but worldwide. This paper focuses on the impact of biometric-based solutions in combatting the mobile fraud reported to SABRIC. SABRIC faced the challenge of successful mobile fraud: cybercriminals could hijack a mobile device and use it to gain access to sensitive personal data and accounts. Cybercriminals constantly scour the depths of cyberspace in search of victims to attack. Millions of people worldwide use online banking to do their regular bank-related transactions quickly and conveniently, and SABRIC has regularly highlighted incidents of mobile fraud, corruption, and maladministration; users who do not secure their online banking are vulnerable to falling prey to fraud scams such as mobile fraud. Criminals have made use of digital platforms ever since the technology was developed. In 2017, 13,438 incidents involving banking apps, internet banking, and mobile banking caused the sector gross losses of more than R250,000,000, with the parties involved forced to point fingers at one another while the fraudster makes off with the money. Participants were selected through non-probability (purposive) sampling, and data were collected through telephone calls and virtual interviews. The results indicate a relationship between remote online banking and the increase in money-laundering, as the system allows transactions to take place with limited verification processes. This paper highlights the significance of developing prevention mechanisms, capacity development, and strategies for both financial institutions and law enforcement agencies in South Africa to reduce crimes such as money-laundering. The researcher recommends that awareness among bank staff be increased through the provision of requisite and adequate training.

Keywords: biometric-based solution, investigation, cybercrime, forensic investigation, fraud, combatting

Procedia PDF Downloads 104
445 Investigating the Effectiveness of Multilingual NLP Models for Sentiment Analysis

Authors: Othmane Touri, Sanaa El Filali, El Habib Benlahmar

Abstract:

Natural Language Processing (NLP) has gained significant attention lately and has proved its ability to analyze and extract insights from unstructured text data in various languages. One of its most popular applications is sentiment analysis, which aims to identify the sentiment expressed in a piece of text, such as positive, negative, or neutral, in multiple languages. While several multilingual NLP models are available for sentiment analysis, their effectiveness in different contexts and applications still needs to be investigated. In this study, we investigate the effectiveness of different multilingual NLP models for sentiment analysis on a dataset of online product reviews in multiple languages. Several NLP models, including Google Cloud Natural Language API, Microsoft Azure Cognitive Services, Amazon Comprehend, Stanford CoreNLP, spaCy, and Hugging Face Transformers, are compared. The models are evaluated on several metrics, including accuracy, precision, recall, and F1 score, and their performance is compared across different categories of product reviews. To run the study, the dataset was preprocessed by cleaning and tokenizing the text data in multiple languages. Each model was then trained and tested using a cross-validation approach, in which the dataset was randomly divided into training and testing sets and the process repeated multiple times. A grid search was applied to optimize the hyperparameters of each model and to select the best-performing model for each category of product reviews and each language. The findings of this study provide insights into the effectiveness of different multilingual NLP models for multilingual sentiment analysis and their suitability for different languages and applications. The strengths and limitations of each model were identified, and recommendations were provided for selecting the most performant model based on the specific requirements of a project. This study contributes to the advancement of research methods in multilingual NLP and provides a practical guide for researchers and practitioners in the field.
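
As a minimal sketch of the kind of evaluation loop described above, the snippet below scores one multilingual sentiment model on labelled reviews and reports the four metrics; the model name and the tiny review set are illustrative choices, not the study's dataset or its full model comparison.

```python
# Minimal sketch: score a multilingual sentiment model on labelled reviews and
# report accuracy / precision / recall / F1. Model name and reviews are illustrative.
from transformers import pipeline
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

reviews = ["Great product, works perfectly",   # en
           "Producto terrible, llegó roto",    # es
           "Très bon rapport qualité-prix"]    # fr
gold = ["positive", "negative", "positive"]

clf = pipeline("sentiment-analysis",
               model="nlptown/bert-base-multilingual-uncased-sentiment")

# This model outputs star ratings ("1 star".."5 stars"); map them to polarity.
def to_polarity(label):
    stars = int(label.split()[0])
    return "negative" if stars <= 2 else ("neutral" if stars == 3 else "positive")

pred = [to_polarity(r["label"]) for r in clf(reviews)]

acc = accuracy_score(gold, pred)
p, r, f1, _ = precision_recall_fscore_support(gold, pred, average="macro",
                                              zero_division=0)
print(f"accuracy={acc:.2f} precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```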

Keywords: NLP, multilingual, sentiment analysis, texts

Procedia PDF Downloads 108
444 Computation and Validation of the Stress Distribution around a Circular Hole in a Slab Undergoing Plastic Deformation

Authors: Sherif D. El Wakil, John Rice

Abstract:

The aim of the current work was to employ the finite element method to model a slab, with a small hole across its width, undergoing plastic plane-strain deformation. The computational model had, however, to be validated by comparing its results with those obtained experimentally. Since the two were in good agreement, the finite element method can be considered a reliable tool that helps to gain a better understanding of the mechanism of ductile failure in structural members having stress raisers. The finite element software used was ANSYS, and the PLANE183 element was utilized; it is a higher-order 2-D, 8-node or 6-node element with quadratic displacement behavior. A bilinear stress-strain relationship was used to define the material properties, with constants similar to those of the material used in the experimental study. The model was run for several tensile loads in order to observe the progression of the plastic deformation region, and the stress concentration factor was determined in each case. The experimental study employed the visioplasticity technique, where a circular mesh (each circle 0.5 mm in diameter, with 0.05 mm line thickness) was initially printed on the side of an aluminum slab having a small hole across its width. Tensile loading was then applied to produce a small increment of plastic deformation. Circles in the plastic region became ellipses, where the directions of the principal strains and stresses coincided with the major and minor axes of the ellipses. Next, we were able to determine the directions of the maximum and minimum shear stresses at the center of each ellipse, and the slip-line field was then constructed. We were then able to determine the stress at any point in the plastic deformation zone, and hence the stress concentration factor. The experimental results were found to be in good agreement with the analytical ones.
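
For reference, the elastic (Kirsch) solution for an infinite plate in uniaxial tension gives the textbook baseline for the stress concentration factor at a circular hole; the plastic analysis above departs from this value as yielding spreads. This is a standard elastic benchmark, not the paper's plastic result:

```latex
% Elastic baseline (Kirsch solution), assuming an infinite plate under remote
% uniaxial tension \sigma_\infty with a circular hole of radius a:
K_t = \frac{\sigma_{\max}}{\sigma_{\mathrm{nom}}}, \qquad
\sigma_{\theta}\big|_{r=a,\;\theta=\pm 90^{\circ}} = 3\,\sigma_{\infty}
\;\;\Rightarrow\;\; K_t = 3
```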

Keywords: finite element method to model a slab, slab undergoing plastic deformation, stress distribution around a circular hole, visioplasticity

Procedia PDF Downloads 320
443 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives

Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes

Abstract:

This article describes a regulatory impact analysis (RIA) of the efficiency of contracting for usage of the Brazilian transmission system. This contracting is made by users connected to the main transmission network and is used to guide the investments necessary to supply the electrical energy demand. Inefficient contracting of this energy amount therefore distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. To promote this efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) homologated Normative Resolution (NR) No. 666 of July 23rd, 2015, which consolidated the procedures for contracting transmission system usage and for verifying contracting efficiency. Aiming at more efficient and rational contracting of the transmission system, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first, IIE, is verified when the contracted demand exceeds the established regulatory limit; it is applied to consumer units, generators, and distribution companies. The second, IIOC, is verified when distributors over-contract their demand. Thus, the inefficiency installments IIE and IIOC are intended to prevent agents from contracting either less or more energy than necessary. Since an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results of applying the above-mentioned resolution to the Brazilian transmission sector were analyzed through indicators created for this RIA to evaluate the efficiency of transmission system usage contracting, using real data from before and after the homologation of the resolution in 2015. The indicators used were the efficient contracting indicator (ECI), the excess of demand indicator (EDI), and the over-contracting of demand indicator (ODI). The ECI analysis showed a decrease in contracting efficiency, a trend that was underway even before the 2015 resolution. On the other hand, the EDI showed a considerable decrease in excess demand for distributors and a small reduction for generators; moreover, the ODI decreased notably, which optimizes the usage of the transmission installations. Hence, from the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, indicating to agents that their contracted values are not adequate to maintain service provision for their users. The IIOC is also relevant, in that it shows distributors that their contracted values are overestimated.
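
As a toy illustration only (ANEEL's actual NR 666 formulas are not reproduced here), the two incentive triggers can be thought of as threshold checks on contracted versus verified demand; the 5% tolerance and the demand figures below are made-up placeholders.

```python
# Toy sketch of the two incentive triggers described above; NOT ANEEL's actual
# NR 666 formulas. The 5% tolerance and demand values are placeholders.
def inefficiency_flags(contracted_mw, verified_mw, tolerance=0.05):
    iie = verified_mw > contracted_mw * (1 + tolerance)    # usage exceeds contract
    iioc = contracted_mw > verified_mw * (1 + tolerance)   # contract exceeds usage
    return {"IIE (excess)": iie, "IIOC (over-contracting)": iioc}

print(inefficiency_flags(contracted_mw=100.0, verified_mw=112.0))
# -> {'IIE (excess)': True, 'IIOC (over-contracting)': False}
```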

Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system

Procedia PDF Downloads 121
442 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery

Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas

Abstract:

The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model captures the dynamics of the battery, which supports energy management through accurate model-based state estimation algorithms. So far, the most suitable and fruitful model is the equivalent circuit model (ECM). However, in real-time applications the model parameters are time-varying, changing with current, temperature, state of charge (SOC), and the aging of the battery, and this greatly affects the performance of the model. Therefore, to improve the performance of the equivalent circuit model, the parameter estimation has been carried out in the frequency domain. A battery is a very complex system involving various chemical reactions and heat generation, so it is very difficult to select the optimal model structure. Increasing the model order improves the model accuracy, but a higher-order model tends toward over-parameterization and unfavorable prediction capability, while the model complexity increases enormously. In the time domain, it becomes difficult to solve the higher-order differential equations as the model order increases. This problem can be resolved by frequency-domain analysis, where the overall computational problems due to ill-conditioning are reduced. In the frequency domain, several dominant frequencies can be found in the input as well as the output data. Selective frequency-domain estimation was carried out, first by estimating the frequencies of the input and output by subspace decomposition, then by choosing specific bands from the most dominant to the least, while carrying out least-squares, recursive least-squares, and Kalman-filter-based parameter estimation. In this paper, a second-order battery model consisting of three resistors, two capacitors, and one SOC-controlled voltage source has been chosen. For model identification and validation, hybrid pulse power characterization (HPPC) tests were carried out on a 2.6 Ah LiFePO₄ battery.
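
A hedged sketch of the frequency-domain least-squares step for the second-order ECM named above (a series resistance plus two RC pairs): the synthetic impedance spectrum and the initial guesses are illustrative, not the paper's HPPC data.

```python
# Fit R0 + two RC pairs to a (synthetic) impedance spectrum by least squares.
import numpy as np
from scipy.optimize import least_squares

def ecm_impedance(p, w):
    R0, R1, C1, R2, C2 = p
    return (R0 + R1 / (1 + 1j * w * R1 * C1)
               + R2 / (1 + 1j * w * R2 * C2))

w = 2 * np.pi * np.logspace(-2, 2, 40)            # angular frequencies, rad/s
true = [0.05, 0.02, 500.0, 0.03, 5000.0]          # illustrative "true" values
z_meas = ecm_impedance(true, w) * (1 + 0.01 * np.random.randn(w.size))

def residuals(p):
    z = ecm_impedance(p, w) - z_meas
    return np.concatenate([z.real, z.imag])       # stack real and imaginary parts

fit = least_squares(residuals, x0=[0.1, 0.01, 100.0, 0.01, 1000.0],
                    bounds=(0, np.inf))
print(dict(zip(["R0", "R1", "C1", "R2", "C2"], fit.x.round(4))))
```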

Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition

Procedia PDF Downloads 150
441 Speech Emotion Recognition: A DNN and LSTM Comparison in Single and Multiple Feature Application

Authors: Thiago Spilborghs Bueno Meyer, Plinio Thomaz Aquino Junior

Abstract:

Speech, which privileges the functional and interactive nature of text, makes it possible to ascertain spatiotemporal circumstances, the conditions of production and reception of discourse, and explicit purposes such as informing, explaining, and convincing. These conditions allow human-robot interaction to approach the naturalness and sensitivity of interaction between humans. However, it is not enough to understand what is said; it is necessary to recognize emotions for the desired interaction. This work verifies the validity of using neural networks for feature selection and emotion recognition. For this purpose, we propose the use of neural networks and the comparison of models, namely recurrent neural networks and deep neural networks, to classify emotions from speech signals and to verify the quality of recognition. The goal is to enable the deployment of robots in a domestic environment, such as the HERA robot from the RoboFEI@Home team, which focuses on autonomous service robots for the home. Tests were performed using only the Mel-Frequency Cepstral Coefficients (MFCC), as well as tests with several features: Delta-MFCC, spectral contrast, and the Mel spectrogram. For the training, validation, and testing of the neural networks, the eNTERFACE'05 database was used, which has 42 speakers of 14 different nationalities speaking English. The data in the chosen database are videos, which were converted into audio for use in the neural networks. The deep neural network achieved a classification accuracy of 51.969%, while the recurrent neural network achieved 44.09%. The results are more accurate when only the Mel-Frequency Cepstral Coefficients are used for classification with the deep neural network; in only one case does the recurrent neural network achieve greater accuracy, namely when the various features are used with a batch size of 73 and 100 training epochs.
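
A minimal sketch of the two model families compared above, assuming MFCC inputs extracted with librosa; the layer sizes and the six-class emotion set are illustrative choices, not the authors' exact architectures.

```python
# DNN vs LSTM skeletons on MFCC features; sizes and classes are illustrative.
import librosa
from tensorflow.keras import Sequential, layers

y, sr = librosa.load(librosa.ex("trumpet"))              # stand-in for a speech clip
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T     # shape (frames, 13)

n_classes = 6  # e.g. anger, disgust, fear, happiness, sadness, surprise

dnn = Sequential([layers.Input(shape=(13,)),             # per-frame classifier
                  layers.Dense(128, activation="relu"),
                  layers.Dense(64, activation="relu"),
                  layers.Dense(n_classes, activation="softmax")])

lstm = Sequential([layers.Input(shape=(None, 13)),       # variable-length sequences
                   layers.LSTM(64),
                   layers.Dense(n_classes, activation="softmax")])

for m in (dnn, lstm):
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Training would then call model.fit(X, y, batch_size=73, epochs=100),
# matching the batch size and epoch count quoted in the abstract.
```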

Keywords: emotion recognition, speech, deep learning, human-robot interaction, neural networks

Procedia PDF Downloads 171
440 The Effects of Virtual Reality Technology in Maternity Delivery: A Systematic Review and Meta-Analysis

Authors: Nuo Xu, Sijing Chen

Abstract:

Background: Childbirth is considered a critical event throughout our lives, positively or negatively impacting the mother's physiology and psychology, and even the whole family. Adverse birth experiences, such as labor pain, anxiety, and fear, can negatively impact the mother. Studies have shown that the immersive nature of virtual reality (VR) can distract attention from pain and increase focus on pain-relief interventions. However, the existing studies applying VR to maternal delivery are still in their infancy and show disparate results, and their small sample sizes are not representative, so this review analyzed the effects of VR in labor, such as on maternal pain and anxiety, with a view to providing a basis for future applications. Search strategy: We searched PubMed, Embase, Web of Science, the Cochrane Library, CINAHL, the China National Knowledge Infrastructure, and the Wan-Fang database from inception to November 17, 2021. Selection criteria: Randomized controlled trials (RCTs) in which pregnant women aged 18-35 years, with gestation >34 weeks and without complications, received a VR intervention were included in this review. Data collection and analysis: Two researchers completed the study selection, data extraction, and assessment of study quality. For continuous data we used the MD or SMD, and the RR (risk ratio) for dichotomous data. A random-effects model and 95% confidence intervals (95% CI) were used. Main results: 12 studies were included. VR relieved pain during labor (MD=-1.81, 95% CI (-2.04, -1.57), P<0.00001) and during the active phase (SMD=-0.41, 95% CI (-0.68, -0.14), P=0.003), reduced anxiety (SMD=-1.39, 95% CI (-1.99, -0.78), P<0.00001), and improved satisfaction (RR=1.32, 95% CI (1.10, 1.59), P=0.003), but the effects on the duration of the first (SMD=-1.12, 95% CI (-2.38, 0.13), P=0.08) and second (SMD=-0.22, 95% CI (-0.67, 0.24), P=0.35) stages of labor were not statistically significant. Conclusions: Compared with conventional care, VR technology can relieve labor pain and anxiety and improve satisfaction. However, extensive experimental validation is still needed.
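
A minimal sketch of the random-effects pooling used above, shown with the DerSimonian-Laird estimator on made-up per-study effects and standard errors; the review's own software and data are not reproduced here.

```python
# DerSimonian-Laird random-effects meta-analysis on illustrative study data.
import numpy as np

def random_effects(effects, se):
    effects, se = np.asarray(effects), np.asarray(se)
    w = 1 / se**2                                   # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed)**2)            # Cochran's Q
    k = len(effects)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (se**2 + tau2)                       # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se_p = np.sqrt(1 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se_p, pooled + 1.96 * se_p)

md, ci = random_effects([-1.9, -1.6, -2.0], [0.25, 0.30, 0.40])
print(f"pooled MD = {md:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```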

Keywords: virtual reality, delivery, labor pain, anxiety, meta-analysis, systematic review

Procedia PDF Downloads 92
439 [Keynote Talk]: Production Flow Coordination on Supply Chains: Brazilian Case Studies

Authors: Maico R. Severino, Laura G. Caixeta, Nadine M. Costa, Raísa L. T. Napoleão, Éverton F. V. Valle, Diego D. Calixto, Danielle Oliveira

Abstract:

One of the biggest barriers that companies face nowadays is the coordination of the production flow in their Supply Chains (SC). In this study, coordination is understood as a mechanism for incorporating the entire production channel, with everyone involved focused on achieving the same goals. Sometimes this coordination is attempted through the use of logistics practices or production planning and control methods. No papers were found in the literature that presented the combined use of logistics practices and production planning and control methods. The main objective of this paper is to propose solutions for six case studies combining logistics practices and Ordering Systems (OS). The methodology used in this study was a conceptual decision-making model containing six phases: a) analysis of the types and characteristics of relationships in the SC; b) choice of the OS; c) choice of the logistics practices; d) development of alternative proposals for combined use; e) analysis of the consistency of the chosen alternative; f) qualitative and quantitative assessment of the impact on the coordination of the production flow and verification of the applicability of the proposal in the real case. This study was conducted on six Brazilian SCs from different sectors: footwear, food and beverages, garments, sugarcane, minerals, and metal mechanics. The results showed that the coordination of the production flow improved through the following proposals: a) for the footwear industry, the use of Period Batch Control (PBC), Quick Response (QR), and Enterprise Resource Planning (ERP); b) for the food and beverage sector, firstly the use of Electronic Data Interchange (EDI), ERP, Continuous Replenishment (CR), and Drum-Buffer-Rope (DBR) ordering (for situations in which the plants of the two companies are distant), and secondly EDI, ERP, Milk-Run, and a Continuous Review System (for situations in which the plants of the two companies are close); c) for the garment industry, the use of Collaborative Planning, Forecasting, and Replenishment (CPFR) and a Constant Work-In-Process (CONWIP) System; d) for the sugarcane sector, the use of EDI, ERP, and a CONWIP System; e) for the mineral processing industry, the use of Vendor Managed Inventory (VMI), EDI, and a Max-Min Control System; f) for the metal mechanics sector, the use of a CONWIP System and Continuous Replenishment (CR). It should be emphasized that the proposals are recommended exclusively for the client-supplier relationships studied and therefore cannot be generalized to other cases. What can be generalized, however, is the methodology used to choose the best practices for each case. Based on the study, it can be concluded that the combined use of OS and logistics practices enables better coordination of the production flow in SCs.

Keywords: supply chain management, production flow coordination, logistics practices, ordering systems

Procedia PDF Downloads 209
438 Synthesis and Characterization of Anti-Psychotic Drugs Based DNA Aptamers

Authors: Shringika Soni, Utkarsh Jain, Nidhi Chauhan

Abstract:

Aptamers are artificial oligonucleotides, typically ~80-100 bp long, that have demonstrated applications not only in therapeutics but also, tremendously, in diagnostics and sensing, where they are used to detect different biomarkers and drugs. Synthesizing aptamers against protein or genomic templates is comparatively feasible in the laboratory, but aptamers against drugs or other chemical targets require major specification and proper optimization and validation. All selection, amplification, and characterization steps of the end product must be optimized, which is extremely time-consuming. Therefore, we performed asymmetric PCR (polymerase chain reaction) for random oligonucleotide pool synthesis and further used the pool in the Systematic Evolution of Ligands by EXponential enrichment (SELEX) to synthesize aptamers against anti-psychotic drugs. Anti-psychotic drugs are major tranquilizers used to control psychosis and preserve proper cognitive function. Though their medical use is limited, their misuse may lead to severe medical conditions such as addiction and can promote crime, with social and economic impact. In this work, we approached the in-vitro SELEX method for the synthesis of ssDNA aptamers against anti-psychotic drugs (in this case, the 'target'). The study was performed in three stages. The first stage included the synthesis of a random oligonucleotide pool via asymmetric PCR, whose end product was analyzed by electrophoresis and purified for the subsequent stages. The purified oligonucleotide pool was incubated in SELEX buffer, and partitioning was performed in the next stage to obtain target-specific aptamers. The isolated oligonucleotides were characterized and quantified after each round of partitioning, and significant results were obtained. After the repetitive partitioning and amplification steps of the target-specific oligonucleotides, the final stage included sequencing of the end product. We can confirm the specific sequence for anti-psychotic drugs, which will be further used in diagnostic applications in clinical and forensic settings.

Keywords: anti-psychotic drugs, aptamer, biosensor, ssDNA, SELEX

Procedia PDF Downloads 135
437 Competitive DNA Calibrators as Quality Reference Standards (QRS™) for Germline and Somatic Copy Number Variations/Variant Allelic Frequencies Analyses

Authors: Eirini Konstanta, Cedric Gouedard, Aggeliki Delimitsou, Stefania Patera, Samuel Murray

Abstract:

Introduction: Quality reference DNA standards (QRS) for molecular testing by next-generation sequencing (NGS) are essential for accurate quantitation of copy number variations (CNV) in germline analyses and of variant allelic frequencies (VAF) in somatic analyses. Objectives: Presently, several molecular analytics for oncology patients rely upon quantitative metrics. Test validation and standardisation are also reliant upon the availability of surrogate control materials that allow the test LOD (limit of detection), sensitivity, and specificity to be understood. We have developed a dual calibration platform that allows QRS pairs to be included in analysed DNA samples, enabling accurate quantitation of CNV and VAF metrics within and between patient samples. Methods: QRS™ blocks of up to 500 nt were designed for common NGS panel targets, incorporating ≥2 identification tags (IDTDNA.com). These were analysed upon spiking into gDNA, somatic DNA, and ctDNA using a proprietary CalSuite™ platform adaptable to common LIMS. Results: We demonstrate QRS™ calibration reproducibility of ±2.5% when spiked at 5-25% in gDNA and ctDNA. Furthermore, we demonstrate CNV and VAF quantitation within and between samples (gDNA and ctDNA) with the same reproducibility (±2.5%) in clinical samples of lung cancer and HBOC (EGFR and BRCA1, respectively). CNV analysis was performed with similar accuracy using a single pair of QRS calibrators as when using multiple single-target sequencing controls. Conclusion: Dual paired QRS™ calibrators allow accurate and reproducible quantitative analyses of CNV, VAF, intrinsic sample allele measurement, and inter- and intra-sample measurement, not only simplifying NGS analytics but also allowing clinically relevant biomarker VAFs to be monitored across patient ctDNA samples with improved accuracy.
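
A schematic sketch (not the proprietary CalSuite™ platform) of the dual-calibrator idea: a QRS pair spiked at a known fraction yields a correction factor that can be applied to the measured VAF of a clinical variant. All read counts and fractions below are illustrative.

```python
# Illustrative VAF calibration using a spiked-in reference at a known fraction.
def vaf(alt_reads, total_reads):
    return alt_reads / total_reads

expected_spike = 0.10                                   # QRS pair spiked at 10%
observed_spike = vaf(alt_reads=880, total_reads=10000)  # 8.8% recovered
correction = expected_spike / observed_spike            # per-sample scale factor

raw_vaf = vaf(alt_reads=450, total_reads=10000)         # clinical variant
print(f"raw VAF = {raw_vaf:.3f}, calibrated VAF = {raw_vaf * correction:.3f}")
```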

Keywords: calibrator, CNV, gene copy number, VAF

Procedia PDF Downloads 153
436 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning

Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan

Abstract:

The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be released to any considerable degree into the near/far-field. Accurate mechanistic models are therefore necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given the substantial variations across international waste-streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity in allowance of the alteration layer). The trained models were then applied to the large-scale industrial experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance, with predictions primarily within the experimental uncertainty of the test data. Furthermore, machine learning can predict the behavior of glass dissolution model parameters, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.
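
A minimal sketch of the ensemble regression described above, assuming oxide-composition and temperature features; the synthetic dataset and column meanings are placeholders for the industrial data, not the study's actual inputs.

```python
# Ensemble (random forest) regression of log-viscosity on composition + temperature.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([rng.uniform(45, 60, n),       # SiO2 wt% (illustrative)
                     rng.uniform(10, 20, n),       # B2O3 wt%
                     rng.uniform(5, 15, n),        # Na2O wt%
                     rng.uniform(1100, 1500, n)])  # temperature, K
# Synthetic target standing in for measured log10(viscosity)
y = 20 - 0.01 * X[:, 3] + 0.05 * X[:, 0] + rng.normal(0, 0.2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("held-out R2:", round(r2_score(y_te, model.predict(X_te)), 3))
```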

Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass

Procedia PDF Downloads 117
435 Validation of Nutritional Assessment Scores in Prediction of Mortality and Duration of Admission in Elderly, Hospitalized Patients: A Cross-Sectional Study

Authors: Christos Lampropoulos, Maria Konsta, Vicky Dradaki, Irini Dri, Konstantina Panouria, Tamta Sirbilatze, Ifigenia Apostolou, Vaggelis Lambas, Christina Kordali, Georgios Mavras

Abstract:

Objectives: Malnutrition in hospitalized patients is related to increased morbidity and mortality. The purpose of our study was to compare various nutritional scores in order to identify the most suitable one for assessing the nutritional status of elderly, hospitalized patients, and to correlate them with mortality and extension of admission duration due to patients' critical condition. Methods: The sample population included 150 patients (78 men, 72 women, mean age 80±8.2). Nutritional status was assessed by the Mini Nutritional Assessment (MNA, full and short-form), the Malnutrition Universal Screening Tool (MUST), and the short Nutritional Appetite Questionnaire (sNAQ). Sensitivity, specificity, positive and negative predictive values, and ROC curves were assessed after adjustment for the cause of the current admission, a known prognostic factor according to previously applied multivariate models. Primary endpoints were mortality (from admission until 6 months afterwards) and duration of hospitalization compared to national guidelines for closed consolidated medical expenses. Results: Concerning mortality, MNA (short-form and full) and sNAQ had similar, low sensitivity (25.8%, 25.8%, and 35.5%, respectively), while MUST had higher sensitivity (48.4%). In contrast, all the questionnaires had high specificity (94%-97.5%). Short-form MNA and sNAQ had the best positive predictive values (72.7% and 78.6%, respectively), whereas all the questionnaires had similar negative predictive values (83.2%-87.5%). MUST had the largest area under the ROC curve (0.83), in contrast to the other questionnaires (0.73-0.77). With regard to extension of admission duration, all four scores had relatively low sensitivity (48.7%-56.7%), specificity (68.4%-77.6%), positive predictive value (63.1%-69.6%), negative predictive value (61%-63%), and area under the ROC curve (0.67-0.69). Conclusion: The MUST questionnaire is more advantageous in predicting mortality due to its higher sensitivity and ROC curve area. None of the nutritional scores is suitable for predicting extended hospitalization.
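
The screening-test metrics reported above follow directly from a 2x2 confusion matrix; the sketch below computes them with illustrative counts chosen to be consistent with the quoted MUST sensitivity, not the study's actual tabulated data.

```python
# Sensitivity, specificity, PPV, NPV from a 2x2 confusion matrix.
def screening_metrics(tp, fn, fp, tn):
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "PPV": tp / (tp + fp),
            "NPV": tn / (tn + fn)}

# e.g. a screening tool vs 6-month mortality in a hypothetical 150-patient sample
print(screening_metrics(tp=15, fn=16, fp=4, tn=115))
# sensitivity 15/31 = 48.4%, matching the MUST figure quoted above
```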

Keywords: duration of admission, malnutrition, nutritional assessment scores, prognostic factors for mortality

Procedia PDF Downloads 346
434 Bi-Directional Impulse Turbine for Thermo-Acoustic Generator

Authors: A. I. Dovgjallo, A. B. Tsapkova, A. A. Shimanov

Abstract:

The paper is devoted to one type of engine with external heating: the thermoacoustic engine. In a thermoacoustic engine, heat energy is converted to acoustic energy. Further, the acoustic energy of the oscillating gas flow must be converted to mechanical energy, and this energy in turn must be converted to electric energy. The most widely used ways of transforming acoustic energy to electric energy are the application of a linear generator or of a conventional generator with a crank mechanism. In both cases, a piston is used. The main disadvantages of using a piston are friction losses, lubrication problems, and working fluid pollution, which decrease engine power and ecological efficiency. The use of a bidirectional impulse turbine as an energy converter is suggested instead. The distinctive feature of this kind of turbine is that the shock wave of the oscillating gas flow passing through the turbine is reflected and passes through the turbine again in the opposite direction, while the direction of turbine rotation does not change in the process. Different types of bidirectional impulse turbines for thermoacoustic engines are analyzed. The Wells turbine is the simplest and least efficient of them. A radial impulse turbine has a more complicated design and is more efficient than the Wells turbine. The most appropriate type, an axial impulse turbine, was chosen: it has a simpler design than that of a radial turbine and similar efficiency. The peculiarities of the method of calculating an impulse turbine are discussed, including the changes in gas pressure and velocity as functions of time during the generation of shock waves in the oscillating gas flow of a thermoacoustic system. In a thermoacoustic system, the pressure constantly changes according to a certain law due to the generation of acoustic waves; the peak pressure values give the amplitude, which determines the acoustic power. Gas flowing in a thermoacoustic system periodically changes direction, and its mean velocity is zero, but its peak values can be used to drive bi-directional turbine rotation. In contrast to a feed turbine, the described turbine operates on unsteady oscillating flows with direction changes, which significantly influences the algorithm of its calculation. The calculated power output is 150 W at a rotational speed of 12000 r/min and a pressure amplitude of 1.7 kPa. Then, 3-D modeling and numerical study of the impulse turbine were carried out; as a result of the numerical modeling, the main parameters of the working fluid in the turbine were obtained. On the basis of the theoretical and numerical data, a model of the impulse turbine was made on a 3D printer, and an experimental unit was designed to verify the numerical modeling results, with an acoustic speaker used as the acoustic wave generator. Analysis of the acquired data shows that the use of the bi-directional impulse turbine is advisable: as a converter, its characteristics are comparable with those of linear electric generators, but its life cycle will be longer and the engine itself smaller, owing to the rotary motion of the turbine.
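
As a textbook-level cross-check (not the authors' full unsteady calculation), the time-averaged power carried by a plane travelling acoustic wave of pressure amplitude p_a through a duct of cross-sectional area A is:

```latex
% Plane travelling wave; \rho = gas density, c = speed of sound,
% p_a = pressure amplitude (1.7 kPa in the paper), A = duct cross-section.
W_{\mathrm{ac}} = I \, A = \frac{p_a^{2}}{2 \rho c} \, A
```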

Keywords: acoustic power, bi-directional pulse turbine, linear alternator, thermoacoustic generator

Procedia PDF Downloads 378
433 Chemometric Regression Analysis of Radical Scavenging Ability of Kombucha Fermented Kefir-Like Products

Authors: Strahinja Kovacevic, Milica Karadzic Banjac, Jasmina Vitas, Stefan Vukmanovic, Radomir Malbasa, Lidija Jevric, Sanja Podunavac-Kuzmanovic

Abstract:

The present study deals with a chemometric regression analysis of the quality parameters and radical scavenging ability of kombucha-fermented kefir-like products obtained with winter savory (WS), peppermint (P), stinging nettle (SN), and wild thyme (WT) tea kombucha inoculums. Each analyzed sample was described by milk fat content (MF, %), total unsaturated fatty acid content (TUFA, %), monounsaturated fatty acid content (MUFA, %), polyunsaturated fatty acid content (PUFA, %), free radical scavenging ability (RSA-DPPH, % and RSA-OH, %), and pH values measured every hour from the start to the end of fermentation. The aim of the regression analysis was to establish chemometric models that can predict the radical scavenging ability (RSA-DPPH, % and RSA-OH, %) of the samples by correlating it with the MF, TUFA, MUFA, and PUFA contents and the pH value at the beginning, in the middle, and at the end of the fermentation process, which lasted between 11 and 17 hours, until a pH value of 4.5 was reached. The analysis was carried out by applying univariate linear (ULR) and multiple linear regression (MLR) methods to the raw data and to data standardized by min-max normalization. The obtained models were characterized by very limited predictive power (poor cross-validation parameters) and weak statistical characteristics. Based on the analysis, it can be concluded that the radical scavenging ability cannot be precisely predicted from the MF, TUFA, MUFA, and PUFA contents and pH values alone; other quality parameters should be considered and included in further modeling. This study is based upon work from the project "Kombucha beverages production using alternative substrates from the territory of the Autonomous Province of Vojvodina", 142-451-2400/2019-03, supported by the Provincial Secretariat for Higher Education and Scientific Research of AP Vojvodina.
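
A minimal sketch of the MLR workflow described above (min-max normalization, linear fit, and a cross-validated Q² to expose weak predictive power); the random data stand in for the kefir-like product measurements.

```python
# Min-max scaled multiple linear regression with leave-one-out cross-validation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict, LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 5))      # stand-ins for MF, TUFA, MUFA, PUFA, pH
y = rng.uniform(20, 80, size=20)   # stand-in for RSA-DPPH (%)

model = make_pipeline(MinMaxScaler(), LinearRegression())
r2 = r2_score(y, model.fit(X, y).predict(X))
q2 = r2_score(y, cross_val_predict(model, X, y, cv=LeaveOneOut()))
print(f"R2 = {r2:.2f}, Q2(LOO) = {q2:.2f}")   # a weak Q2 flags poor predictivity
```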

Keywords: chemometrics, regression analysis, kombucha, quality control

Procedia PDF Downloads 143
432 Comprehensive Analysis and Optimization of Alkaline Water Electrolysis for Green Hydrogen Production: Experimental Validation, Simulation Study, and Cost Analysis

Authors: Umair Ahmed, Muhammad Bin Irfan

Abstract:

This study focuses on the design and optimization of an alkaline water electrolyser for the production of green hydrogen. The aim is to enhance the durability and efficiency of this technology while simultaneously reducing the cost associated with producing green hydrogen. The experimental results obtained from the alkaline water electrolyser are compared with simulated results from Aspen Plus software, allowing a comprehensive analysis and evaluation. To achieve these goals, several design and operational parameters are investigated. The electrode material, electrolyte concentration, and operating conditions are carefully selected to maximize the efficiency and durability of the electrolyser. Additionally, cost-effective materials and manufacturing techniques are explored to decrease the overall production cost of green hydrogen. The experimental setup includes a carefully designed alkaline water electrolyser in which various performance parameters (such as hydrogen production rate, current density, and voltage) are measured. These experimental results are then compared with simulated data obtained using Aspen Plus. The simulation model is developed from fundamental principles and validated against the experimental data. The comparison between experimental and simulated results provides valuable insight into the performance of the alkaline water electrolyser and helps to identify areas where improvements can be made, both in design and in operation, to enhance the durability and efficiency of the system. Furthermore, the simulation results allow a cost analysis, providing an estimate of the overall production cost of green hydrogen. This study aims to develop a comprehensive understanding of alkaline water electrolysis technology. The findings of this research can contribute to the development of more efficient and durable electrolyser technology while reducing the cost associated with it. Ultimately, these advancements can pave the way for a more sustainable and economically viable hydrogen economy.
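
As a back-of-envelope complement to the measured hydrogen production rate, Faraday's law links cell current to hydrogen output; the current, cell count, and faradaic efficiency below are illustrative values, not the study's operating point.

```python
# Faraday's-law estimate of hydrogen production from electrolyser current.
F = 96485.0        # C/mol, Faraday constant
M_H2 = 2.016e-3    # kg/mol, molar mass of H2

def h2_rate_kg_per_h(current_a, n_cells=1, faradaic_eff=0.95):
    # 2 electrons per H2 molecule: n_dot = I / (2F) per cell
    mol_per_s = faradaic_eff * n_cells * current_a / (2 * F)
    return mol_per_s * M_H2 * 3600

print(f"{h2_rate_kg_per_h(current_a=100.0):.4f} kg H2 / h")  # ~0.0036 kg/h at 100 A
```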

Keywords: sustainable development, green energy, green hydrogen, electrolysis technology

Procedia PDF Downloads 91
431 Flow Reproduction Using Vortex Particle Methods for Wake Buffeting Analysis of Bluff Structures

Authors: Samir Chawdhury, Guido Morgenthal

Abstract:

The paper presents a novel extension of Vortex Particle Methods (VPM) in which the study aims to reproduce a template simulation of the complex flow field generated by impulsively started flow past an upstream bluff body at a certain Reynolds number Re. Vibration of a structural system under upstream wake flow is often considered its governing design criterion; therefore, particular attention is given in this study to the reproduction of the wake flow simulation. The basic methodology for the flow reproduction requires downstream velocity sampling from the template flow simulation: at particular distances from the upstream section, the instantaneous velocity components are sampled using a series of square sampling cells arranged vertically, each cell containing four velocity sampling points at its corners. Since the grid-free Lagrangian VPM algorithm discretises vorticity on particle elements, the method requires transformation of the velocity components into vortex circulation and, finally, simulation of the reproduced template flow field by seeding these vortex circulations, or particles, into a free stream flow. It is noteworthy that the vortex particles have to be released into the free stream at exactly the same rate as the velocity sampling. Studies were performed, specifically, on different sampling rates and velocity sampling positions to determine their effects on the quality of the flow reproduction. The quality assessments are mainly done, using a downstream flow monitoring profile, by comparing the characteristic wind flow profiles with several statistical turbulence measures. Additionally, the comparisons are performed using velocity time histories, snapshots of the flow fields, and the vibration of a downstream bluff section, by performing wake buffeting analyses of the section under the original and reproduced wake flows. A convergence study is performed to validate the method. The study also describes possibilities for achieving flow reproduction with less computational effort.
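
A hedged sketch of the sampling-to-circulation step: assuming the circulation seeded on a particle is the line integral of velocity around its square sampling cell, evaluated with the trapezoidal rule over the four corner velocities. The ordering convention and the numbers are illustrative, not the paper's implementation.

```python
# Circulation of one square sampling cell from its four corner velocities.
import numpy as np

def cell_circulation(u, v, h):
    """u, v: the 4 corner velocity components, ordered counter-clockwise
    (SW, SE, NE, NW); h: cell edge length. Returns Gamma = \oint u . dl."""
    u, v = np.asarray(u), np.asarray(v)
    bottom = 0.5 * (u[0] + u[1]) * h      # SW -> SE, +x direction
    right  = 0.5 * (v[1] + v[2]) * h      # SE -> NE, +y direction
    top    = -0.5 * (u[2] + u[3]) * h     # NE -> NW, -x direction
    left   = -0.5 * (v[3] + v[0]) * h     # NW -> SW, -y direction
    return bottom + right + top + left

print(cell_circulation(u=[1.0, 1.1, 0.9, 0.8], v=[0.0, 0.2, 0.1, -0.1], h=0.05))
```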

Keywords: vortex particle method, wake flow, flow reproduction, wake buffeting analysis

Procedia PDF Downloads 312
430 Remote Sensing of Aerated Flows at Large Dams: Proof of Concept

Authors: Ahmed El Naggar, Homyan Saleh

Abstract:

Dams are crucial for flood control, water supply, and the generation of hydroelectric power. Every dam has a water conveyance system, such as a spillway, providing the safe discharge of catastrophic floods when necessary. Spillway design has historically been investigated through laboratory research owing to the absence of suitable full-scale flow monitoring equipment and to safety concerns. Prototype measurements of aerated flows are urgently needed to quantify expected scale effects and to provide the missing validation data for design guidelines and numerical simulations. In this work, an image-based investigation of free-surface flows on a stepped spillway was undertaken at laboratory scale (fixed camera installation) and at prototype scale (drone footage). The drone videos were generated using data from citizen science. The analyses permitted the measurement of the free-surface aeration inception point, air-water surface velocities and their fluctuations, and the residual energy at the downstream end of the chute, all from a remote site. The prototype observations offered full-scale proof of concept, while the laboratory results were efficiently confirmed against intrusive phase-detection probe data. This paper stresses the efficacy of image-based analyses at prototype spillways. It highlights how citizen science data may help researchers better understand real-world air-water flow dynamics, and it offers a framework for a small collection of long-missing prototype data.
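
A minimal sketch of the image-based velocimetry idea, using OpenCV's Farneback dense optical flow on two synthetic frames; the frame rate and metres-per-pixel scale are assumptions that real footage would supply, and the approach stands in for whatever flow algorithm the study actually used.

```python
# Dense optical flow between two frames, converted to a surface velocity field.
import cv2
import numpy as np

# Two synthetic greyscale frames: a bright patch shifted 3 px between frames,
# standing in for consecutive frames of spillway footage.
prev = np.zeros((120, 160), dtype=np.uint8)
prev[40:80, 40:80] = 255
curr = np.roll(prev, 3, axis=1)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)

# Convert pixel displacement to velocity with an assumed frame rate and scale.
fps, m_per_px = 30.0, 0.01
vel_x = flow[..., 0] * fps * m_per_px            # streamwise velocity, m/s
print("peak streamwise velocity:", round(float(vel_x.max()), 2), "m/s")  # ~0.9
```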

Keywords: remote sensing, aerated flows, large dams, proof of concept, dam spillways, air-water flows, prototype operation, inception point, optical flow, turbulence, residual energy

Procedia PDF Downloads 93
429 Multi Data Management Systems in a Cluster Randomized Trial in Poor Resource Setting: The Pneumococcal Vaccine Schedules Trial

Authors: Abdoullah Nyassi, Golam Sarwar, Sarra Baldeh, Mamadou S. K. Jallow, Bai Lamin Dondeh, Isaac Osei, Grant A. Mackenzie

Abstract:

A randomized controlled trial is the "gold standard" for evaluating the efficacy of an intervention. Large-scale, cluster-randomized trials, though, are expensive and difficult to conduct. To guarantee the validity and generalizability of findings, high-quality, dependable, and accurate data management systems are necessary; robust systems are crucial for optimizing and validating the quality, accuracy, and dependability of trial data. There is a scarcity of literature on the difficulties of data gathering in clinical trials in low-resource areas, which may raise concerns. Effective data management systems and implementation goals should be part of trial procedures, and publicizing the creative clinical data management techniques used in clinical trials should boost public confidence in study conclusions and encourage replication. This report details the development and deployment of multiple data management systems and methodologies in the ongoing pneumococcal vaccine schedules trial in rural Gambia. We implemented six different data management, synchronization, and reporting systems using Microsoft Access, REDCap, SQL, Visual Basic, Ruby, and ASP.NET. Additionally, data synchronization tools were developed to integrate data from these systems into the central server for the reporting systems. Clinical, laboratory, and field data validation systems and methodologies are the main topics of this report. Our process development efforts across all domains were driven by the complexity of research project data collected in real time, online reporting, data synchronization, and methods for cleaning and verifying data. Consequently, we effectively used multiple data management systems, demonstrating the value of creative approaches in enhancing the consistency, accuracy, and reporting of trial data in a poor-resource setting.
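
As a schematic illustration (not the trial's actual code) of the kind of check the synchronization and validation tools perform, the sketch below reconciles participant identifiers across three hypothetical source systems before data reach the central server.

```python
# Cross-system reconciliation: every participant captured in the field system
# should also exist in the lab and clinic systems. IDs are made-up examples.
field  = {"PID001", "PID002", "PID003", "PID004"}
lab    = {"PID001", "PID002", "PID004"}
clinic = {"PID001", "PID002", "PID003", "PID004"}

for name, system in [("lab", lab), ("clinic", clinic)]:
    missing = field - system
    if missing:
        print(f"{name}: {len(missing)} record(s) missing -> {sorted(missing)}")
```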

Keywords: data management, data collection, data cleaning, cluster-randomized trial

Procedia PDF Downloads 28
428 Molecular Docking Analysis of Flavonoids Reveal Potential of Eriodictyol for Breast Cancer Treatment

Authors: Nicole C. Valdez, Vincent L. Borromeo, Conrad C. Chong, Ahmad F. Mazahery

Abstract:

Breast cancer is the most prevalent cancer worldwide; the majority of cases are estrogen-receptor positive and involve two receptor proteins. The binding of estrogen to estrogen receptor alpha (ERα) promotes breast cancer growth, while its binding to estrogen receptor beta (ERβ) inhibits tumor growth. While natural products have been a promising source of chemotherapeutic agents, the challenge remains to find a bioactive compound that specifically targets cancer cells, minimizing side effects on normal cells. Flavonoids are natural products that act as phytoestrogens and induce the same response as estrogen. They are able to compete with estrogen for binding to ERα; however, they have a higher binding affinity for ERβ. Their abundance in nature and low toxicity make them potential candidates for breast cancer treatment. This study aimed to determine, through molecular docking, which particular flavonoids can specifically recognize ERβ and potentially be used for breast cancer treatment. A total of 206 flavonoids, comprising 97 isoflavones and 109 flavanones, were collected from ZINC15, while the 3D structures of ERβ and ERα were obtained from the Protein Data Bank. These flavonoid subclasses were chosen because their chemical structures make them bind more strongly to ERs. The structures of the flavonoid ligands were converted using Open Babel, while the estrogen receptor protein structures were prepared using AutoDock MGL Tools. The optimal binding site was found using BIOVIA Discovery Studio Visualizer before docking all flavonoids on both ERβ and ERα with AutoDock Vina. Genistein, a flavonoid that exerts anticancer effects by binding to ERβ, provided the baseline binding affinity. Eriodictyol and 4'',6''-Di-O-Galloylprunin both exceeded genistein's binding affinity for ERβ while falling below its binding affinity for ERα. Of the two, eriodictyol was pursued due to its antitumor properties in a lung cancer cell line and in glioma cells. It is able to arrest the cell cycle at the G2/M phase by inhibiting the mTOR/PI3K/Akt cascade and to induce apoptosis via the PI3K/Akt/NF-kB pathway. Protein pathway and gene analyses were also conducted using ChEMBL and PANTHER, which showed that eriodictyol might exert anticancer effects through the ROS1, CA7, KMO, and KDM1A genes, which are involved in cell proliferation in breast cancer, non-small cell lung cancer, and other diseases. The high binding affinity of eriodictyol for ERβ, together with its potentially affected genes and antitumor effects, therefore makes it a candidate for the development of new breast cancer treatments. Verification through in vitro experiments, such as checking the upregulation and downregulation of genes through qPCR and checking cell cycle arrest with a flow cytometry assay, is recommended.
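
The selection rule described above can be summarized computationally; the sketch below filters docking scores against the genistein baseline (stronger ERβ binding, weaker ERα binding), with all affinity values being illustrative rather than the study's results.

```python
# Filter flavonoids by docking affinity relative to genistein. Values in
# kcal/mol, more negative = stronger binding; all numbers are illustrative.
scores = {                     # (ERbeta affinity, ERalpha affinity)
    "genistein":   (-8.0, -7.5),
    "eriodictyol": (-8.6, -7.1),
    "flavanone_X": (-7.2, -8.0),   # hypothetical compound, fails the rule
}

g_beta, g_alpha = scores["genistein"]
hits = [name for name, (b, a) in scores.items()
        if name != "genistein" and b < g_beta and a > g_alpha]
print(hits)   # -> ['eriodictyol']
```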

Keywords: breast cancer, estrogen receptor, flavonoid, molecular docking

Procedia PDF Downloads 89
427 Cloud Based Supply Chain Traceability

Authors: Kedar J. Mahadeshwar

Abstract:

Concept introduction: This paper discusses how an innovative cloud-based, analytics-enabled solution could address a major industry challenge that is approaching all of us globally faster than one would think. The world of the supply chain for drugs and devices is changing rapidly today. In the US, the Drug Supply Chain Security Act (DSCSA) is a new law for tracing, verification, and serialization, phasing in starting January 1, 2015, for manufacturers, repackagers, wholesalers, and pharmacies/clinics. Similarly, we are seeing pressure build up in Europe, China, and many other countries that will require absolute end-to-end traceability of every drug and device. Companies (both manufacturers and distributors) can use this opportunity not only to be compliant but to differentiate themselves from the competition. Moreover, a country such as the UAE can lead in developing a global solution that brings innovation to this industry. Problem definition and timing: The counterfeit drug market, recognized by the FDA, causes billions of dollars in losses every year. Even in the UAE, concerns over the prevalence of counterfeit drugs, which enter through ports such as Dubai, remain significant, as per the UAE pharma and healthcare report, Q1 2015. The distribution of drugs and devices involves multiple processes and systems that do not talk to each other. Consumer confidence is at risk due to this lack of traceability, and any leading provider is at risk of losing its reputation. Globally, there is increasing pressure from governments and regulatory bodies to trace the serial numbers and lot numbers of every drug and medical device throughout the supply chain. Though many large corporations use some form of ERP (enterprise resource planning) software, such software is far from capable of tracing a lot and serial number beyond the enterprise and making this information easily available in real time. Solution: The solution discussed here involves a service provider that allows all subscribers to take advantage of its service. It allows a service provider, regardless of its physical location, to host this cloud-based traceability and analytics solution over millions of distribution transactions that capture the lots of each drug and device. The platform will capture the movement of every medical device and drug end to end, from its manufacturer to a hospital or a doctor, through a series of distributor or retail networks. The platform also provides an advanced analytics solution for intelligent online reporting. Why Dubai? The opportunity exists, given the huge investment in Dubai Healthcare City, to use technology and infrastructure to attract more FDI to provide such a service. The UAE and similar countries will face this pressure from regulators globally in the near future. More interestingly, Dubai can attract such innovators and companies to run and host such a cloud-based solution and become a global hub of traceability.

Keywords: cloud, pharmaceutical, supply chain, tracking

Procedia PDF Downloads 529
426 Gas While Drilling (GWD) Classification in the Betara Complex: An Effective Approach to Optimizing Future Gumai Reservoir Candidates

Authors: I. Gusti Agung Aditya Surya Wibawa, Andri Syafriya, Beiruny Syam

Abstract:

The Gumai Formation, which acts as the regional seal for the Talang Akar Formation, is one of the most prolific reservoirs in the South Sumatra Basin and the primary exploration target in this area. Marine conditions were established during the continuation of the transgression sequence, leading to open marine facies deposition in the Early Miocene. The lithology is dominated by marine clastic deposits in which calcareous shales, claystones, and siltstones are interbedded with the fine-grained calcareous and glauconitic sandstones targeted as the hydrocarbon reservoir. Until now, the main objective of PetroChina's exploration and production in the Betara area has been the Lower Talang Akar Formation. Successful testing of some exploration wells, which flowed gas and condensate from the Gumai Formation, opened the opportunity to optimize a new reservoir objective in the Betara area. The limitations of conventional wireline log data in the Gumai interval pose a technical challenge in terms of geological approach. The use of the Gas While Drilling indicator was initiated with the objective of determining the next Gumai reservoir candidates capable of increasing Jabung hydrocarbon discoveries. This paper describes how the Gas While Drilling indicator is processed to distinguish potential from non-potential zones by cut-off analysis. Validation, performed by correlation and comparison with well log, Drill Stem Test (DST), and Reservoir Performance Monitor (RPM) data, succeeded in observing the Gumai reservoir in the Betara Complex. After integrating all of the data, we were able to generate a Betara Complex potential map overlaid with the reservoir characterization distribution as part of a risk assessment of potential zone presence. Mud log utilization and geophysical data successfully addressed the geological challenges in this study.
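
A hedged sketch of the cut-off step, assuming a simple total-gas-versus-background rule; the 3x multiplier and the readings are illustrative placeholders, not the classification actually calibrated for the Betara Complex.

```python
# Flag potential zones where total gas rises well above the local background.
import pandas as pd

gwd = pd.DataFrame({"depth_m":   [1500, 1505, 1510, 1515, 1520],
                    "total_gas": [120, 135, 620, 580, 140]})   # gas units

background = gwd["total_gas"].median()
gwd["potential_zone"] = gwd["total_gas"] > 3 * background      # assumed cut-off
print(gwd)
```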

Keywords: Gumai, gas while drilling, classification, reservoir, potential

Procedia PDF Downloads 356
425 Impact of Varying Malting and Fermentation Durations on Specific Chemical, Functional Properties, and Microstructural Behaviour of Pearl Millet and Sorghum Flour Using Response Surface Methodology

Authors: G. Olamiti, T. K. Takalani, D. Beswa, A. I. O. Jideani

Abstract:

The study investigated the effects of malting and fermentation times on some chemical and functional properties and the microstructural behaviour of Agrigreen and Babala pearl millet cultivars and sorghum flours using response surface methodology (RSM). A Central Composite Rotatable Design (CCRD) was performed on two independent variables, malting and fermentation times (h), at intervals of 24, 48, and 72 h. The results for the dependent parameters, namely pH, titratable acidity (TTA), water absorption capacity (WAC), oil absorption capacity (OAC), bulk density (BD), dispersibility, and the microstructural behaviour of the flours studied, showed significant differences at p < 0.05 with malting and fermentation time. Babala flour exhibited the highest pH value, 4.78, at 48 h malting and 81.9 h fermentation times. Agrigreen flour showed the highest TTA value, 0.159%, at 81.94 h malting and 48 h fermentation times. WAC was also highest in malted and fermented Babala flour, at 2.37 ml g⁻¹ for 81.94 h malting and 48 h fermentation time. Sorghum flour exhibited the lowest OAC, 1.67 ml g⁻¹, at 14 h malting and 48 h fermentation times. Agrigreen flour recorded the lowest bulk density, 0.53 g ml⁻¹, for 72 h malting and 24 h fermentation time. Sorghum flour exhibited the highest dispersibility, 56.34%, after 24 h malting and 72 h fermentation time. The response surface plots showed that increased malting and fermentation times influenced the dependent parameters. The microstructures of the malted and fermented pearl millet varieties and sorghum flours showed isolated, oval, spherical, or polygonal granules with smooth surfaces. The optimal processing conditions (malting and fermentation times) were 32.24 h and 63.32 h for Agrigreen, 35.18 h and 34.58 h for Babala, and 36.75 h and 47.88 h for sorghum, with a high desirability of 1.00. Validation of the optimum malting and fermentation times against the dependent parameters agreed with the experimental values. Food processing companies can use the study's findings to improve food processing and quality.
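
A hedged sketch of the CCRD response-surface fit, assuming a full quadratic model in the two factors; the design points and the WAC values below are placeholders, not the measured data.

```python
# Second-order response surface in malting (x1) and fermentation (x2) time.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({"x1":  [24, 24, 72, 72, 48, 48, 48, 14.1, 81.9],
                   "x2":  [24, 72, 24, 72, 48, 14.1, 81.9, 48, 48],
                   "wac": [1.9, 2.1, 2.0, 2.3, 2.2, 1.8, 2.37, 1.9, 2.3]})

model = smf.ols("wac ~ x1 + x2 + I(x1*x2) + I(x1**2) + I(x2**2)", data=df).fit()
print(model.params)   # quadratic coefficients used to locate the optimum region
```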

Keywords: pearl millet, malting, fermentation, microstructural behaviour

Procedia PDF Downloads 74
424 Mathematical Modelling of Drying Kinetics of Cantaloupe in a Solar Assisted Dryer

Authors: Melike Sultan Karasu Asnaz, Ayse Ozdogan Dolcek

Abstract:

Crop drying, which aims to reduce the moisture content to a certain level, is a method used to extend shelf life and prevent spoilage. One of the oldest food preservation techniques is open sun or shade drying. Even though this technique is the most affordable of all drying methods, it has drawbacks such as contamination by insects, environmental pollution, windborne dust, and direct exposure to weather conditions such as wind, rain, and hail. Solar dryers that provide a hygienic and controllable environment to preserve food and extend its shelf life have therefore been developed and used to dry agricultural products. Thus, foods can be dried quickly without being affected by weather variables, and quality products can be obtained. This research is mainly devoted to investigating the modelling of the drying kinetics of cantaloupe in a forced-convection solar dryer. Mathematical models of the drying process must be defined to simulate the drying behavior of the foodstuff, which will greatly contribute to the development of solar dryer designs. Thus, drying experiments were conducted and replicated five times, and data such as temperature, relative humidity, solar irradiation, drying air speed, and weight were continuously monitored and recorded. The moisture content of sliced and pretreated cantaloupe was converted into the moisture ratio and fitted against drying time to construct drying curves. Then, 10 quasi-theoretical and empirical drying models were fitted to find the best drying curve equation according to the Levenberg-Marquardt nonlinear optimization method. The best mathematical drying model was selected according to the highest coefficient of determination (R²) and the lowest mean square of the deviations (χ²) and root mean square error (RMSE) criteria. The best-fitted model was then used to simulate thin-layer solar drying of cantaloupe, and the simulation results were compared with the experimental data for validation purposes.
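
As one concrete instance of the fitting procedure described above, the sketch below fits the Page thin-layer model, one of the commonly used empirical drying models, with scipy's Levenberg-Marquardt-based curve_fit on a synthetic drying curve, and reports R², RMSE, and χ²; the data are illustrative, not the cantaloupe measurements.

```python
# Fit the Page model MR = exp(-k t^n) to a (synthetic) drying curve.
import numpy as np
from scipy.optimize import curve_fit

def page(t, k, n):
    return np.exp(-k * t**n)

t = np.linspace(0, 10, 20)                                  # drying time, h
mr = page(t, 0.25, 1.1) + 0.01 * np.random.randn(t.size)    # moisture ratio

(k, n), _ = curve_fit(page, t, mr, p0=[0.1, 1.0])           # LM by default

pred = page(t, k, n)
ss_res = np.sum((mr - pred)**2)
r2   = 1 - ss_res / np.sum((mr - mr.mean())**2)
rmse = np.sqrt(ss_res / t.size)
chi2 = ss_res / (t.size - 2)                                # dof = N - 2 params
print(f"k={k:.3f}, n={n:.3f}, R2={r2:.4f}, RMSE={rmse:.4f}, chi2={chi2:.2e}")
```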

Keywords: solar dryer, mathematical modelling, drying kinetics, cantaloupe drying

Procedia PDF Downloads 127
423 Revalidation and Harmonization of Existing IFCC-Standardized Hepatic, Cardiac, and Thyroid Function Tests by Precision Optimization and External Quality Assurance Programs

Authors: Junaid Mahmood Alam

Abstract:

Revalidating and harmonizing clinical chemistry analytical principles, and optimizing methods through quality control programs and assessments, is the preeminent means of attaining optimal outcomes in clinical laboratory services. The present study reports revalidation of our existing IFCC-standardized analytical methods, particularly hepatic, cardiac, and thyroid function tests, through optimized precision analyses, external and internal quality assessments, and regression analysis. Parametric components of hepatic (total bilirubin, ALT, γGT, ALP), cardiac (LDH, AST, Trop I), and thyroid/pituitary (T3, T4, TSH, FT3, FT4) function tests were used to validate the analytical techniques on automated chemistry and immunology analyzers, namely the Hitachi 912, Cobas 6000 e601, Cobas c501, and Cobas e411, using UV kinetic and colorimetric dry chemistry principles and electrochemiluminescence immunoassay (ECLi) techniques. Validation and revalidation were completed by evaluating precision-control data from the various instruments, plotted against each other, and assessing the regression coefficient R². Results showed that revalidation and optimization of the respective parameters, accredited through CAP, CLSI, and NEQAPP assessments, yielded 99.0% to 99.8% optimization of the methodology and instruments used for the analyses. Regression analysis of total bilirubin (BilT) gave R² = 0.996, whereas ALT, ALP, γGT, LDH, AST, Trop I, T3, T4, TSH, FT3, and FT4 exhibited R² values of 0.998, 0.997, 0.993, 0.967, 0.970, 0.980, 0.976, 0.996, 0.997, 0.997, and 0.990, respectively. This confirmed marked harmonization of the analytical methods and instrumentation, revalidating the optimized precision standardization in line with IFCC-recommended guidelines. It is concluded that revalidating and harmonizing existing or newly introduced services should be practiced by all clinical laboratories, especially those associated with tertiary care hospitals. This will ensure the delivery of standardized, proficiency-tested, optimized services for prompt and better patient care, guaranteeing maximum patient confidence.
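
A minimal sketch of the revalidation step described above: paired quality-control results for one analyte, measured on two analyzers, are regressed against each other and the R² is reported. The QC values are illustrative placeholders, not the study's precision-control data.

```python
# Sketch: revalidation-style comparison of one analyte measured on two
# analyzers, regressing paired quality-control results against each
# other and reporting R^2. The QC values are illustrative placeholders.
import numpy as np

# Paired QC results, e.g. ALT (U/L) on two instruments
inst_a = np.array([31.0, 45.2, 88.5, 120.1, 160.3, 210.7, 255.0])
inst_b = np.array([30.4, 46.1, 87.2, 121.8, 158.9, 212.5, 252.1])

slope, intercept = np.polyfit(inst_a, inst_b, 1)
pred = slope * inst_a + intercept
r2 = 1 - np.sum((inst_b - pred)**2) / np.sum((inst_b - inst_b.mean())**2)

print("y = %.3f x + %.2f, R2 = %.4f" % (slope, intercept, r2))
# An R2 close to 1 (the study reports 0.967-0.998) indicates the two
# methods are harmonized; a low R2 would flag a calibration drift.
```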

Keywords: revalidation, standardized, IFCC, CAP, harmonized

Procedia PDF Downloads 269
422 Minimization of the Abrasion Effect of Fiber Reinforced Polymer Matrix on Stainless Steel Injection Nozzle through the Application of Laser Hardening Technique

Authors: Amessalu Atenafu Gelaw, Nele Rath

Abstract:

Currently, laser hardening is becoming one of the most efficient and effective hardening techniques due to its significant advantages: a precisely controlled heat source, no need for a cooling medium thanks to self-quenching, little distortion owing to the localized heat input, environmentally friendly operation, and short processing times. Today, a variety of injection machines are used in the plastics, textile, electrical, and mechanical industries. With the rapid growth of composite technology, fiber-reinforced polymer matrices have become a common material choice in these industries. Owing to the abrasive action of fiber-reinforced polymer matrix composites on injection components, many parts wear out well before the end of their design life. Niko, a company specializing in injection-molded products, suffers from the short lifetime of the injection nozzles of its molds due to the use of fiber-reinforced, and therefore more abrasive, polymer matrices. To prolong the lifetime of these molds, hardening the susceptible components, such as the injection nozzles, was a must. In this paper, the laser hardening process is investigated on Unimax, a type of stainless steel. The investigation to obtain optimal results for the nozzle case was performed in three steps. First, the optimal parameters for the maximum possible hardenability of the investigated nozzle material were determined on a flat sample, using experimental testing as well as thermal simulation. Next, the effect of surface inclination on the maximum temperature was analyzed, again by experimental testing and validated through simulation. Finally, the data were combined and applied to the nozzle. This paper describes possible strategies and methods for laser hardening the nozzle to reach a hardness of at least 720 HV for the material investigated. It has been proven that the nozzle can be laser hardened to over 900 HV, with even higher values possible when more precise positioning of the laser can be assured.
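
As a rough illustration of the second step, the sketch below estimates how surface inclination reduces the absorbed laser intensity through the projected-area (cosine) effect, which is one reason the maximum temperature drops on inclined surfaces. The power, spot radius, and absorptivity are assumed values, not the study's process parameters.

```python
# Sketch: first-order estimate of how surface inclination reduces the
# absorbed laser intensity for a fixed beam. Power, spot size, and
# absorptivity are assumed values, not the study's parameters.
import numpy as np

P = 3000.0   # laser power, W (assumed)
r = 2.5e-3   # beam radius on the surface, m (assumed)
A = 0.35     # absorptivity of the steel at the laser wavelength (assumed)

for tilt_deg in (0, 15, 30, 45):
    tilt = np.radians(tilt_deg)
    # Projected-area effect: the spot elongates by 1/cos(tilt), so the
    # absorbed intensity scales with cos(tilt) for a fixed beam.
    intensity = A * P / (np.pi * r**2) * np.cos(tilt)
    print("tilt %2d deg -> absorbed intensity %.2e W/m^2"
          % (tilt_deg, intensity))
```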

Keywords: absorptivity, fiber reinforced matrix, laser hardening, Nd:YAG laser

Procedia PDF Downloads 156
421 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach

Authors: James Ladzekpo

Abstract:

Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We implemented recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and Multilayer Perceptron Neural Network, and evaluated their performance. Findings: The Gradient Boosting Classifier performed best, achieving a median recall of 92.17%, a median area under the receiver operating characteristic curve (AUC) of 68%, and median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction. We recommend that future investigations incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
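
A minimal sketch of the feature-selection step named in the methodology: recursive feature elimination with cross-validation (RFECV) wrapped around a linear-kernel SVM, followed by a gradient-boosting fit on the retained features. The data are synthetic stand-ins for the epigenetic feature matrix, and the hyperparameters are illustrative.

```python
# Sketch: RFECV around a linear-kernel SVM for feature selection, then
# a gradient-boosting classifier on the selected features. Synthetic
# data stand in for the epigenetic feature matrix.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=120,
                           n_informative=25, random_state=0)

# A linear SVM exposes coef_ rankings for recursive elimination
selector = RFECV(SVC(kernel="linear"), step=5, cv=5, scoring="recall")
X_sel = selector.fit_transform(X, y)
print("features kept:", selector.n_features_)

gbc = GradientBoostingClassifier(random_state=0)
print("CV recall:", cross_val_score(gbc, X_sel, y, cv=5,
                                    scoring="recall").mean())
```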

Keywords: diabetes, machine learning, prediction, biomarkers

Procedia PDF Downloads 56
420 Exploring the Design of Prospective Human Immunodeficiency Virus Type 1 Reverse Transcriptase Inhibitors through a Comprehensive Approach of Quantitative Structure Activity Relationship Study, Molecular Docking, and Molecular Dynamics Simulations

Authors: Mouna Baassi, Mohamed Moussaoui, Sanchaita Rajkhowa, Hatim Soufi, Said Belaaouad

Abstract:

The objective of this paper is to address the challenging task of targeting Human Immunodeficiency Virus type 1 Reverse Transcriptase (HIV-1 RT) in the treatment of AIDS. Reverse Transcriptase inhibitors (RTIs) have limitations due to the development of Reverse Transcriptase mutations that lead to treatment resistance. In this study, a combination of statistical analysis and bioinformatics tools was adopted to develop a mathematical model relating the structure of compounds to their inhibitory activities against HIV-1 Reverse Transcriptase. Our approach was based on a series of compounds recognized for their HIV-1 RT enzymatic inhibitory activities. These compounds were designed in silico, and their descriptors were computed using multiple tools. The most statistically promising model was chosen, and its applicability domain was ascertained. Furthermore, compounds exhibiting biological activity comparable to existing drugs were identified as potential inhibitors of HIV-1 RT. The compounds were evaluated on their absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties and their adherence to Lipinski's rule. Molecular docking techniques were employed to examine the interaction between the Reverse Transcriptase (wild type and mutant type) and the ligands, including a known drug available on the market. Molecular dynamics simulations were also conducted to assess the stability of the RT-ligand complexes. Our results reveal several of the new compounds as promising candidates for effectively inhibiting HIV-1 Reverse Transcriptase, matching the potency of the established drug, although further experimental validation is required. Beyond its immediate results, this study provides a methodological foundation for future endeavors to discover and design new inhibitors targeting HIV-1 Reverse Transcriptase.
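
The abstract does not specify the final model form or descriptor set, so the sketch below uses an ordinary multiple linear regression, a common QSAR baseline, with a leverage-based applicability-domain check (warning leverage h* = 3p/n). All descriptor values, activities, and the candidate compound are illustrative placeholders.

```python
# Sketch: a multiple linear regression QSAR model relating computed
# descriptors to pIC50, with a leverage-based applicability-domain
# check. Descriptors and activities are illustrative placeholders,
# not the study's dataset or model.
import numpy as np

# Rows: training compounds; columns: descriptors (e.g. logP, MW/100, nRotB)
X = np.array([[2.1, 3.2, 5.0], [3.4, 3.8, 7.0], [1.8, 2.9, 4.0],
              [2.9, 3.5, 6.0], [3.8, 4.1, 8.0], [2.5, 3.1, 5.0],
              [1.5, 2.7, 3.0], [3.1, 3.9, 7.0]])
y = np.array([6.2, 7.1, 5.8, 6.8, 7.4, 6.4, 5.5, 7.0])   # pIC50

Xd = np.column_stack([np.ones(len(X)), X])                # add intercept
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)

XtX_inv = np.linalg.inv(Xd.T @ Xd)
h_star = 3.0 * Xd.shape[1] / Xd.shape[0]                  # warning leverage

new = np.array([1.0, 2.8, 3.4, 6.0])                      # candidate compound
h_new = new @ XtX_inv @ new                               # its leverage
print("predicted pIC50: %.2f" % (new @ beta))
print("leverage %.2f vs h* %.2f -> %s" %
      (h_new, h_star, "inside domain" if h_new < h_star else "outside domain"))
```

Predictions are trusted only for candidates whose leverage stays below h*; compounds outside the domain are flagged rather than predicted.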

Keywords: QSAR, ADMET properties, molecular docking, molecular dynamics simulation, reverse transcriptase inhibitors, HIV type 1

Procedia PDF Downloads 93
419 A Gold-Based Nanoformulation for Delivery of the CRISPR/Cas9 Ribonucleoprotein for Genome Editing

Authors: Soultana Konstantinidou, Tiziana Schmidt, Elena Landi, Alessandro De Carli, Giovanni Maltinti, Darius Witt, Alicja Dziadosz, Agnieszka Lindstaedt, Michele Lai, Mauro Pistello, Valentina Cappello, Luciana Dente, Chiara Gabellini, Piotr Barski, Vittoria Raffa

Abstract:

CRISPR/Cas9 technology has gained the interest of researchers in the field of biotechnology for genome editing. Since its discovery as a microbial adaptive immune defense, this system has been widely adopted and is acknowledged for having a variety of applications. However, critical barriers related to safety and delivery persist. Here, we propose a new concept of genome engineering based on a nano-formulation of Cas9. The Cas9 enzyme was conjugated to a gold nanoparticle (AuNP-Cas9). The AuNP-Cas9 maintained its cleavage efficiency in vitro to the same extent as the ribonucleoprotein containing the non-conjugated Cas9 enzyme, and showed high gene-editing efficiency in vivo in zebrafish embryos. Since CRISPR/Cas9 technology is extensively used in cancer research, melanoma was selected as a validation target. Cell studies were performed in A375 human melanoma cells. The particles per se had no impact on cell metabolism or proliferation. Intriguingly, AuNP-Cas9 was internalized spontaneously by cells and localized as single particles in the cytoplasm and organelles. More importantly, AuNP-Cas9 showed a strong nuclear localization signal. AuNP-Cas9, overcoming the delivery difficulties of Cas9, could be used in cell biology and localization studies. Taking advantage of the plasmonic properties of gold nanoparticles, this technology could potentially serve as a bio-tool combining gene editing and photothermal therapy in cancer cells. Further work will focus on the intracellular interactions of the nano-formulation and the characterization of its optical properties.

Keywords: CRISPR/Cas9, gene editing, gold nanoparticles, nanotechnology

Procedia PDF Downloads 101
418 A Tool Tuning Approximation Method: Exploration of the System Dynamics and Its Impact on Milling Stability When Amending Tool Stickout

Authors: Nikolai Bertelsen, Robert A. Alphinas, Klaus B. Orskov

Abstract:

The shortest possible tool stickout has been the traditional go-to approach, with expectations of increased stability and productivity. However, experimental studies at the Danish Advanced Manufacturing Research Center (DAMRC) have proven that for some tool stickout lengths, there exist local productivity optimums when utilizing Stability Lobe Diagrams for chatter avoidance. This contradicts traditional logic and the best practices taught to machinists. This paper explores the vibrational characteristics and behaviour of a milling system over the tool stickout length. The experimental investigation was conducted by tap-testing multiple endmills while varying the tool stickout length. For each length, the modal parameters were recorded and mapped to visualize behavioural tendencies. Furthermore, the paper explores the correlation between the modal parameters and the Stability Lobe Diagram, to outline the influence and importance of each parameter in a multi-mode system. The insights are conceptualized into a tool tuning approximation solution. It builds on an almost linear change in the natural frequencies when amending the tool stickout, which shifts the positions of the chatter-free stability lobes. Furthermore, if the natural frequencies of two modes come too close together, the dynamic absorber effect sets in. This phenomenon increases the critical stable depth of cut, allowing for a more stable milling process. Validation tests of the tool tuning approximation solution have shown varying success. This outlines the need for further research on the boundary conditions of the solution, to understand under which conditions it is applicable. Once these conditions are defined, the conceptualized tool tuning approximation solution offers an approach for quickly and roughly approximating tool stickouts, with the potential for increased stiffness and optimized productivity.
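
To make the link between modal parameters and the Stability Lobe Diagram concrete, the sketch below computes the lobes for a single dominant mode from the classic regenerative-chatter condition; shifting the natural frequency, as a stickout change does, moves the stable pockets along the spindle-speed axis. All modal and cutting parameters are assumed values, not DAMRC's measurements, and the single-mode model ignores the multi-mode interactions discussed above.

```python
# Sketch: single-mode regenerative-chatter stability lobes computed from
# tap-test modal parameters. All modal and cutting parameters are
# assumed values chosen only to illustrate the calculation.
import numpy as np

fn, zeta, k = 800.0, 0.03, 2.0e7   # natural freq (Hz), damping ratio, stiffness (N/m)
Ks, Nt = 6.0e8, 4                  # specific cutting force (N/m^2), number of teeth

wn = 2 * np.pi * fn
wc = np.linspace(wn * 1.001, wn * 1.6, 2000)    # chatter freqs where Re(G) < 0
r = wc / wn
G = 1.0 / (k * (1 - r**2 + 2j * zeta * r))      # single-DOF receptance FRF

a_lim = -1.0 / (2 * Ks * G.real)                # critical stable depth of cut (m)
phi = np.angle(G)                               # FRF phase, in (-pi, -pi/2) here
i = np.argmin(a_lim)                            # most chatter-prone frequency
for lobe in (2, 3, 4):
    # Regeneration phase condition: wc * T = 2*pi*lobe + 2*phi - pi
    T = (2 * np.pi * lobe + 2 * phi[i] - np.pi) / wc[i]   # tooth period (s)
    n = 60.0 / (Nt * T)                                   # spindle speed (rpm)
    print("lobe %d: minimum stable depth %.2f mm near %.0f rpm"
          % (lobe, a_lim[i] * 1e3, n))
```

Re-running the sketch with a shifted fn shows the lobes sliding to new spindle speeds, which is the effect the tool tuning approximation exploits.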

Keywords: milling, modal parameters, stability lobes, tap testing, tool tuning

Procedia PDF Downloads 157
417 Low-Cost Foot Plantar Shoes for Gait Analysis

Authors: Zulkifli Ahmad, Mohd Razlan Azizan, Nasrul Hadi Johari

Abstract:

This paper presents the development and testing of a wearable sensor system for gait analysis. For validation, plantar surface measurements with a force plate were used as a reference. In conventional gait analysis, force-plate studies are generally limited to barefoot measurement of single steps and do not allow analysis of repeated steps during normal walking and running. Such measurements capture only the ground reaction force and do not represent the plantar pressures experienced daily inside the shoe insole. Force-plate measurement is usually limited to a few steps, is performed indoors, and does not easily provide coupled information from both feet during walking. To measure pressure over a large number of steps and obtain the pressure in each part of the insole, sensors can instead be placed within the insole itself. This approach yields the plantar pressures of a shoe-wearing subject while standing, walking, or running. Placing pressure sensors in the insole provides location-specific information, so the choice of sensor placement determines which critical regions under the insole are captured. In this wearable shoe sensor project, the device consists of left and right shoe insoles, each with ten force-sensitive resistors (FSRs). An Arduino Mega microcontroller reads the analog inputs from the FSRs. The readings are transmitted via Bluetooth, providing the force data in real time on a smartphone. BlueTerm, an Android application, was used as the interface to read the FSR values from the shoe-wearing subject. The subjects were two healthy men of different ages and weights, tested while standing, walking (1.5 m/s), jogging (5 m/s), and running (9 m/s) on a treadmill. The data obtained are saved on the Android device for analysis and comparison graphs.
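
A minimal sketch of the host-side logging such a device implies: reading the Bluetooth serial stream and converting 10-bit ADC readings from an FSR voltage divider into approximate forces. The port name, the 10-value comma-separated packet format, and the calibration constants are assumptions for illustration; the actual firmware output and BlueTerm log format may differ.

```python
# Sketch: host-side logging of a ten-FSR insole stream. The serial port
# name, packet format, and FSR calibration are assumptions for
# illustration, not the project's actual firmware protocol.
import csv
import serial  # pyserial

PORT, BAUD = "/dev/rfcomm0", 9600      # Bluetooth serial link (assumed)

def adc_to_force(adc, vcc=5.0, r_fixed=10e3):
    """Convert a 10-bit ADC reading from an FSR voltage divider into an
    approximate force, using a generic FSR conductance-vs-force model."""
    v = adc * vcc / 1023.0
    if v <= 0.01:
        return 0.0
    r_fsr = r_fixed * (vcc - v) / v    # FSR resistance, ohms
    return 1.0e6 / r_fsr               # rough linear conductance model (assumed)

with serial.Serial(PORT, BAUD, timeout=1) as link, \
        open("gait_log.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["fsr%d_N" % i for i in range(1, 11)])
    for _ in range(1000):              # ~one recording session
        line = link.readline().decode("ascii", errors="ignore").strip()
        parts = line.split(",")
        if len(parts) == 10:           # one packet = ten FSR readings
            writer.writerow(["%.2f" % adc_to_force(int(p)) for p in parts])
```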

Keywords: gait analysis, plantar pressure, force plate, wearable sensor

Procedia PDF Downloads 454