Search results for: analytical validation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3655

475 Two-Dimensional Dynamic Motion Simulations of an F1 Rear Wing-Flap

Authors: Chaitanya H. Acharya, Pavan Kumar P., Gopalakrishna Narayana

Abstract:

In the realm of aerodynamics, numerous vehicles incorporate moving components to enhance their performance. For instance, airliners deploy hydraulically operated flaps and ailerons during take-off and landing, while Formula 1 racing cars utilize hydraulic tubes and actuators for various components, including the Drag Reduction System (DRS). The DRS, consisting of a rear wing and adjustable flaps, plays a crucial role in overtaking manoeuvres. The DRS has two positions: the default position with the flaps down, providing high downforce, and the lifted position, which reduces drag, allowing for increased speed and aiding in overtaking. Swift deployment of the DRS during races is essential for overtaking competitors. The fluid flow over the rear wing flap becomes intricate during deployment, involving flow reversal and operational changes, leading to unsteady flow physics that significantly influence aerodynamic characteristics. Understanding the drag and downforce during DRS deployment is crucial for determining race outcomes. While experiments can yield accurate aerodynamic data, they can be expensive and challenging to conduct across varying speeds. Computational Fluid Dynamics (CFD) emerges as a cost-effective solution to predict drag and downforce across a range of speeds, especially with the rapid deployment of the DRS. This study employs the finite volume-based solver Ansys Fluent, incorporating dynamic mesh motions and a turbulence model to capture the complex flow phenomena associated with the moving rear wing flap. A dedicated section of the rear wing-flap is considered in the present simulations, and the aerodynamics of these sections closely resemble S1223 aerofoils. Before delving into the simulations of the rear wing-flap aerofoil, the numerical results were validated against experimental data from an NLR flap aerofoil case, encompassing different flap angles at two distinct angles of attack. For a given angle of attack, lift and drag are observed to increase with increasing flap angle. The simulation methodology for the rear wing-flap aerofoil case involves specific time durations before lifting the flap. During this period, drag and downforce values are determined as 330 N and 1800 N, respectively. Following the flap lift, a noteworthy reduction in drag to 55% and a decrease in downforce to 17% are observed. This understanding is critical for making instantaneous decisions regarding the deployment of the Drag Reduction System (DRS) at specific speeds, thereby influencing the overall performance of the Formula 1 racing car. Hence, this work emphasizes the utilization of dynamic mesh motion methodology to predict the aerodynamic characteristics during the deployment of the DRS in a Formula 1 racing car.
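
The reported figures can be turned into rough post-deployment estimates. Below is a minimal sketch, assuming the quoted percentages are fractions of the flap-down values (an interpretation of the abstract, not a statement from it):

```python
# Illustrative only: interprets the abstract's reported percentages as fractions of the
# pre-deployment (flap-down) values, which is an assumption, not a statement from the paper.
drag_flap_down_N = 330.0        # reported drag before the flap is lifted
downforce_flap_down_N = 1800.0  # reported downforce before the flap is lifted

drag_after_frac = 0.55          # "reduction in drag to 55%"
downforce_after_frac = 0.17     # "decrease in downforce to 17%"

drag_flap_up_N = drag_flap_down_N * drag_after_frac
downforce_flap_up_N = downforce_flap_down_N * downforce_after_frac

print(f"Estimated drag with DRS open:      {drag_flap_up_N:.0f} N")
print(f"Estimated downforce with DRS open: {downforce_flap_up_N:.0f} N")
```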

Keywords: DRS, CFD, drag, downforce, dynamic mesh motion

Procedia PDF Downloads 94
474 Building Student Empowerment through Live Commercial Projects: A Reflective Account of Participants

Authors: Nilanthi Ratnayake, Wen-Ling Liu

Abstract:

Prior research indicates an increasing gap between the skills and capabilities of graduates in the contemporary workplace across the globe. The challenge of addressing this issue primarily lies in the hands of higher education institutes/universities. In particular, surveys of UK employers and retailers found that soft skills including communication, numeracy, teamwork, confidence, analytical ability, digital/IT skills, business sense, language, and social skills are highly valued by graduate employers, and in pursuit of these skills, various assessed and non-assessed learning exercises have already been embedded into the university curriculum. To this end, this research study aims to explore the reflections of postgraduate students participating in a live commercial project (i.e. designing an advertising campaign for open days, summer school etc.) implemented with the intention of offering a transformative experience. Qualitative research methodology was followed in this study, collecting data from three types of target audiences: students, academics, and employers, via a series of personal interviews and focus group discussions. Recorded data were transcribed, entered into NVIVO, and analysed using meaning condensation and content analysis. Students reported that the project had a very positive impact on their self-efficacy, especially in relation to soft skills and confidence in seeking employment opportunities. In addition, the project reduced cultural barriers to general communication for international students. Academic staff and potential employers who attended the presentation day expressed their appreciation of the lifelong experience offered to students, and indeed believed that these types of projects contribute significantly to enhancing the skills and capabilities of students to cater to the demands of employers. In essence, key findings demonstrate that the integration of knowledge-based skills into a live commercial project helps individuals make the transition from education to employment, in terms of skills, abilities, and work behaviours, more effectively than some other activities/assessments that are currently in place in higher education institutions/universities.

Keywords: soft skills, live commercial project, higher education, student participation

Procedia PDF Downloads 360
473 Integrating Natural Language Processing (NLP) and Machine Learning in Lung Cancer Diagnosis

Authors: Mehrnaz Mostafavi

Abstract:

The assessment and categorization of incidental lung nodules present a considerable challenge in healthcare, often necessitating resource-intensive multiple computed tomography (CT) scans for growth confirmation. This research addresses this issue by introducing a distinct computational approach leveraging radiomics and deep-learning methods. However, understanding local services is essential before implementing these advancements. With diverse tracking methods in place, there is a need for efficient and accurate identification approaches, especially in the context of managing lung nodules alongside pre-existing cancer scenarios. This study explores the integration of text-based algorithms in medical data curation, indicating their efficacy in conjunction with machine learning and deep-learning models for identifying lung nodules. Combining medical images with text data has demonstrated superior data retrieval compared to using each modality independently. While deep learning and text analysis show potential in detecting previously missed nodules, challenges persist, such as increased false positives. The presented research introduces a Structured-Query-Language (SQL) algorithm designed for identifying pulmonary nodules in a tertiary cancer center, externally validated at another hospital. Leveraging natural language processing (NLP) and machine learning, the algorithm categorizes lung nodule reports based on sentence features, aiming to facilitate research and assess clinical pathways. The hypothesis posits that the algorithm can accurately identify lung nodule CT scans and predict concerning nodule features using machine-learning classifiers. Through a retrospective observational study spanning a decade, CT scan reports were collected, and an algorithm was developed to extract and classify data. Results underscore the complexity of lung nodule cohorts in cancer centers, emphasizing the importance of careful evaluation before assuming a metastatic origin. The SQL and NLP algorithms demonstrated high accuracy in identifying lung nodule sentences, indicating potential for local service evaluation and research dataset creation. Machine-learning models exhibited strong accuracy in predicting concerning changes in lung nodule scan reports. While limitations include variability in disease group attribution, the potential for correlation rather than causality in clinical findings, and the need for further external validation, the algorithm's accuracy and potential to support clinical decision-making and healthcare automation represent a significant stride in lung nodule management and research.
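
To make the sentence-classification step more concrete, the following is a minimal sketch (not the authors' SQL/NLP pipeline) of how report sentences could be labeled as nodule-related or not with a simple text classifier; the example sentences are invented:

```python
# Minimal sketch of sentence-level labeling of radiology report text as
# lung-nodule-related or not, using scikit-learn. Not the authors' pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, hand-written example sentences; a real study would extract these
# from CT report text stored in a clinical database (e.g., via SQL queries).
sentences = [
    "A 6 mm nodule is noted in the right upper lobe.",
    "Stable 4 mm pulmonary nodule, no interval growth.",
    "No focal consolidation or pleural effusion.",
    "The heart size is within normal limits.",
]
labels = [1, 1, 0, 0]  # 1 = nodule-related sentence, 0 = not nodule-related

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(sentences, labels)

# Classify a new (also invented) sentence
print(model.predict(["Subcentimeter nodule seen in the left lower lobe."]))
```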

Keywords: lung cancer diagnosis, structured-query-language (SQL), natural language processing (NLP), machine learning, CT scans

Procedia PDF Downloads 101
472 Improving Collective Health and Social Care through a Better Consideration of Sex and Gender: Analytical Report by the French National Authority for Health

Authors: Thomas Suarez, Anne-Sophie Grenouilleau, Erwan Autin, Alexandre Biosse-Duplan, Emmanuelle Blondet, Laurence Chazalette, Marie Coniel, Agnes Dessaigne, Sylvie Lascols, Andrea Lasserre, Candice Legris, Pierre Liot, Aline Metais, Karine Petitprez, Christophe Varlet, Christian Saout

Abstract:

Background: The role of biological sex and gender identity (whether assigned or chosen) as health determinants is far from a recent discovery: several reports have stressed how being a woman or a man could affect health on various scales. However, taking them into consideration beyond stereotypes and rigid binary assumptions still seems to be a work in progress. Method: The report is a synthesis on a variety of specific topics, each of which was studied by a specialist from the French National Authority for Health (HAS), through an analysis of the existing literature on both the healthcare policy construction process and its instruments (norms, data analysis, clinical trials, guidelines, and professional practices). This work also involved a policy analysis of recent French public health laws and a retrospective study of guidelines with a gender mainstreaming approach. Results: The analysis showed that though sex and gender are well-known determinants of health, their consideration by both public policy and health operators is often incomplete, as it does not incorporate how sex and gender interact with each other or with other factors. As a result, the health and social care systems and their professionals tend to reproduce some stereotypical and inadequate habits. Though the available data often allow sex and gender to be taken into consideration, such data are often underused in practice guidelines and policy formulation. Another consequence is a lack of inclusiveness towards transgender or intersex persons. Conclusions: This report first urges raising awareness among all actors of health, in its broadest definition, that sex and gender matter beyond first-look conclusions. It makes a series of recommendations in order to reshape policy construction in the health sector on the one hand, and to design public health instruments that are more inclusive regarding sex and gender on the other hand. The HAS finally committed to integrating sex and gender concerns into its working methods, to be a driving force in the spread of these concerns.

Keywords: biological sex, determinants of health, gender, healthcare policy instruments, social accompaniment

Procedia PDF Downloads 128
471 Music Genre Classification Based on Non-Negative Matrix Factorization Features

Authors: Soyon Kim, Edward Kim

Abstract:

In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become more important. Despite the subjectivity and controversy over the definition of music genres across different nations and cultures, automatic genre classification systems that facilitate the process of music categorization have been developed. Manual genre selection by music producers is being provided as statistical data for designing automatic genre classification systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured based on timbre features such as the mel-frequency cepstral coefficient (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. In addition to these conventional basic long-term feature vectors, NMF-based feature vectors are proposed to be used together with them for genre classification. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used. However, for NMF-BFV, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification. In the test stage, using the set of pre-trained NMF basis vectors, the genre classification system extracted the NMF weighting values of each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database was used for training and testing. It is composed of 10 genres and 100 songs for each genre. To increase the reliability of the experiments, 10-fold cross-validation was used. For a given input song, an extracted NMF-LSM feature vector was composed of 10 weighting values that corresponded to the classification probabilities for the 10 genres. An NMF-BFV feature vector also had a dimensionality of 10. Combined with the basic long-term features such as the statistical features and modulation spectrum features, the NMF features provided increased accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, but the basic features with NMF-LSM and NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, whereas NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features, NMF-LSM, and NMF-BFV with an SVM using a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
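
The NMF-feature idea can be illustrated with a short sketch: learn a small NMF basis per genre, encode every song against all bases, and classify the concatenated weights with an RBF-kernel SVM. The data below are synthetic stand-ins for the timbre/modulation features, and the pipeline is a simplification of the one described above (for brevity, the per-genre bases are fitted on all songs of that genre rather than only on a training split, as the paper does):

```python
# Sketch of per-genre NMF features followed by an RBF-kernel SVM (illustrative only).
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_genres, songs_per_genre, n_bins = 4, 25, 60
X = rng.random((n_genres * songs_per_genre, n_bins))      # non-negative "spectra"
y = np.repeat(np.arange(n_genres), songs_per_genre)       # genre labels

# One small NMF basis per genre (fitted here on all of that genre's songs for brevity).
bases = [NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0).fit(X[y == g])
         for g in range(n_genres)]

# Encode every song against every genre basis; concatenate the weights as the NMF feature.
nmf_features = np.hstack([b.transform(X) for b in bases])

clf = SVC(kernel="rbf", gamma="scale")
print("10-fold CV accuracy on toy data:",
      cross_val_score(clf, nmf_features, y, cv=10).mean())
```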

Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)

Procedia PDF Downloads 303
470 Place-Based Practice: A New Zealand Rural Nursing Study

Authors: Jean Ross

Abstract:

Rural nursing is not an identified professional identity in the UK, unlike in the USA, Canada, and Australia, which recognize rural nursing as a specialty scope of practice. In New Zealand, rural nursing is an underrepresented aspect of nursing practice; it is misunderstood and does not fit easily within the wider nursing profession and the policies governing practice. This study, situated within the New Zealand context, adds to the international studies aligned with rural nursing practice. The study addresses a gap in the literature by striving to identify and strengthen awareness of rural nurses' changing and adapting identity, to increase their ability to understand and articulate that identity, and to provide an opportunity to appreciate their contribution to the delivery of rural health care. In addition, this study adds to the growing global rural nursing knowledge and theoretical base. This research is a continuation of the author’s academic involvement and ongoing relationships with the rural nursing sector, national policy analysts, and health care planners since the 1990s. These relationships have led to an awareness that, despite rural nurses’ efforts to explain the particular nuances which make up their practice, there has been little recognition by the profession to establish rural nursing as a specialty. The research explored why nurses who practiced in the rural Otago region of New Zealand between the 1990s and early 2000s moved away from their traditional identity as a district, practice, or public health nurse and looked towards a more appropriate identity which reflected their emerging practice. This qualitative research, situated within the interpretive paradigm, embeds this retrospective study within the discipline of nursing and engages with the concepts of place and governmentality. National key informant and Otago regional rural nurse interviews generated the data, which were analyzed using thematic analysis. Stemming from the analyses, an analytical diagrammatic matrix was developed demonstrating rural nursing as a ‘place-based practice’ governed both from within and beyond location, presenting how the nurse aligns the self in the rural community as a meaningful provider of health care. Promoting this matrix may encourage a focal discussion point within the international spectrum of nursing, and likewise between rural and non-rural nurses, which it is hoped will generate further debate in relation to the different nuances aligned with rural nursing practice. Further, insights from this paper may capture key aspects and issues related to identity formation in respect of rural nurses from the UK, New Zealand, Canada, USA, and Australia.

Keywords: matrix, place, nursing, rural

Procedia PDF Downloads 140
469 Contribution of Word Decoding and Reading Fluency to Reading Comprehension in Young Typical Readers of Kannada Language

Authors: Vangmayee V. Subban, Suzan Deelan. Pinto, Somashekara Haralakatta Shivananjappa, Shwetha Prabhu, Jayashree S. Bhat

Abstract:

Introduction and Need: During the early years of schooling, instruction in schools mainly focuses on children’s word decoding abilities. However, skilled readers must master all the components of reading, such as word decoding, reading fluency, and comprehension. Nevertheless, the relationship between each component during the process of learning to read is less clear. Studies conducted in alphabetic languages offer mixed opinions on the relative contribution of word decoding and reading fluency to reading comprehension, and the scenario in alphasyllabary languages remains unexplored. Aim and Objectives: The aim of the study was to explore the role of word decoding and reading fluency in the reading comprehension abilities of children learning to read Kannada, in the age range of 5.6 to 8.6 years. Method: In this cross-sectional study, a total of 60 typically developing children, 20 each from Grade I, Grade II, and Grade III, maintaining an equal gender ratio and aged 5.6 to 6.6 years, 6.7 to 7.6 years, and 7.7 to 8.6 years, respectively, were selected from Kannada-medium schools. The reading fluency and reading comprehension abilities of the children were assessed using grade-level passages selected from the Kannada textbooks of the children's core curriculum. Each passage included five questions to assess reading comprehension. Pseudoword decoding skills were assessed using 40 pseudowords of varying syllable length and Akshara composition. Pseudowords were formed by interchanging the syllables within a meaningful word while maintaining the phonotactic constraints of the Kannada language. The assessment material was subjected to content validation and reliability measures before collecting the data on the study samples. The data were collected individually; reading fluency was assessed as words correctly read per minute, and pseudoword decoding was scored for reading accuracy. Results: The descriptive statistics indicated that the mean pseudoword reading, reading comprehension, and words accurately read per minute increased with grade. The performance of Grade III children was found to be the highest, Grade I the lowest, and Grade II intermediate between Grade III and Grade I. The trend indicated that reading skills gradually improve with grade. Pearson’s correlation coefficient showed moderate and highly significant (p = 0.00) positive correlations between the variables, indicating the interdependency of all three components required for reading. The hierarchical regression analysis revealed that pseudoword decoding explained 37% of the variance in reading comprehension and was highly significant. On subsequent entry of the reading fluency measure, there was no significant change in R-square, the change being only 3%. Therefore, pseudoword decoding emerged as the single most significant predictor of reading comprehension during the early grades of reading acquisition. Conclusion: The present study concludes that pseudoword decoding skills contribute more significantly to reading comprehension than reading fluency during the initial years of schooling in children learning to read the Kannada language.
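
The hierarchical regression step described above can be sketched as follows, with simulated scores standing in for the child-level data (which are not public): pseudoword decoding is entered first, reading fluency second, and the change in R-squared is inspected.

```python
# Sketch of a two-step hierarchical regression, mirroring the analysis described above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60
decoding = rng.normal(size=n)
fluency = 0.6 * decoding + rng.normal(scale=0.8, size=n)           # correlated predictors
comprehension = 0.6 * decoding + 0.1 * fluency + rng.normal(size=n)

step1 = sm.OLS(comprehension, sm.add_constant(decoding)).fit()
step2 = sm.OLS(comprehension,
               sm.add_constant(np.column_stack([decoding, fluency]))).fit()

print(f"R2 with decoding only:       {step1.rsquared:.3f}")
print(f"R2 after adding fluency:     {step2.rsquared:.3f}")
print(f"Change in R2 due to fluency: {step2.rsquared - step1.rsquared:.3f}")
```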

Keywords: alphasyllabary, pseudo-word decoding, reading comprehension, reading fluency

Procedia PDF Downloads 262
468 Mechanical, Thermal and Biodegradable Properties of Bioplast-Spruce Green Wood Polymer Composites

Authors: A. Atli, K. Candelier, J. Alteyrac

Abstract:

Environmental and sustainability concerns push industries to manufacture alternative materials with less environmental impact. Wood Plastic Composites (WPCs), produced by blending biopolymers and natural fillers, not only permit tailoring the desired material properties but also offer a solution that meets environmental and sustainability requirements. This work presents the elaboration and characterization of fully green WPCs prepared by blending a biopolymer, BIOPLAST® GS 2189, with spruce sawdust used as a filler in different amounts. Since both components are bio-based, the resulting material is entirely environmentally friendly. The mechanical, thermal, and structural properties of these WPCs were characterized by different analytical methods, such as tensile, flexural, and impact tests, Thermogravimetric Analysis (TGA), Differential Scanning Calorimetry (DSC), and X-ray Diffraction (XRD). Their water absorption properties and resistance to termite and fungal attacks were determined in relation to the wood filler content. The tensile and flexural moduli of the WPCs increased with increasing amount of wood filler in the biopolymer, but the WPCs became more brittle compared to the neat polymer. Incorporation of spruce sawdust modified the thermal properties of the polymer: the degradation, cold crystallization, and melting temperatures shifted to higher temperatures when spruce sawdust was added to the polymer. The termite, fungal, and water absorption resistance of the WPCs decreased with increasing wood content, but the composites remained in durability class 1 (durable) concerning fungal resistance and were rated 1 (attempted attack) in the visual rating regarding termite resistance, except that the WPC with the highest wood content (30 wt%) was rated 2 (slight attack), indicating long-term durability. All the results showed the possibility of elaborating easily injectable composite materials with adjustable properties by incorporating BIOPLAST® GS 2189 and spruce sawdust. Therefore, lightweight WPCs allow both the recycling of wood industry by-products and the production of a fully ecological material.

Keywords: biodegradability, color measurements, durability, mechanical properties, melt flow index, MFI, structural properties, thermal properties, wood-plastic composites, WPCs

Procedia PDF Downloads 137
467 Rapid Discrimination of Porcine and Tilapia Fish Gelatin by Fourier Transform Infrared-Attenuated Total Reflection Combined with Two-Dimensional Infrared Correlation Analysis

Authors: Norhidayu Muhamad Zain

Abstract:

Gelatin, a purified protein derived mostly from porcine and bovine sources, is used widely in the food manufacturing, pharmaceutical, and cosmetic industries. However, the presence of any porcine-related products is strictly forbidden for Muslim and Jewish consumption. Therefore, analytical methods offering reliable results to differentiate the sources of gelatin are needed. The aim of this study was to differentiate the sources of gelatin (porcine and tilapia fish) using Fourier transform infrared-attenuated total reflection (FTIR-ATR) combined with two-dimensional infrared (2DIR) correlation analysis. Porcine gelatin (PG) and tilapia fish gelatin (FG) samples were diluted in distilled water at concentrations ranging from 4-20% (w/v). The samples were then analysed using FTIR-ATR and 2DIR correlation software. The results showed a significant difference in the pattern map of the synchronous spectra in the region of 1000 cm⁻¹ to 1100 cm⁻¹ between PG and FG samples. The auto peak at 1080 cm⁻¹, attributed to the C-O functional group, was observed at high intensity in PG samples compared to FG samples. Meanwhile, two auto peaks (1080 cm⁻¹ and 1030 cm⁻¹) at lower intensity were identified in FG samples. In addition, using 2D correlation analysis, the original broad water OH bands in the 1D IR spectra could be effectively differentiated into six auto peaks located at 3630, 3340, 3230, 3065, 2950 and 2885 cm⁻¹ for PG samples and five auto peaks at 3630, 3330, 3230, 3060 and 2940 cm⁻¹ for FG samples. Based on the rule proposed by Noda, the sequence of the spectral changes in PG samples is as follows: NH₃⁺ amino acid > CH₂ and CH₃ aliphatic > OH stretch > carboxylic acid OH stretch > NH in secondary amide > NH in primary amide. In contrast, the sequence was in the opposite direction for FG samples, and thus the two samples provide different 2D correlation spectra in the range from 2800 cm⁻¹ to 3700 cm⁻¹. This method may provide a rapid determination of gelatin source for application in food, pharmaceutical, and cosmetic products.
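
For readers unfamiliar with generalized 2D correlation analysis, a minimal numpy sketch of Noda's synchronous and asynchronous spectra is given below; it is illustrative only, and the actual study used dedicated 2DIR correlation software. The input would be the set of FTIR-ATR spectra recorded at the different gelatin concentrations.

```python
# Minimal numpy sketch of generalized 2D correlation analysis (Noda's formulation).
import numpy as np

def two_d_correlation(spectra):
    """spectra: (m, n) array, m perturbation-dependent spectra of n wavenumber points."""
    m = spectra.shape[0]
    dyn = spectra - spectra.mean(axis=0)           # dynamic (mean-centered) spectra
    sync = dyn.T @ dyn / (m - 1)                   # synchronous spectrum

    # Hilbert-Noda transformation matrix for the asynchronous spectrum
    j, k = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    with np.errstate(divide="ignore"):
        noda = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j)))
    async_ = dyn.T @ (noda @ dyn) / (m - 1)        # asynchronous spectrum
    return sync, async_

# Toy demonstration with 5 spectra of 200 wavenumber points
demo = np.random.default_rng(2).random((5, 200))
sync, async_ = two_d_correlation(demo)
print(sync.shape, async_.shape)  # (200, 200) twice; auto peaks lie on the sync diagonal
```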

Keywords: 2 dimensional infrared (2DIR) correlation analysis, Fourier transform infrared- attenuated total reflection (FTIR-ATR), porcine gelatin, tilapia fish gelatin

Procedia PDF Downloads 250
466 SUSTAINEXT: Validating a Zero-Waste, Dynamic, Multi-Valorization-Route Biorefinery for Plant Extracts

Authors: Adriana Diaz Triana, Wolfgang Wimmer, Sebastian Glaser, Rainer Pamminger

Abstract:

SUSTAINEXT is a pioneering initiative in Extremadura, Spain, under the EU bio-based industries framework. SUSTAINEXT will scale up and validate an industrial facility to produce botanical extracts, based on three key pillars. First, the whole valorization of bio-based feedstocks with a zero-waste and zero-emissions ambition. SUSTAINEXT will be deployed with six feedstocks. Three medicinal and aromatic plants (rosemary, chamomile, and lemon verbena) will be locally sourced from disused tobacco fields with installed agri-voltaics, and three underexploited agro-industrial side streams (olive, artichoke-cardoon, and pomegranate) will be further valorized. Second, a dynamic, analytical biorefinery (DYANA) will isolate polyphenols and triterpenes from the feedstocks in a disruptive and circular way. SUSTAINEXT explores 12 valorization routes (VRs) to extract and purify 46 functional ingredients, of which 13 are new in the market and 12 are newly produced in Europe. Third, the integrated and versatile value chain engages all actors, from feedstock suppliers to extract users, in the food, animal feed, nutraceutical, cosmetic, performance chemical, soil enhancer, and fertilizer industries. This paper addresses SUSTAINEXT activities towards zero impacts and full regulatory compliance. A comprehensive Life Cycle Thinking approach is proposed, with four complementary assessments running iteratively along the project duration (4.5 years). These are the Life Cycle Cost (LCCA), Life Cycle (LCA), Social Life Cycle (S-LCA) and Circularity (CA) assessments. The LCA will help evaluate the feedstock suitability parameters and intrinsic characteristics that quantify the feedstock's grade for a determined use, and the feedstock's suitability index for a specific VR. The LCA will also study the emissions, land use change, energy generation and consumption, and other environmental aspects and impacts of the VRs, to identify the most resource-efficient and least impactful distribution of products from the circular biorefinery model used in SUSTAINEXT. Challenges in completing the LCA include the definition of the system boundaries, carrying out a robust inventory, and the proper allocation of impacts to the different VRs.

Keywords: biorefinery, botanical extracts, life cycle assessment, valorization routes

Procedia PDF Downloads 22
465 Development of a Novel Ankle-Foot Orthotic Using a User Centered Approach for Improved Satisfaction

Authors: Ahlad Neti, Elisa Arch, Martha Hall

Abstract:

Studies have shown that individuals who use Ankle-Foot Orthoses (AFOs) have a high level of dissatisfaction with their current AFOs. Studies point to a focus on technical design, with little attention given to the user perspective, as a source of AFO designs that leave users dissatisfied. To design a new AFO that satisfies users and thereby improves their quality of life, the reasons for their dissatisfaction and their wants and needs for an improved AFO design must be identified. There has been little research into the user perspective on AFO use and desired improvements, so the relationship between AFO design and satisfaction in daily use must be assessed to develop appropriate metrics and constraints prior to designing a novel AFO. To assess the user perspective on AFO design, structured interviews were conducted with 7 individuals (average age of 64.29±8.81 years) who use AFOs. All interviews were transcribed and coded to identify common themes using the Grounded Theory Method in NVivo 12. Qualitative analysis of these results identified sources of user dissatisfaction, such as heaviness, bulk, and uncomfortable material, as well as overall needs and wants for an AFO. Beyond the user perspective, certain objective factors must be considered in the construction of metrics and constraints to ensure that the AFO fulfills its medical purpose. These more objective metrics are rooted in common medical device market and technical standards. Given the large body of research concerning these standards, these objective metrics and constraints were derived through a literature review. Through these two methods, a comprehensive list of metrics and constraints accounting for both the user perspective on AFO design and the AFO’s medical purpose was compiled. These metrics and constraints will establish the framework for designing a new AFO that carries out its medical purpose while also improving the user experience. The metrics can be categorized into several overarching areas for AFO improvement. Categories of user-perspective metrics include comfort, discreetness, aesthetics, ease of use, and compatibility with clothing. Categories of medical-purpose metrics include biomechanical functionality, durability, and affordability. These metrics were used to guide an iterative prototyping process. Six concepts were ideated and compared using system-level analysis. From these six concepts, two concepts, the piano wire model and the segmented model, were selected to move forward into prototyping. Evaluation of non-functional prototypes of the piano wire and segmented models determined that the piano wire model better fulfilled the metrics by offering increased stability, longer durability, fewer points of failure, and a core component strong enough to allow a sock to cover the AFO while maintaining the overall structure. As such, the piano wire AFO has moved forward into the functional prototyping phase, and healthy-subject testing is being designed, and participants recruited, to conduct design validation and verification.

Keywords: ankle-foot orthotic, assistive technology, human centered design, medical devices

Procedia PDF Downloads 156
464 Evaluation of Coupled CFD-FEA Simulation for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham

Abstract:

Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behavior and its structural effects on its surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing Polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies on a replication of the physical experimental standards test LPS 1181-1, carried out by Tata Steel U.K. The two coupling methodologies considered within this research are one-way and two-way coupling. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis consists of continuously importing the updated changes in thermal data, due to the fire's behavior, into the FEA solver throughout the simulation. Likewise, the mechanical changes are also passed back to the CFD solver to include geometric changes within the solution. For the CFD calculations, a solver called Fire Dynamics Simulator (FDS) has been chosen due to its numerical scheme being adapted to focus solely on fire problems. Validation of FDS applicability has been achieved in past benchmark cases. In addition, an FEA solver called ABAQUS has been chosen to model the structural response to the fire due to its crushable foam plasticity model, which can accurately model the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers together, using several Python modules to complete the process, including failure checks. The coupling methodologies and the experimental data acquired from Tata Steel U.K. are compared using several variables. The comparison data include gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.
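
The coupling loop itself can be caricatured in a few lines. The sketch below is schematic and self-contained: every function is a stand-in, not the FDS, ABAQUS, or FDS-2-ABAQUS interface, but the loop structure (CFD thermal data passed to the FEA step, geometric/structural feedback passed back) mirrors the two-way methodology described above.

```python
# Schematic, runnable caricature of a two-way CFD-FEA coupling loop. All functions are
# invented stand-ins for illustration; they are not the actual solver interfaces.
def cfd_gas_temperature(t):
    """Placeholder fire curve: gas temperature (deg C) at time t (s)."""
    return 20.0 + 800.0 * (1.0 - 2.718 ** (-t / 300.0))

def fea_deflection(surface_temp):
    """Placeholder structural response: panel deflection (mm) for a surface temperature."""
    return 0.05 * max(surface_temp - 20.0, 0.0)

def two_way_coupled_run(t_end=1800.0, dt=60.0):
    t, deflection, surface_t = 0.0, 0.0, 20.0
    while t < t_end:
        gas_t = cfd_gas_temperature(t)              # advance the CFD (fire) side
        # Two-way step: the structural state feeds back into the thermal exposure
        # (caricatured here as a small correction to the exposed surface temperature).
        surface_t = gas_t * (1.0 + 0.001 * deflection)
        deflection = fea_deflection(surface_t)      # advance the FEA (structural) side
        t += dt
    return surface_t, deflection

print(two_way_coupled_run())
```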

Keywords: fire engineering, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 90
463 Exo-III Assisted Amplification Strategy through Target Recycling for Hg²⁺ Detection in Water: A GNP-Based Label-Free Colorimetry Employing a T-Rich Hairpin-Loop Metallobase

Authors: Abdul Ghaffar Memon, Xiao Hong Zhou, Yunpeng Xing, Ruoyu Wang, Miao He

Abstract:

Due to the deleterious environmental and health effects of Hg²⁺ ions, various online detection methods, apart from traditional analytical tools, have been developed by researchers. Biosensors, especially labeled, label-free, colorimetric, and optical sensors, have advanced towards sensitive detection. However, a gap remains in ultrasensitive quantification, as noise interferes significantly, especially in AuNP-based label-free colorimetry. This study reports an amplification strategy using the Exo-III enzyme for target recycling of Hg²⁺ ions in a T-rich hairpin-loop metallobase label-free colorimetric nanosensor with improved sensitivity, using unmodified gold nanoparticles (uGNPs) as an indicator. The two T-rich metallobase hairpin-loop structures tested in the study were 5’- CTT TCA TAC ATA GAA AAT GTA TGT TTG -3 (HgS1) and 5’- GGC TTT GAG CGC TAA GAA A TA GCG CTC TTT G -3’ (HgS2). The thermodynamic properties of HgS1 and HgS2 were calculated using online tools (http://biophysics.idtdna.com/cgi-bin/meltCalculator.cgi). Lab-scale synthesized uGNPs were utilized in the analysis. The DNA sequences have T-rich bases on both tail ends, which in the presence of Hg²⁺ form T-Hg²⁺-T mismatches, promoting the formation of dsDNA. The subsequent Exo-III incubation enables the enzyme to cleave mononucleotides stepwise from the 3’ end until the structure becomes single-stranded. These ssDNA fragments then adsorb onto the surface of the AuNPs and protect them from salt-induced aggregation. The visible change in color between blue (the aggregated state in the absence of Hg²⁺) and pink (the dispersed state in the presence of Hg²⁺ and adsorption of ssDNA fragments) can be observed and analyzed through UV spectrometry. An ultrasensitive quantitative nanosensor employing Exo-III assisted target recycling of mercury ions through label-free colorimetry, with nanomolar detection using uGNPs, has been achieved and is under further optimization to reach the picomolar range by avoiding the influence of the environmental matrix. The proposed strategy will contribute towards uGNP-based ultrasensitive, rapid, on-site, label-free colorimetric detection.

Keywords: colorimetric, Exo-III, gold nanoparticles, Hg²⁺ detection, label-free, signal amplification

Procedia PDF Downloads 311
462 Assessing the Prevalence of Taste Loss Among Adults Who Have Contracted SARS-CoV-2

Authors: Alketa Qafmolla, Mimoza Canga, Edit Xhajanka, Vergjini Mulo, Ramazan Isufi, Vito Antonio Malagnino

Abstract:

COVID-19 is threatening the lives of people all over the world. A number of health problems, including oral health problems, have been linked to SARS-CoV-2 infection. Loss of taste is one of the initial symptoms presented by patients who have COVID-19. Purpose: The aim of the current study is to determine the prevalence of taste loss in young adults aged 18 to 26 who have contracted SARS-CoV-2. Materials and methods: This study is analytical cross-sectional research conducted in Albania from March 2023 to September 2023. Our research included a total of 157 students, of which 100 (63.7%) were female and 57 (36.3%) were male. They were divided into three age groups: 18-20, 21-23, and 24-26 years old. Students willingly agreed to participate in the current study and were assured that their participation would be kept anonymous. The study recorded no dropouts and was conducted in accordance with the Declaration of Helsinki. Statistical analysis was performed using IBM SPSS Statistics Version 23.0 for Microsoft Windows (SPSS Inc., Chicago, IL, USA). The evaluation of data was done using analysis of variance (ANOVA), with a significance level set at P ≤ 0.05. Results: 113 (72%) of the participants reported loss of taste, while 44 (28%) did not experience any loss of taste. According to the study's data analysis, taste problems typically manifested over three days, with the lowest frequency occurring on the second day and the highest frequency occurring on the fifteenth. 68.7% of participants reported experiencing taste recovery after three weeks. The present study's findings demonstrated a substantial correlation between the duration of the individuals' COVID-19 infection and taste loss (P < 0.0003). Based on the statistical analysis of the data, this study shows that there is no association between gender and loss of taste (P = 0.218). The participants reported having undergone the following treatments: prednisolone sodium phosphate (15 mg/5 mL daily), vitamin C (1000 mg), azithromycin (500 mg daily), oral vitamin D3 supplementation of 5000 IU daily, vitamin B12 (2.4 mcg daily), zinc 20 mg daily, Augmentin tablets (625 mg), and magnesium sulfate (4 g/100 mL). Conclusion: Within the limitations of this study conducted in Albania, it can be concluded that loss of taste was present in 72% of participants infected with COVID-19 and that recovery was evident after three weeks.
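
As a small worked example, the reported prevalence can be recomputed together with a 95% Wilson confidence interval (the interval is added here for illustration; it is not reported in the abstract):

```python
# Recompute the reported taste-loss prevalence and add an illustrative 95% Wilson CI.
from statsmodels.stats.proportion import proportion_confint

cases, n = 113, 157
prevalence = cases / n
low, high = proportion_confint(cases, n, alpha=0.05, method="wilson")
print(f"Prevalence of taste loss: {prevalence:.1%} (95% CI {low:.1%}-{high:.1%})")
```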

Keywords: adult, Albania, COVID-19, cross-sectional study, loss of taste

Procedia PDF Downloads 26
461 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo

Abstract:

Conventional methods for soil nutrient mapping are based on laboratory tests of samples that are obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons that researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques to spatially interpolate point values at an unobserved location from observations of values at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Network (ANN) scheme was used to predict macronutrient values at un-sampled points. ANN has become a popular tool for prediction as it eliminates certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus and potassium values in the soil of the study area. A limited number of samples were used in the training, validation and testing phases of the ANN (pattern recognition structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project in Selangor, Malaysia, were used. Soil maps were produced by the kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural network predicted values). For each macronutrient element, three types of maps were generated with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element, a base map using 236 actual samples and test maps using 118, 59 and 30 actual samples, respectively, were produced by the kriging method. A set of parameters was defined to measure the similarity of the maps that were generated with the proposed method, termed the sample reduction method. The results show that the maps that were generated through the sample reduction method were more accurate than the corresponding test maps produced from a smaller number of real samples. For example, nitrogen maps that were produced from 118, 59 and 30 real samples have 78%, 62%, and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased the similarity to 87%, 77%, and 71%, respectively. Hence, this method can reduce the number of real samples and substitute ANN predictive samples to achieve the specified level of accuracy.
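
The "virtual sample" idea can be sketched briefly: train a feed-forward network on a limited set of surveyed points, then predict the macronutrient value at unsampled locations so those predictions can be added to the kriging input. The sketch below uses synthetic data and scikit-learn's MLPRegressor, not the authors' exact network:

```python
# Sketch of generating "virtual" nitrogen values at unsampled points with a
# feed-forward network, on synthetic data (illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
xy = rng.uniform(0, 1000, size=(236, 2))                     # sample coordinates (m)
nitrogen = 0.02 * xy[:, 0] + 0.01 * xy[:, 1] + rng.normal(scale=2.0, size=236)

# Keep only 118 "actual" samples for training; the rest play the role of unsampled points.
xy_train, xy_unsampled, n_train, n_true = train_test_split(
    xy, nitrogen, train_size=118, random_state=0)

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=5000, random_state=0))
net.fit(xy_train, n_train)

virtual = net.predict(xy_unsampled)                          # "virtual" nitrogen values
print("R^2 of virtual values against held-out truth:", net.score(xy_unsampled, n_true))
# The 118 actual + 118 virtual values would then be passed to a kriging interpolator.
```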

Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping

Procedia PDF Downloads 70
460 Renewable Energy and Hydrogen On-Site Generation for Drip Irrigation and Agricultural Machinery

Authors: Javier Carroquino, Nieves García-Casarejos, Pilar Gargallo, F. Javier García-Ramos

Abstract:

The energy used in agriculture is a source of global greenhouse gas emissions. The two main types of this energy are electricity for pumping and diesel for agricultural machinery. In order to reduce these emissions, the European project LIFE REWIND addresses the supply of this demand from renewable sources. First of all, comprehensive data on energy demand and available renewable resources have been obtained in several case studies. Secondly, a set of simulations and optimizations has been performed in search of the best configuration and sizing, from both an economic and an emission-reduction point of view. For this purpose, software based on genetic algorithms was used. Thirdly, a prototype has been designed and installed, which is being used for validation in a real case. Finally, throughout a year of operation, various technical and economic parameters are being measured for further analysis. The prototype is not connected to the utility grid, avoiding the cost and environmental impact of a grid extension. The system includes three kinds of photovoltaic fields. One is located on a fixed structure on the terrain. Another is floating on an irrigation reservoir. The last one is mounted on a two-axis solar tracker. Each has its own solar inverter, and the total nominal power is 44 kW. A lead-acid battery with 120 kWh of capacity provides the energy storage. Three isolated inverters support a three-phase, 400 V, 50 Hz micro-grid with the same characteristics as the utility grid. An advanced control subsystem has been constructed using free hardware and software. The electricity produced feeds a set of seven pumps used for purification, elevation, and pressurization of water in a drip irrigation system located in a vineyard. Since the irrigation season does not span the whole year, and the generator is slightly oversized, there is an amount of surplus energy. With this surplus, an electrolyser produces hydrogen on site by electrolysis of water. An off-road vehicle with a fuel cell runs on that hydrogen and carries people around the vineyard. The only emission of the process is high-purity water. On the one hand, the results show the technical and economic feasibility of stand-alone renewable energy systems to feed seasonal pumping. In this way, the economic costs, environmental impacts, and landscape impacts of grid extensions are avoided, as are the use of diesel gensets and their associated emissions. On the other hand, it is shown that it is possible to replace diesel in agricultural machinery, substituting it with electricity or hydrogen of 100% renewable origin produced on the farm itself, without any external energy input. In addition, positive effects on the rural economy and employment are expected, which will be quantified through interviews.

Keywords: drip irrigation, greenhouse gases, hydrogen, renewable energy, vineyard

Procedia PDF Downloads 343
459 Temporal Estimation of Hydrodynamic Parameter Variability in Constructed Wetlands

Authors: Mohammad Moezzibadi, Isabelle Charpentier, Adrien Wanko, Robert Mosé

Abstract:

The calibration of hydrodynamic parameters for subsurface constructed wetlands (CWs) is a sensitive process, since highly non-linear equations are involved in unsaturated flow modeling. CW systems are engineered systems designed to favour natural treatment processes involving wetland vegetation, soil, and their microbial flora. Their significant efficiency at reducing the ecological impact of urban runoff has recently been proven in the field. Numerical flow modeling in a vertical, variably saturated CW is here carried out by implementing the Richards model by means of a mixed hybrid finite element method (MHFEM), particularly well adapted to the simulation of heterogeneous media, and the van Genuchten-Mualem parametrization. For validation purposes, MHFEM results were compared to those of HYDRUS (software based on a finite element discretization). As van Genuchten-Mualem soil hydrodynamic parameters depend on water content, their estimation has been the subject of considerable experimental and numerical study. In particular, the sensitivity analysis performed with respect to the van Genuchten-Mualem parameters reveals a predominant influence of the shape parameters α and n and of the saturated conductivity of the filter on the piezometric heads during saturation and desaturation. Modeling issues arise when the soil reaches oven-dry conditions. Particular attention should also be paid to boundary condition modeling (surface ponding or evaporation) to be able to tackle different sequences of rainfall-runoff events. For proper parameter identification, large field datasets would be needed. As these are usually not available, notably due to the randomness of storm events, we propose a simple, robust, and low-cost numerical method for the inverse modeling of the soil hydrodynamic properties. Among the available methods, the variational data assimilation technique introduced by Le Dimet and Talagrand is applied. To that end, the variational data assimilation technique is implemented by applying automatic differentiation (AD) to augment computer codes with derivative computations. Note that very little effort is needed to obtain the differentiated code using the on-line Tapenade AD engine. Field data were collected for a three-layered CW located in Strasbourg (Alsace, France) at the water edge of the urban stream Ostwaldergraben, during several months. Identification experiments are conducted by comparing measured and computed piezometric heads by means of a least-squares objective function. The temporal variability of the hydrodynamic parameters is then assessed and analyzed.
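
For reference, the van Genuchten-Mualem relations underlying the parameters discussed above (the shape parameters α and n, and the saturated conductivity Ks) take the standard forms:

```latex
% van Genuchten retention curve and Mualem conductivity model (standard forms)
\begin{align}
  \theta(h) &= \theta_r + \frac{\theta_s - \theta_r}{\left[\,1 + |\alpha h|^{\,n}\right]^{m}},
  \qquad m = 1 - \tfrac{1}{n}, \quad h < 0, \\[4pt]
  S_e &= \frac{\theta - \theta_r}{\theta_s - \theta_r}, \\[4pt]
  K(S_e) &= K_s\, S_e^{\,l}\left[\,1 - \left(1 - S_e^{1/m}\right)^{m}\right]^{2}
\end{align}
```

Here θr and θs are the residual and saturated water contents, Se the effective saturation, and l the pore-connectivity parameter (commonly taken as 0.5).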

Keywords: automatic differentiation, constructed wetland, inverse method, mixed hybrid FEM, sensitivity analysis

Procedia PDF Downloads 164
458 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameter Selection

Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa

Abstract:

Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate, dense point clouds. Classification of an airborne laser scanning (ALS) point cloud is a very important task that still remains a real challenge for many scientists. The support vector machine (SVM) is one of the most used statistical learning algorithms based on kernels. SVM is a non-parametric method, and it is recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs a robust non-linear classification of samples. In practice, the data are rarely linearly separable. SVMs are able to map the data into a higher-dimensional space in which they become linearly separable, while performing all the computations in the original space. This is one of the main reasons that SVMs are well suited for high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain their recent adoption. In this poster, SVM classification of ALS LiDAR data is proposed. Firstly, connected component analysis is applied for clustering the point cloud. Secondly, the resulting clusters are incorporated in the SVM classifier. A radial basis function (RBF) kernel is used due to the small number of parameters (C and γ) that need to be chosen, which decreases the computation time. In order to optimize the classification rates, parameter selection is explored. It consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation. The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data used are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets were selected randomly several times. The obtained results demonstrated that parameter selection can orient the search towards a restricted interval of (C, γ) that can be further explored, but does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision tree. The comparison showed the superiority of the SVM classifier using parameter selection for LiDAR data compared to the other classifiers.
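
The (C, γ) selection step maps directly onto a standard grid search with 5-fold cross-validation; a minimal scikit-learn sketch is shown below, with random numbers standing in for the per-cluster features derived from the ALS point cloud:

```python
# Sketch of (C, gamma) grid search with 5-fold cross-validation for an RBF-kernel SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 6))                 # placeholder cluster features
y = rng.integers(0, 4, size=400)              # 4 classes: ground + 3 roof superstructures

param_grid = {"C": [0.1, 1, 10, 100, 1000], "gamma": [1e-3, 1e-2, 1e-1, 1, 10]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="accuracy")
search.fit(X, y)

print("Best (C, gamma):", search.best_params_)
print("Best 5-fold CV accuracy:", round(search.best_score_, 3))
```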

Keywords: classification, airborne LiDAR, parameters selection, support vector machine

Procedia PDF Downloads 147
457 Application of Unstructured Mesh Modeling in Evolving SGE of an Airport at the Confluence of Multiple Rivers in a Macro Tidal Region

Authors: A. A. Purohit, M. M. Vaidya, M. D. Kudale

Abstract:

Among the various developing countries in the world, like China, Malaysia, and Korea, India is also developing its infrastructure in the form of road, rail, airport, and waterborne facilities at an exponential rate. Mumbai, the financial epicenter of India, is overcrowded, and to relieve the pressure of congestion, the Navi Mumbai suburb is being developed on the east bank of Thane creek near Mumbai. Due to limited space at the existing Mumbai airports (domestic and international) to cater for the future demand of airborne traffic, the government proposes to build a new international airport near Panvel in Navi Mumbai. Considering the precedent of the extreme rainfall of 26th July 2005 and the fact that the nearby townships, where the new airport is proposed, lie in a low-lying area, it is essential to study this complex confluence area from a hydrodynamic standpoint under both tidal and extreme events (predicted discharge hydrographs), to avoid inundation of the surroundings due to the proposed airport reclamation (1160 hectares) and to determine the safe grade elevation (SGE). Model studies were conducted using an unstructured mesh to simulate the Panvel estuarine area (93 km²); the model was calibrated and validated against hydraulic field measurements, and the maximum water levels around the airport were determined for various extreme hydrodynamic events, namely the simultaneous occurrence of the highest tide from the Arabian Sea and peak flood discharges (Probable Maximum Precipitation and 26th July 2005) from the five rivers, the Gadhi, Kalundri, Taloja, Kasadi and Ulwe, meeting at the proposed airport area. The studies revealed that: (a) The Ulwe River flowing beneath the proposed airport needs to be diverted; a 120 m wide Ulwe diversion channel, having a wider base width of 200 m at the SH-54 Bridge on the Ulwe River, along with the removal of the existing bund in Moha Creek, is inevitable to keep the SGE of the airport to a minimum. (b) A clear waterway of 80 m at the SH-54 Bridge (Ulwe River) and 120 m at the Amra Marg Bridge near Moha Creek is also essential for the Ulwe diversion. (c) The river bank protection works on the right bank of the Gadhi River between the NH-4B and SH-54 bridges, as well as upstream of the Ulwe River diversion channel, are essential to avoid inundation of low-lying areas. The maximum water levels predicted around the airport keep the SGE to a minimum of 11 m with respect to the chart datum of Ulwe Bundar, and thus the development is not only technologically and economically feasible but also sustainable. Unstructured mesh modeling is a promising tool to simulate complex extreme hydrodynamic events and provides a reliable solution to evolve the optimal SGE of an airport.

Keywords: airport, hydrodynamics, safe grade elevation, tides

Procedia PDF Downloads 261
456 Caffeic Acid in Cosmetic Formulations: An Innovative Assessment

Authors: Caroline M. Spagnol, Vera L. B. Isaac, Marcos A. Corrêa, Hérida R. N. Salgado

Abstract:

Phenolic compounds are abundant in the Brazilian plant kingdom, and they are part of a large and complex group of organic substances. Cinnamic acids are part of this group of organic compounds, and caffeic acid (CA) is one of their representatives. Antioxidants are compounds that act as free radical scavengers and, in other cases, as metal chelators, both in the initiation stage and in the propagation of the oxidative process. Tyrosinase, a polyphenol oxidase, is an enzyme that acts at various stages of melanin biosynthesis within the melanocytes and is considered a key molecule in this process. Some phenolic compounds exhibit inhibitory effects on melanogenesis by inhibiting tyrosinase enzymatic activity and have therefore been the subject of studies. However, few studies have reported the effectiveness of these products and their safety. Objectives: To assess the tyrosinase inhibitory activity of CA, its antioxidant activity, and its cytotoxic potential. The method used to evaluate the tyrosinase inhibitory activity assesses the reduction in the transformation of L-dopa into dopaquinone in reactions catalyzed by the enzyme. The antioxidant activity was evaluated using the DPPH radical inhibition methodology. The cytotoxicity evaluation was carried out using the MTT method (3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide), a colorimetric assay which determines the amount of insoluble violet crystals formed by the reduction of MTT in the mitochondria of living cells. Based on the results obtained during the study, CA has low activity as a depigmenting agent. However, it is a more potent antioxidant than ascorbic acid (AA), since a lower amount of CA is sufficient to inhibit 50% of the DPPH radical. The results are promising, since the CA concentration that promoted 50% toxicity in HepG2 cells (IC50 = 781.8 μg/mL) is approximately 330 to 400 times greater than the concentrations required to inhibit 50% of the DPPH (IC50 DPPH = 2.39 μg/mL) and ABTS (IC50 ABTS = 1.96 μg/mL) radical scavenging activity, respectively. The maximum concentration of caffeic acid tested (1140 mg/mL) did not reach 50% cell death in HaCaT cells. Thus, it was concluded that caffeic acid does not cause toxicity in HepG2 and HaCaT cells at the concentrations required to promote antioxidant activity in vitro, and it can be applied in topical products.
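
The "approximately 330 to 400 times" statement follows directly from the reported IC50 values; a two-line check:

```python
# Arithmetic behind the "approximately 330 to 400 times" statement, using the IC50
# values reported in the abstract.
ic50_hepg2 = 781.8   # ug/mL, 50% cytotoxicity in HepG2 cells
ic50_dpph = 2.39     # ug/mL, 50% DPPH radical scavenging
ic50_abts = 1.96     # ug/mL, 50% ABTS radical scavenging

print(f"Cytotoxic/antioxidant ratio vs DPPH: {ic50_hepg2 / ic50_dpph:.0f}")  # ~327
print(f"Cytotoxic/antioxidant ratio vs ABTS: {ic50_hepg2 / ic50_abts:.0f}")  # ~399
```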

Keywords: caffeic acid, antioxidant, cytotoxicity, cosmetic

Procedia PDF Downloads 379
455 Understanding Complexity at Pre-Construction Stage in Project Planning of Construction Projects

Authors: Mehran Barani Shikhrobat, Roger Flanagan

Abstract:

Construction planning and scheduling based on current tools and techniques is either deterministic in nature (Gantt chart, CPM) or applies only a very limited probability of completion (PERT) to each task. Yet every project embodies assumptions and influences and is expected to start with a complete set of clearly defined goals and constraints that remain constant throughout its duration. Construction planners continue to apply the traditional methods and tools of “hard” project management that were developed for “ideal projects,” neglecting the potential influence of complexity on the design and construction process. The aim of this research is to investigate the emergence and growth of complexity in project planning and to provide a model that considers the influence of complexity on the total project duration at the post-contract-award, pre-construction stage of a project. The literature review showed that complexity originates from different sources: environmental, technical, and workflow interactions. These can be divided into two categories of complexity factors: first, project tasks, and second, project organisation and management. Task-related complexity may originate from performance issues, lack of resources, or environmental changes affecting a specific task. Complexity factors that relate to organisation and management refer to workflow and the interdependence of different parts. The literature review also highlighted the ineffectiveness of traditional tools and techniques in planning for complexity. In this research, the fundamental causes of complexity in construction projects were investigated through a questionnaire administered to industry experts. The results were used to develop a model that considers the core complexity factors and their interactions, and system dynamics was used to investigate the influence of complexity on project planning within this model. Feedback from experts revealed 20 major complexity factors that impact project planning, divided into five categories known as core complexity factors. To compare the weight of each factor, the Analytical Hierarchy Process (AHP) was used. The comparison showed that externalities ranked as the biggest influence across the complexity factors. The research underlines that many internal and external factors impact project activities and the project overall, and it shows the importance of considering the influence of complexity on the project master plan undertaken at the post-contract-award, pre-construction phase of a project.
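For readers unfamiliar with AHP, the sketch below shows, under assumed data, how pairwise judgments over five factor categories are turned into priority weights and a consistency check. The category names and the comparison matrix are illustrative placeholders, not the survey results reported above.

```python
import numpy as np

# Minimal AHP sketch for weighting five core complexity-factor categories.
# The category names and pairwise judgments are illustrative, not the survey data.
categories = ["externalities", "task", "organisation", "workflow", "environment"]
A = np.array([
    [1,   3,   3,   5,   5],
    [1/3, 1,   2,   3,   3],
    [1/3, 1/2, 1,   2,   2],
    [1/5, 1/3, 1/2, 1,   1],
    [1/5, 1/3, 1/2, 1,   1],
], dtype=float)

# Priority weights from the normalised geometric mean of each row.
geometric_mean = A.prod(axis=1) ** (1 / A.shape[0])
weights = geometric_mean / geometric_mean.sum()

# Consistency ratio (CR < 0.10 is conventionally acceptable); Saaty's random index for n=5 is 1.12.
lambda_max = (A @ weights / weights).mean()
consistency_index = (lambda_max - A.shape[0]) / (A.shape[0] - 1)
consistency_ratio = consistency_index / 1.12

for category, weight in zip(categories, weights):
    print(f"{category:>13s}: {weight:.3f}")
print(f"consistency ratio = {consistency_ratio:.3f}")
```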

Keywords: project planning, project complexity measurement, planning uncertainty management, project risk management, strategic project scheduling

Procedia PDF Downloads 138
454 An Evidence-Based Laboratory Medicine (EBLM) Test to Help Doctors in the Assessment of the Pancreatic Endocrine Function

Authors: Sergio J. Calleja, Adria Roca, José D. Santotoribio

Abstract:

Pancreatic endocrine diseases include pathologies such as insulin resistance (IR), prediabetes, and type 2 diabetes mellitus (DM2). Some of them are highly prevalent in the U.S.: 40% of U.S. adults have IR, 38% have prediabetes, and 12% have DM2, as reported by the National Center for Biotechnology Information (NCBI). Building upon this imperative, the objective of the present study was to develop a non-invasive test for assessing the patient’s pancreatic endocrine function and to evaluate its accuracy in detecting pancreatic endocrine diseases such as IR, prediabetes, and DM2. This approach to a routine blood and urine test is based on serum and urine biomarkers. It combines several independent published algorithms, such as the Adult Treatment Panel III (ATP-III), the triglycerides and glucose (TyG) index, the homeostasis model assessment of insulin resistance (HOMA-IR), HOMA-2, and the quantitative insulin-sensitivity check index (QUICKI). Additionally, it incorporates essential measurements such as creatinine clearance, the estimated glomerular filtration rate (eGFR), the urine albumin-to-creatinine ratio (ACR), and urinalysis, which help to build a full picture of the patient’s pancreatic endocrine disease. To evaluate the estimated accuracy of this test, an iterative process was performed with a machine learning (ML) algorithm on a training set of 9,391 patients. The sensitivity achieved was 97.98% and the specificity was 99.13%. The area under the receiver operating characteristic (AUROC) curve, the positive predictive value (PPV), and the negative predictive value (NPV) were 92.48%, 99.12%, and 98.00%, respectively. The algorithm was validated with a randomized controlled trial (RCT) with a target sample size (n) of 314 patients. However, 50 patients were initially excluded from the study because they had ongoing clinically diagnosed pathologies, symptoms, or signs, so n dropped to 264 patients. Then, 110 patients were excluded because they did not show up at the clinical facility for any of the follow-up visits (a critical point to improve for the upcoming RCT, since the cost per patient is very high and almost a third of the patients already tested were lost), so the new n was 154 patients. After that, 2 patients were excluded because some of their laboratory parameters and/or clinical information were incorrect, leaving a final n of 152 patients. In this validation set, the results obtained were: 100.00% sensitivity, 100.00% specificity, 100.00% AUROC, 100.00% PPV, and 100.00% NPV. These results suggest that this approach to a routine blood and urine test holds promise for providing timely and accurate diagnoses of pancreatic endocrine diseases, particularly among individuals aged 40 and above. Given the current epidemiological state of these types of diseases, the findings underscore the significance of early detection. Furthermore, they advocate for further exploration, prompting the intention to conduct a clinical trial involving 26,000 participants (from March 2025 to December 2026).
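Three of the published indices combined by the test have simple closed-form definitions, sketched below in Python. The formulas are the standard published ones; the example fasting values and the cut-offs noted in the comments are illustrative and are not the study's proprietary combination rule.

```python
import math

# Standard published formulas for three of the indices the test combines.
# The fasting values below are hypothetical; the cut-offs in the comments are
# commonly cited thresholds, not the study's combination rule.
def homa_ir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    return glucose_mg_dl * insulin_uU_ml / 405.0

def quicki(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

def tyg_index(triglycerides_mg_dl: float, glucose_mg_dl: float) -> float:
    return math.log(triglycerides_mg_dl * glucose_mg_dl / 2.0)

glucose, insulin, triglycerides = 105.0, 14.0, 160.0        # hypothetical fasting values
print(f"HOMA-IR = {homa_ir(glucose, insulin):.2f}")          # > ~2.5 is often taken to suggest IR
print(f"QUICKI  = {quicki(glucose, insulin):.3f}")           # < ~0.33 is often taken to suggest IR
print(f"TyG     = {tyg_index(triglycerides, glucose):.2f}")
```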

Keywords: algorithm, diabetes, laboratory medicine, non-invasive

Procedia PDF Downloads 33
453 Data Model to Predict Customize Skin Care Product Using Biosensor

Authors: Ashi Gautam, Isha Shukla, Akhil Seghal

Abstract:

Biosensors are analytical devices that use a biological sensing element to detect and measure a specific chemical substance or biomolecule in a sample. These devices are widely used in various fields, including medical diagnostics, environmental monitoring, and food analysis, due to their high specificity, sensitivity, and selectivity. In this research paper, a machine learning model is proposed for predicting the suitability of skin care products based on biosensor readings. The proposed model takes in features extracted from biosensor readings, such as biomarker concentration, skin hydration level, inflammation presence, sensitivity, and free radicals, and outputs the most appropriate skin care product for an individual. This model is trained on a dataset of biosensor readings and corresponding skin care product information. The model's performance is evaluated using several metrics, including accuracy, precision, recall, and F1 score. The aim of this research is to develop a personalised skin care product recommendation system using biosensor data. By leveraging the power of machine learning, the proposed model can accurately predict the most suitable skin care product for an individual based on their biosensor readings. This is particularly useful in the skin care industry, where personalised recommendations can lead to better outcomes for consumers. The developed model is based on supervised learning, which means that it is trained on a labeled dataset of biosensor readings and corresponding skin care product information. The model uses these labeled data to learn patterns and relationships between the biosensor readings and skin care products. Once trained, the model can predict the most suitable skin care product for an individual based on their biosensor readings. The results of this study show that the proposed machine learning model can accurately predict the most appropriate skin care product for an individual based on their biosensor readings. The evaluation metrics used in this study demonstrate the effectiveness of the model in predicting skin care products. This model has significant potential for practical use in the skin care industry for personalised skin care product recommendations. The proposed machine learning model for predicting the suitability of skin care products based on biosensor readings is a promising development in the skin care industry. The model's ability to accurately predict the most appropriate skin care product for an individual based on their biosensor readings can lead to better outcomes for consumers. Further research can be done to improve the model's accuracy and effectiveness.
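A minimal sketch of the supervised-learning workflow described above is given below, assuming synthetic biosensor features and a scikit-learn random forest as a stand-in classifier; the abstract does not state which learning algorithm or dataset was actually used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Synthetic stand-in for labeled biosensor data: five features and a product label.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(50, 15, n),    # biomarker concentration
    rng.uniform(20, 80, n),   # skin hydration level (%)
    rng.integers(0, 2, n),    # inflammation present (0/1)
    rng.uniform(0, 10, n),    # sensitivity score
    rng.normal(30, 10, n),    # free-radical level
])
y = (X[:, 1] < 40).astype(int) + 2 * (X[:, 2] == 1)   # toy rule giving four product classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
predictions = model.predict(X_test)

print("accuracy :", accuracy_score(y_test, predictions))
print("precision:", precision_score(y_test, predictions, average="macro"))
print("recall   :", recall_score(y_test, predictions, average="macro"))
print("F1 score :", f1_score(y_test, predictions, average="macro"))
```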

Keywords: biosensors, data model, machine learning, skin care

Procedia PDF Downloads 97
452 The Multidisciplinary Treatment in Residence Care Clinic for Treatment of Feeding and Eating Disorders

Authors: Yuri Melis, Mattia Resteghini, Emanuela Apicella, Eugenia Dozio, Leonardo Mendolicchio

Abstract:

Aim: This retrospective study was designed to analyze psychometric, anthropometric, and body composition values in patients at the beginning and at the discharge of their hospitalization in a residential care clinic for eating and feeding disorders (EFDs). Method: The sample was composed of N=59 patients with a mean age of 33,50, divided into subgroups: Anorexia Nervosa (AN) (N=28), Bulimia Nervosa (BN) (N=13) and Binge Eating Disorder (BED) (N=14), recruited from a residential care clinic for eating and feeding disorders. Psychometric levels were measured with self-report questionnaires: the Eating Disorders Inventory-3 (EDI-3), the Body Uneasiness Test (BUT), and the Minnesota Multiphasic Personality Inventory (MMPI-2). Anthropometric and nutritional values were collected by body impedance assessment (B.I.A.) and body mass index (B.M.I.). Measurements were made at the beginning and at the end of hospitalization, with an average recovery time of about 8,6 months. Results: The analysis of all data showed statistical significance (p < 0,05; power N=0,950) in the variation from T0 (start of recovery) to T1 (end of recovery) in the clinical scales of the MMPI-2: AN group (Hypocondria T0 64,14 – T1 56,39) (Depression T0 72,93 – T1 59,50) (Hysteria T0 61,29 – T1 56,17) (Psychopathic deviation T0 64,00 – T1 60,82) (Paranoia T0 63,82 – T1 56,14) (Psychasthenia T0 63,82 – T1 57,86) (Schizophrenia T0 64,68 – T1 60,43) (Obsessive T0 60,36 – T1 55,68); BN group (Hypocondria T0 64,08 – T1 47,54) (Depression T0 67,46 – T1 52,46) (Hysteria T0 60,62 – T1 47,84) (Psychopathic deviation T0 65,69 – T1 58,92) (Paranoia T0 67,46 – T1 55,23) (Psychasthenia T0 60,77 – T1 53,77) (Schizophrenia T0 64,68 – T1 60,43) (Obsessive T0 62,92 – T1 54,08); BED group (Hypocondria T0 59,43 – T1 53,14) (Depression T0 66,71 – T1 54,57) (Hysteria T0 59,86 – T1 53,82) (Psychopathic deviation T0 67,39 – T1 59,03) (Paranoia T0 58,57 – T1 53,21) (Psychasthenia T0 61,43 – T1 53,00) (Schizophrenia T0 62,29 – T1 56,36) (Obsessive T0 58,57 – T1 48,64). The EDI-3 mean value is higher than the clinical cut-off at T0; at T1 there is a significant reduction of the overall mean value. The same result is present in the B.U.T. between T0 and T1. B.M.I. mean values: AN group (T0 14,83 – T1 18,41), BN group (T0 20 – T1 21,33), BED group (T0 42,32 – T1 34,97). Phase angle results: AN group (T0 4,78 – T1 5,64), BN group (T0 6 – T1 6,53), BED group (T0 6 – T1 6,72). Discussion and conclusion: Across the whole sample, seriously altered psychiatric and clinical conditions were evident at the beginning of recovery. The conclusion to be drawn from this analysis is that a multidisciplinary approach covering the entire care of the subject (pharmacological treatment, analytical psychotherapy, psychomotricity, nutritional rehabilitation, and rehabilitative and educational activities) allows the subjects in our sample to restore psychopathological and metabolic values to below the clinical cut-off.
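As an illustration of how such an admission-versus-discharge (T0 vs. T1) comparison can be tested, the sketch below applies a paired-samples t-test to synthetic per-patient scores; the abstract reports only group means and does not state which significance test was actually used.

```python
import numpy as np
from scipy import stats

# Synthetic admission (T0) vs. discharge (T1) scores for one psychometric scale,
# sized like the AN subgroup (N=28); the real study reports only group means.
rng = np.random.default_rng(1)
n_patients = 28
t0 = rng.normal(72.9, 8.0, n_patients)        # e.g. MMPI-2 Depression at admission
t1 = t0 - rng.normal(13.0, 5.0, n_patients)   # improvement by discharge

t_statistic, p_value = stats.ttest_rel(t0, t1)   # paired-samples t-test
print(f"mean T0 = {t0.mean():.1f}, mean T1 = {t1.mean():.1f}, p = {p_value:.4f}")
```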

Keywords: feeding and eating disorders, anorexia nervosa, care clinic treatment, multidisciplinary treatment

Procedia PDF Downloads 124
451 Construction of a Dynamic Migration Model of Extracellular Fluid in Brain for Future Integrated Control of Brain State

Authors: Tomohiko Utsuki, Kyoka Sato

Abstract:

In emergency medicine, it is recognized that brain resuscitation is very important for reducing mortality and neurological sequelae. In particular, control of brain temperature (BT), intracranial pressure (ICP), and cerebral blood flow (CBF) is essential for stabilizing the brain’s physiological state in the treatment of conditions such as brain injury, stroke, and encephalopathy. However, manual control of BT, ICP, and CBF frequently requires decisions and interventions by medical staff regarding medication and the settings of therapeutic apparatus. Integrating and automating this control is therefore very effective not only for improving the therapeutic effect but also for reducing staff burden and medical cost. To realize such integration and automation, a mathematical model of the brain’s physiological state is necessary as the controlled object in simulations, because performance testing of a prototype control system on patients is not ethically permissible. A model of cerebral blood circulation, the most basic part of the brain’s physiological state, has already been constructed. A migration model of extracellular fluid in the brain has also been constructed; however, the condition that the total volume of the intracranial cavity is almost constant, due to the rigidity of the cranial bone, was not considered in that model. Therefore, in this research, a dynamic migration model of extracellular fluid in the brain was constructed that takes into account the constancy of the intracranial cavity’s total volume. This model can be connected to the cerebral blood circulation model. The constructed model consists of fourteen compartments, twelve of which correspond to the areas perfused by the bilateral anterior, middle, and posterior cerebral arteries, while the others correspond to the cerebral ventricles and the subarachnoid space. The model enables calculation of the migration of tissue fluid from capillaries to gray and white matter, the flow of tissue fluid between compartments, the production and absorption of cerebrospinal fluid at the choroid plexus and arachnoid granulations, and the production of metabolic water. Further, the volume, colloid concentration, and tissue pressure of each compartment can be calculated by solving 40-dimensional non-linear simultaneous differential equations. The model was analyzed for validation under four conditions: a normal adult, an adult with higher cerebral capillary pressure, an adult with lower cerebral capillary pressure, and an adult with lower colloid concentration in the cerebral capillaries. The calculated fluid flows, tissue volumes, colloid concentrations, and tissue pressures all converged to values suitable for the set condition within at most 60 minutes. Because these results do not conflict with prior knowledge, the model can be considered to adequately represent the physiological state of the brain, at least under these limited conditions. One of the next challenges is to integrate this model with the already constructed cerebral blood circulation model. This will enable CBF and ICP to be simulated more precisely, by calculating the effect of blood pressure changes on extracellular fluid migration and of ICP changes on CBF.
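A toy two-compartment sketch, shown below, illustrates the compartmental-ODE approach under a rigid-cranium constraint; it is not the authors' fourteen-compartment, 40-equation model, and all parameter values are hypothetical round numbers.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy two-compartment sketch: extracellular (tissue) fluid and CSF inside a rigid cranium.
# All parameters, volumes and pressures are hypothetical round numbers.
V_CRANIUM = 1400.0   # fixed intracranial volume available to fluid and tissue (mL)

def icp(total_volume):
    """Pressure rises steeply as the rigid cranium fills (Monro-Kellie idea)."""
    return 5.0 + 0.05 * np.exp((total_volume - V_CRANIUM) / 20.0)

def fluid_exchange(t, volumes):
    v_tissue, v_csf = volumes
    pressure = icp(v_tissue + v_csf)
    q_filtration = 0.4 * (25.0 - pressure)       # capillary -> extracellular fluid (mL/min)
    q_drainage = 0.02 * (v_tissue - 1200.0)      # extracellular fluid -> CSF spaces
    q_absorption = 0.1 * (pressure - 7.0)        # CSF -> venous blood at arachnoid granulations
    return [q_filtration - q_drainage, q_drainage - q_absorption]

solution = solve_ivp(fluid_exchange, (0.0, 60.0), [1200.0, 150.0], max_step=0.5)
print("volumes after 60 min (mL):", solution.y[:, -1].round(1))
print("final pressure estimate (mmHg):", icp(solution.y[:, -1].sum()).round(1))
```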

Keywords: dynamic model, cerebral extracellular migration, brain resuscitation, automatic control

Procedia PDF Downloads 156
450 Community Policing: Exploring the Police and Community Participation for Crime Control in Bia West of Ghana

Authors: Bertha Korang Gyimah, Obed Asamoah, Kenross, T. Asamoah

Abstract:

In every human community, crimes or offenses cannot be eliminated, but because crimes are to be expected, there should be bodies that control and prevent them. The country has seen an increasing rate of crime, such as armed robbery, kidnapping, murder, and other forms of violence. Community participation in crime control cannot be left out in Ghana. Several studies have addressed the importance of community participation in policing, but the reasons why communities do not fully participate in community policing have been overlooked. The main aim of this research was to assess the impact of community policing and the reasons why communities are reluctant to take part in it to help control crime in Bia West. There is a perception that the police expose informants after they give a tip-off, putting whistleblowers' lives in danger. This has discouraged community members from getting involved in security issues in the communities they live in. The situation poses a serious threat to the Ghana Police Service and its ability to position itself strategically to carry out thorough investigations, bring perpetrators into custody, protect lives and property, and maintain law and order. Because of the limited data on community participation available to the Ghana Police Service, the research adopted an interpretative framework to assess the meaning attached to community policing from the perspectives of the stakeholders themselves. A qualitative research method was used: the police and the community were engaged through focus group discussions and individual in-depth interviews organized in randomly selected communities in the district. Key informant interviews were used to solicit people's views on why they are reluctant to give information to the police to help bring perpetrators to book. The data collected showed that most people had been threatened by offenders after their release from prison, and that some unprofessional police personnel expose whistleblowers, putting their lives in danger. The data obtained were analyzed using SPSS and Excel. Based on the analysis, a high number of people in the communities contacted had not made up their minds to participate in any security issues. Based on the views of the community, there should be a high level of professionalism in the recruitment system of the Ghana Police Service so as to produce professional police officers who abide by the rules and regulations governing the profession.

Keywords: community, Bia West, Ghana, participation, police

Procedia PDF Downloads 142
449 Artificial Neural Network and Satellite Derived Chlorophyll Indices for Estimation of Wheat Chlorophyll Content under Rainfed Condition

Authors: Muhammad Naveed Tahir, Wang Yingkuan, Huang Wenjiang, Raheel Osman

Abstract:

Numerous models are used in prediction and decision-making processes, but most of them are linear, and linear models reach their limitations when the data are non-linear, making accurate estimation difficult. Artificial Neural Networks (ANNs) have found extensive acceptance for modeling the complex, non-linear real world, since they offer more general and flexible functional forms than traditional statistical methods. The link between information technology and agriculture will become firmer in the near future. Monitoring crop biophysical properties non-destructively can provide a rapid and accurate understanding of the crop's response to various environmental influences. Crop chlorophyll content is an important indicator of crop health and therefore of the expected crop yield. In recent years, remote sensing has been accepted as a robust tool for site-specific management, detecting crop parameters at both local and large scales. The present research combined an ANN model with satellite-derived chlorophyll indices from LANDSAT 8 imagery to predict real-time wheat chlorophyll content. Cloud-free LANDSAT 8 scenes were acquired (February-March 2016-17) at the same time as a ground-truthing campaign in which chlorophyll was estimated using a SPAD-502 meter. Different vegetation indices were derived from the LANDSAT 8 imagery using ERDAS IMAGINE (v. 2014) software for chlorophyll determination: the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Chlorophyll Absorption Ratio Index (CARI), Modified Chlorophyll Absorption Ratio Index (MCARI), and Transformed Chlorophyll Absorption Ratio Index (TCARI). For ANN modeling, MATLAB and SPSS (ANN) tools were used; the Multilayer Perceptron (MLP) in MATLAB provided very satisfactory results. For MLP training, 61.7% of the data were used, 28.3% for validation, and the remaining 10% to evaluate and validate the ANN model results. For error evaluation, the sum of squares error and the relative error were used. The ANN model summary showed a sum of squares error of 10.786 and an average overall relative error of 0.099. MCARI and NDVI were revealed to be the most sensitive indices for assessing wheat chlorophyll content, with the highest coefficients of determination (R² = 0.93 and 0.90, respectively). The results suggest that retrieving crop chlorophyll content from high-spatial-resolution satellite imagery with an ANN model provides an accurate, reliable assessment of crop health status at a larger scale, which can help in managing crop nutrition requirements in real time.
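The index calculations and the MLP regression step can be sketched as follows, using synthetic Landsat 8 band reflectances and a scikit-learn MLP as a stand-in for the MATLAB implementation; the band-ratio formulas for NDVI and GNDVI are standard, while all data values are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Synthetic Landsat 8 surface reflectances: Band 3 (green), Band 4 (red), Band 5 (NIR).
rng = np.random.default_rng(42)
n = 300
green = rng.uniform(0.03, 0.12, n)
red = rng.uniform(0.02, 0.10, n)
nir = rng.uniform(0.20, 0.50, n)

ndvi = (nir - red) / (nir + red)        # Normalized Difference Vegetation Index
gndvi = (nir - green) / (nir + green)   # Green NDVI
X = np.column_stack([ndvi, gndvi])
spad = 20.0 + 35.0 * ndvi + rng.normal(0.0, 2.0, n)   # toy stand-in for SPAD-502 readings

X_train, X_test, y_train, y_test = train_test_split(X, spad, test_size=0.3, random_state=0)
mlp = make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(16, 8),
                                                   max_iter=5000, random_state=0))
mlp.fit(X_train, y_train)
print("R^2 on held-out data:", round(mlp.score(X_test, y_test), 3))
```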

Keywords: ANN, chlorophyll content, chlorophyll indices, satellite images, wheat

Procedia PDF Downloads 146
448 Nigerian Football System: Examining Meso-Level Practices against a Global Model for Integrated Development of Mass and Elite Sport

Authors: I. Derek Kaka’an, P. Smolianov, D. Koh Choon Lian, S. Dion, C. Schoen, J. Norberg

Abstract:

This study was designed to examine mass participation and elite football performance in Nigeria with reference to advanced international football management practices. Over 200 sources of literature on sport delivery systems were analyzed to construct a globally applicable model of elite football integrated with mass participation, comprising the following three levels: macro (socio-economic, cultural, legislative, and organizational), meso (infrastructures, personnel, and services enabling sport programs), and micro (operations, processes, and methodologies for the development of individual athletes). The model has received scholarly validation and has been shown to be a framework for program analysis that is not culturally bound. The Smolianov and Zakus model has been employed to further understand sport systems such as US soccer, US rugby, swimming, tennis, and volleyball, as well as Russian and Dutch swimming. A questionnaire was developed using the above-mentioned model, and the survey questions were validated by 12 experts, including academicians, executives from sport governing bodies, football coaches, and administrators. To identify best practices and determine areas for the improvement of football in Nigeria, 120 coaches completed the questionnaire. Useful exemplars and possible improvements were further identified through semi-structured discussions with 10 Nigerian football administrators and experts. Finally, a content analysis of the Nigeria Football Federation's website and organizational documentation was conducted. This paper focuses on the meso-level of Nigerian football delivery, particularly the infrastructures, personnel, and services enabling sport programs, including training centers, competition systems, and intellectual services. The results identified remarkable achievements coupled with great potential to further develop football in different types of public and private organizations in Nigeria. Opportunities include: assimilating football competitions with other cultural and educational activities; providing favorable conditions for employees of all possible organizations to partake in and help manage football programs and events; providing football coaching integrated with counseling for the prevention of antisocial conduct; and improving cooperation between football programs and organizations for peace-making and the advancement of international relations, tourism, and socio-economic development. Accurate reporting of sports programs by the media should be encouraged through staff training for better awareness of various events. The systematic integration of these meso-level practices into the balanced development of mass and high-performance football will contribute to international sporting success as well as national health, education, and social harmony.

Keywords: football, high performance, mass participation, Nigeria, sport development

Procedia PDF Downloads 252
447 Exploiting the Potential of Fabric Phase Sorptive Extraction for Forensic Food Safety: Analysis of Food Samples in Cases of Drug Facilitated Crimes

Authors: Bharti Jain, Rajeev Jain, Abuzar Kabir, Torki Zughaibi, Shweta Sharma

Abstract:

Drug-facilitated crimes (DFCs) entail the use of a single drug or a mixture of drugs to incapacitate a victim. Traditionally, biological samples have been gathered from victims and analyzed to establish evidence of drug administration. Nevertheless, the rapid metabolism of various drugs and delays in analysis can impede the identification of such substances. For this reason, the present article describes a rapid, sustainable, highly efficient, and miniaturized protocol for the identification and quantification of three sedative-hypnotic drugs, namely diazepam, chlordiazepoxide, and ketamine, in alcoholic beverages and complex food samples (cream of biscuit, flavored milk, juice, cake, tea, sweets, and chocolate). The methodology uses fabric phase sorptive extraction (FPSE) to extract diazepam (DZ), chlordiazepoxide (CDP), and ketamine (KET); the extracts are then analyzed by gas chromatography-mass spectrometry (GC-MS). Several parameters, including the type of membrane, pH, agitation time and speed, ionic strength, sample volume, elution volume and time, and type of elution solvent, were screened and thoroughly optimized. Sol-gel Carbowax 20M (CW-20M) demonstrated the highest extraction efficiency for the target analytes among all evaluated membranes. Under optimal conditions, the method displayed linearity within the range of 0.3–10 µg mL⁻¹ (or µg g⁻¹), with coefficients of determination (R²) ranging from 0.996 to 0.999. The limits of detection (LODs) and limits of quantification (LOQs) for liquid samples range between 0.020–0.069 µg mL⁻¹ and 0.066–0.22 µg mL⁻¹, respectively. Correspondingly, the LODs for solid samples ranged from 0.056–0.090 µg g⁻¹, while the LOQs ranged from 0.18–0.29 µg g⁻¹. Notably, the method showed good precision, with repeatability below 5% and reproducibility below 10%. Furthermore, the FPSE-GC-MS method proved effective in determining diazepam (DZ) in forensic food samples connected to drug-facilitated crimes (DFCs). Additionally, the proposed method was evaluated for its whiteness using the RGB12 algorithm.
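The kind of calibration-curve arithmetic behind the reported linearity, LOD, and LOQ figures can be illustrated as follows; the peak-area data are synthetic, and the 3.3σ/slope and 10σ/slope expressions are the usual ICH-style definitions rather than values taken from the study.

```python
import numpy as np

# Calibration-curve sketch with synthetic GC-MS peak areas for diazepam standards.
concentration = np.array([0.3, 1.0, 2.5, 5.0, 7.5, 10.0])    # µg/mL
peak_area = np.array([410, 1350, 3400, 6750, 10180, 13500])   # synthetic responses

slope, intercept = np.polyfit(concentration, peak_area, 1)
predicted = slope * concentration + intercept
r_squared = 1 - np.sum((peak_area - predicted) ** 2) / np.sum((peak_area - peak_area.mean()) ** 2)

residual_sd = np.sqrt(np.sum((peak_area - predicted) ** 2) / (len(concentration) - 2))
lod = 3.3 * residual_sd / slope    # ICH-style limit of detection
loq = 10.0 * residual_sd / slope   # ICH-style limit of quantification
print(f"R² = {r_squared:.4f}, LOD = {lod:.3f} µg/mL, LOQ = {loq:.3f} µg/mL")
```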

Keywords: drug facilitated crime, fabric phase sorptive extraction, food forensics, white analytical chemistry

Procedia PDF Downloads 70
446 Analysis and Optimized Design of a Packaged Liquid Chiller

Authors: Saeed Farivar, Mohsen Kahrom

Abstract:

The purpose of this work is to develop a physical simulation model for studying the effect of various design parameters on the performance of packaged liquid chillers. This paper presents a steady-state model for predicting the performance of a packaged liquid chiller over a wide range of operating conditions. The model inputs are the inlet conditions and geometry; the model outputs include system performance variables such as power consumption, the coefficient of performance (COP), and the states of the refrigerant through the refrigeration cycle. A computer model that simulates the steady-state cyclic performance of a vapor compression chiller is developed for the purpose of performing detailed physical design analysis of actual industrial chillers. The model can be used for design optimization and for detailed energy efficiency analysis of packaged liquid chillers. The simulation model accounts for all chiller components, such as the compressor, shell-and-tube condenser and evaporator heat exchangers, thermostatic expansion valve, and connection pipes and tubing, by thermo-hydraulic modeling of the heat transfer, fluid flow, and thermodynamic processes in each of these components. To verify the validity of the developed model, a 7.5 USRT packaged liquid chiller is used, and a laboratory test stand for bringing the chiller to its standard steady-state performance condition is built. Experimental results obtained from testing the chiller under various load and temperature conditions are shown to be in good agreement with those obtained from simulating the performance of the chiller using the computer prediction model. An entropy-minimization-based optimization analysis is performed based on the developed analytical performance model of the chiller. The variation of design parameters in the construction of the shell-and-tube condenser and evaporator heat exchangers is studied using the developed performance and optimization analysis and simulation model, and a best-match condition between the physical design and construction of the chiller heat exchangers and its compressor is found to exist. It is expected that manufacturers of chillers and research organizations interested in developing energy-efficient design and analysis of compression chillers can take advantage of the presented study and its results.
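A hedged sketch of the steady-state cycle bookkeeping behind a COP prediction is shown below, using the CoolProp property library; the R134a working fluid, the 2 °C/40 °C saturation temperatures, and the 70% isentropic efficiency are assumptions for illustration, not the chiller's actual design data.

```python
from CoolProp.CoolProp import PropsSI   # property library chosen for this sketch

# Assumed cycle conditions: R134a, 2 °C evaporation, 40 °C condensation, 70% isentropic efficiency.
T_evap, T_cond, eta_isentropic = 275.15, 313.15, 0.70   # K, K, -

h1 = PropsSI('H', 'T', T_evap, 'Q', 1, 'R134a')      # saturated vapour leaving the evaporator
s1 = PropsSI('S', 'T', T_evap, 'Q', 1, 'R134a')
p_cond = PropsSI('P', 'T', T_cond, 'Q', 0, 'R134a')
h2s = PropsSI('H', 'P', p_cond, 'S', s1, 'R134a')    # isentropic compressor discharge enthalpy
h2 = h1 + (h2s - h1) / eta_isentropic                # actual discharge enthalpy
h3 = PropsSI('H', 'T', T_cond, 'Q', 0, 'R134a')      # saturated liquid leaving the condenser
h4 = h3                                              # isenthalpic expansion valve

cop = (h1 - h4) / (h2 - h1)                          # refrigeration effect / compressor work
print(f"predicted COP = {cop:.2f}")
```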

Keywords: optimization, packaged liquid chiller, performance, simulation

Procedia PDF Downloads 278