Search results for: prediction of publications
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2636

86 Management of Non-Revenue Municipal Water

Authors: Habib Muhammetoglu, I. Ethem Karadirek, Selami Kara, Ayse Muhammetoglu

Abstract:

The problem of non-revenue water (NRW) in municipal water distribution networks is common in many countries, such as Turkey, where average yearly water losses are around 50%. Water losses can be divided into two major types: 1) real or physical water losses and 2) apparent or commercial water losses. Total water losses in Antalya city, Turkey, are around 45%. Methods: A research study was conducted to develop appropriate methodologies to reduce NRW. A pilot study area of about 60 thousand inhabitants was chosen for the study. The pilot study area has a supervisory control and data acquisition (SCADA) system for monitoring and controlling many water quantity and quality parameters at the groundwater drinking wells, pumping stations, distribution reservoirs, and along the water mains. The pilot study area was divided into 18 District Metered Areas (DMAs) whose number of service connections ranged from a few to less than 3000. The flow rate and water pressure to each DMA were measured continuously online by an accurate flow meter and a water pressure meter connected to the SCADA system. Customer water meters were installed for all billed and unbilled water users. The monthly water consumption given by the water meters was recorded regularly. A water balance was carried out for each DMA using the well-known standard IWA approach. There were considerable variations in the water loss percentages and the components of the water losses among the DMAs of the pilot study area. Old Class B customer water meters in one DMA were replaced by more accurate new Class C water meters. Hydraulic modelling using the US-EPA EPANET model was carried out in the pilot study area to predict water pressure variations in each DMA. The data sets required to calibrate and verify the hydraulic model were supplied by the SCADA system. A number of the DMAs exhibited high water pressure values.
Therefore, pressure reducing valves (PRVs) with constant head were installed to reduce the pressure to a suitable level determined by the hydraulic model. On the other hand, the hydraulic model revealed that the water pressure in the remaining DMAs could not be reduced while still complying with the minimum pressure requirement (3 bars) stated in the related standards. Results: Physical water losses were reduced considerably as a result of reducing water pressure alone. Further reduction of physical water losses was achieved by applying acoustic methods. The results of the water balances helped identify the DMAs with considerable physical losses. Many bursts were detected, especially in the DMAs with high physical water losses. The SCADA system was very useful for assessing the efficiency of this method and checking the quality of repairs. Regarding apparent water losses, replacing the customer water meters increased water revenue by more than 20%. Conclusions: DMAs, SCADA, hydraulic modelling, pressure management, leakage detection, and accurate customer water meters are efficient tools for reducing NRW.
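As an illustration of the standard IWA water balance mentioned above, the sketch below splits a DMA's system input volume into its loss components; the volumes are hypothetical and not taken from the Antalya pilot study.

```python
def iwa_water_balance(system_input, billed_authorized, unbilled_authorized,
                      apparent_losses):
    """Split system input into loss components following the IWA top-down method."""
    authorized = billed_authorized + unbilled_authorized
    total_losses = system_input - authorized         # all water losses
    real_losses = total_losses - apparent_losses     # physical leakage, by difference
    nrw = system_input - billed_authorized           # all water earning no revenue
    return {"total_losses": total_losses, "real_losses": real_losses,
            "nrw": nrw, "nrw_percent": 100.0 * nrw / system_input}

# Hypothetical monthly volumes for one DMA, in cubic metres.
balance = iwa_water_balance(system_input=100_000, billed_authorized=55_000,
                            unbilled_authorized=2_000, apparent_losses=8_000)
```

Each DMA's balance is computed from the metered inflow and the billed consumption recorded by the customer meters, so the physical leakage component is obtained by difference.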

Keywords: NRW, water losses, pressure management, SCADA, apparent water losses, urban water distribution networks

Procedia PDF Downloads 405
85 42CrMo4 Steel Flow Behavior Characterization for High Temperature Closed Dies Hot Forging in Automotive Components Applications

Authors: O. Bilbao, I. Loizaga, F. A. Girot, A. Torregaray

Abstract:

The current energy situation and the high competitiveness of industrial sectors such as the automotive industry have made the development of new manufacturing processes with lower energy and raw material consumption a real necessity. As a consequence, new forming processes based on high-temperature hot forging in closed dies have emerged in recent years as solutions to expand the possibilities of hot forging and iron casting in the automotive industry. These technologies are midway between hot forging and semi-solid metal processes, working at temperatures higher than those of hot forging but below the solidus temperature or the semi-solid range, where no liquid phase is expected. This is an advantage compared with semi-solid forming processes such as thixoforging, since such high temperatures need not be reached for high-melting-point alloys such as steels, reducing manufacturing costs and the difficulties associated with their semi-solid processing. Compared with hot forging, these technologies allow the production of parts with as-forged properties and more complex, near-net shapes (thinner sidewalls), enhancing the possibility of designing lightweight components. From the process viewpoint, the forging forces are significantly decreased, and significant reductions in raw material, energy consumption, and the number of forging steps have been demonstrated. Despite these advantages, from the material behavior point of view, the expansion of these technologies has shown the necessity of developing new material flow behavior models in the working temperature range of the process to make the simulation or prediction of these new forming processes feasible. Moreover, knowledge of the material flow behavior in the working temperature range also allows the design of the new closed-die concepts required.
In this work, the flow behavior of 42CrMo4 steel, widely used in commercial automotive components, has been characterized in the mentioned temperature range. To that end, hot compression tests were carried out in a thermomechanical tester over a temperature range covering material behavior from hot forging up to the NDT (nil ductility temperature) (1250 ºC, 1275 ºC, 1300 ºC, 1325 ºC, 1350 ºC, and 1375 ºC). As for the strain rates, three different orders of magnitude were considered (0.1 s⁻¹, 1 s⁻¹, and 10 s⁻¹). The results of the hot compression tests were then processed in order to adapt or rewrite the Spittel model, widely used in commercial automotive software such as FORGE®, whose existing models are restricted to temperatures up to 1250 ºC. Finally, the new flow behavior model was validated by simulating the process for a commercial automotive component and comparing the simulation results with experimental tests already performed in a laboratory cell of the new technology. As a conclusion of the study, a new flow behavior model for 42CrMo4 steel in the new working temperature range and a new process simulation of its application to commercial automotive components have been achieved and will be shown.
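A common form of the (Hensel-)Spittel flow stress law adapted in this work is sigma = A·exp(m1·T)·ε^m2·ε̇^m3·exp(m4/ε). The sketch below evaluates that form; the coefficients are placeholders for illustration, not the fitted 42CrMo4 values obtained in the study.

```python
import math

def spittel_flow_stress(strain, strain_rate, temp_c,
                        A=3000.0, m1=-0.003, m2=0.15, m3=0.12, m4=-0.05):
    """sigma = A * exp(m1*T) * eps^m2 * epsdot^m3 * exp(m4/eps), in MPa.

    Coefficients are illustrative placeholders, not fitted values.
    """
    return (A * math.exp(m1 * temp_c)
            * strain ** m2
            * strain_rate ** m3
            * math.exp(m4 / strain))

# At fixed strain and strain rate, flow stress decreases with temperature.
s_1250 = spittel_flow_stress(strain=0.5, strain_rate=1.0, temp_c=1250.0)
s_1350 = spittel_flow_stress(strain=0.5, strain_rate=1.0, temp_c=1350.0)
```

Fitting the m-coefficients to compression data at each test temperature is what extends the model beyond the 1250 ºC limit of the existing software implementations.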

Keywords: 42CrMo4 high temperature flow behavior, high temperature hot forging in closed dies, simulation of automotive commercial components, spittel flow behavior model

Procedia PDF Downloads 129
84 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models

Authors: Haya Salah, Srinivas Sharan

Abstract:

Healthcare facilities use appointment systems to schedule appointments and manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction. On the other hand, appointment durations that are longer than required lead to doctor idle time and fewer patient visits. Therefore, a good estimate of consultation duration has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. Therefore, this study aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) from potential features that influence it. In particular, ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study was obtained from the electronic medical records (EMR) of four different outpatient clinics located in central Pennsylvania, USA. In addition, publicly available information on doctors' characteristics, such as gender and experience, was extracted from online sources. This research develops three popular ML algorithms (deep learning, random forest, and gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of their predictive performance.
The findings of this study indicate that ML algorithms have the potential to predict provider service time with superior accuracy. While the clinic's current approach of experience-based appointment duration estimation resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forest (14.71%). This research also identified the critical variables affecting consultation duration: patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Moreover, several practical insights were obtained from the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system to manage patient scheduling effectively.
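The MAPE metric used to compare the models above can be sketched as follows; the consultation durations and predictions below are invented for illustration, not the study's data.

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(a - p) / a
                       for a, p in zip(actual, predicted)) / len(actual)

actual   = [20.0, 30.0, 15.0, 40.0]   # invented true consultation durations (min)
model    = [18.0, 33.0, 14.0, 42.0]   # invented model predictions
baseline = [30.0, 30.0, 30.0, 30.0]   # fixed experience-based slot length
```

A lower MAPE for the learned model than for the fixed slot length mirrors the comparison reported in the abstract.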

Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time

Procedia PDF Downloads 121
83 Uncertainty Quantification of Crack Widths and Crack Spacing in Reinforced Concrete

Authors: Marcel Meinhardt, Manfred Keuser, Thomas Braml

Abstract:

Cracking of reinforced concrete is a complex phenomenon induced by direct loads or restraints affecting reinforced concrete structures as soon as the tensile strength of the concrete is exceeded. Hence, it is important to predict where cracks will be located and how they will propagate. The bond theory and the crack formulas in current design codes, for example, DIN EN 1992-1-1, are all based on the assumption that the reinforcement bars are embedded in homogeneous concrete, without taking into account the influence of transverse reinforcement and the real stress situation. However, it can often be observed that real structures such as walls, slabs, or beams show a crack spacing oriented to the transverse reinforcement bars or the stirrups. In most finite element analysis studies, the smeared crack approach is used for crack prediction. The disadvantage of this model is that the typical strain localization of a crack cannot be seen at the element level. Crack propagation in concrete is a discontinuous process characterized by different factors, such as the initial random distribution of defects or the scatter of material properties. Such behavior presupposes the elaboration of adequate models and simulation methods, because traditional mechanical approaches deal mainly with average material parameters. This paper is concerned with modelling the initiation and propagation of cracks in reinforced concrete structures, considering the influence of transverse reinforcement and the real stress distribution in reinforced concrete (R/C) beams/plates in bending. Therefore, a parameter study was carried out to investigate: (I) the influence of the transverse reinforcement on the stress distribution in concrete in bending and (II) crack initiation as a function of the diameter and spacing of the transverse reinforcement.
The numerical investigations of crack initiation and propagation were carried out on a 2D reinforced concrete structure subjected to quasi-static loading and given boundary conditions. To model the uncertainty in the tensile strength of concrete in the finite element analysis, correlated normally and lognormally distributed random fields with different correlation lengths were generated. The paper also presents and discusses different methods of generating random fields, e.g., the covariance matrix decomposition method. For all computations, a plastic constitutive law with softening was used to model crack initiation and the damage of the concrete in tension. It was found that the distributions of crack spacing and crack widths are highly dependent on the random field used. These distributions were validated against experimental studies on R/C panels carried out at the Laboratory for Structural Engineering at the University of the German Armed Forces in Munich. A recommendation for the parameters of the random field for realistic modelling of the uncertainty of the tensile strength is also given. The aim of this research was to show a method by which the localization of strains and cracks, as well as the influence of transverse reinforcement on crack initiation and propagation, can be seen in finite element analysis.

Keywords: crack initiation, crack modelling, crack propagation, cracks, numerical simulation, random fields, reinforced concrete, stochastic

Procedia PDF Downloads 157
82 An Adaptable Semi-Numerical Anisotropic Hyperelastic Model for the Simulation of High Pressure Forming

Authors: Daniel Tscharnuter, Eliza Truszkiewicz, Gerald Pinter

Abstract:

High-quality surfaces of plastic parts can be achieved in a very cost-effective manner using in-mold processes, where, e.g., scratch-resistant or high-gloss polymer films are pre-formed and subsequently receive their support structure by injection molding. The pre-forming may be done by high-pressure forming. In this process, a polymer sheet is heated and subsequently formed into the mold by pressurized air. Due to heat transfer to the cooled mold, the polymer temperature drops below its glass transition temperature. This ensures that the deformed microstructure is retained after depressurizing, giving the sheet its final formed shape. The development of a forming process relies heavily on the experience of engineers and on trial-and-error procedures. Repeated mold design and testing cycles are, however, both time- and cost-intensive. It is therefore desirable to study the process using reliable computer simulations. Through simulations, the construction of the mold and the effect of various process parameters, e.g., temperature levels, non-uniform heating, or the timing and magnitude of pressure, on the deformation of the polymer sheet can be analyzed. Detailed knowledge of the deformation is particularly important in the forming of polymer films with integrated electro-optical functions. Care must be taken in the placement of devices, sensors, and electrical and optical paths, which are far more sensitive to deformation than the polymers. Reliable numerical prediction of the deformation of the polymer sheets requires sophisticated material models. Polymer films are often either transversely isotropic or orthotropic due to molecular orientations induced during manufacturing. The anisotropic behavior affects the resulting strain field in the deformed film. For example, parts of the same shape but with different strain fields may be created by varying the orientation of the film with respect to the mold.
The numerical simulation of the high-pressure forming of such films thus requires material models that can capture the nonlinear anisotropic mechanical behavior. There are numerous commercial polymer grades for engineers to choose from when developing a new part. The effort required for comprehensive material characterization may be prohibitive, especially when several materials are candidates for a specific application. We therefore propose a class of models for compressible hyperelasticity, which may be determined from basic experimental data and which can capture key features of the mechanical response. Invariant-based hyperelastic models with a reduced number of invariants are formulated in a semi-numerical way, such that the models are determined from a single uniaxial tensile test for isotropic materials, or from two tensile tests in the principal directions for transversely isotropic or orthotropic materials. The simulation of the high-pressure forming of an orthotropic polymer film is finally carried out using an orthotropic formulation of the hyperelastic model.
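As a minimal, hypothetical instance of the invariant-based compressible hyperelastic model class described above, the sketch below evaluates a neo-Hookean strain energy from the isochoric first invariant and the volume ratio; the moduli are arbitrary, and the authors' actual semi-numerical anisotropic formulation is not reproduced here.

```python
import numpy as np

def neo_hookean_energy(F, mu=1.0, kappa=10.0):
    """Strain energy W(F) from the isochoric first invariant and volume ratio.

    mu (shear) and kappa (bulk) are arbitrary illustrative moduli.
    """
    J = np.linalg.det(F)                             # volume change
    I1_bar = np.trace(F.T @ F) * J ** (-2.0 / 3.0)   # isochoric first invariant
    return 0.5 * mu * (I1_bar - 3.0) + 0.5 * kappa * (J - 1.0) ** 2

W_id = neo_hookean_energy(np.eye(3))                        # undeformed: zero energy
W_stretch = neo_hookean_energy(np.diag([1.2, 0.95, 0.95]))  # uniaxial-like stretch
```

Anisotropic variants of this model class add structural invariants along the preferred material directions, which is where the second tensile test enters for orthotropic films.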

Keywords: hyperelastic, anisotropic, polymer film, thermoforming

Procedia PDF Downloads 617
81 The Effects of Goal Setting and Feedback on Inhibitory Performance

Authors: Mami Miyasaka, Kaichi Yanaoka

Abstract:

Attention Deficit/Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder characterized by inattention, hyperactivity, and impulsivity; symptoms often manifest during childhood. In children with ADHD, the development of inhibitory processes is impaired. Inhibitory control allows people to avoid processing unnecessary stimuli and to behave appropriately in various situations; thus, people with ADHD require interventions to improve inhibitory control. Positive or negative reinforcement (i.e., reward or punishment) helps improve the performance of children with such difficulties. However, to optimize its impact, reward or punishment must be presented immediately after the relevant behavior. In regular elementary school classrooms, such supports are uncommon; hence, an alternative practical intervention method is required. One potential intervention involves setting goals to keep children motivated to perform tasks. This study examined whether goal setting improved inhibitory performance, especially for children with severe ADHD-related symptoms. We also focused on giving feedback on children's task performance. We expected that giving children feedback would help them set reasonable goals and monitor their performance. Feedback can be especially effective for children with severe ADHD-related symptoms because they have difficulty monitoring their own performance, perceiving their errors, and correcting their behavior. Our prediction was that goal setting by itself would be effective for children with mild ADHD-related symptoms, and that goal setting based on feedback would be effective for children with severe ADHD-related symptoms. Japanese elementary school children and their parents were the sample for this study. Children performed two kinds of go/no-go tasks, and parents completed a checklist about their children's ADHD symptoms, the ADHD Rating Scale-IV, and the Conners 3rd edition.
The go/no-go task is a cognitive task that measures inhibitory performance. Children were asked to press a key on the keyboard when a particular symbol appeared on the screen (go stimulus) and to refrain from doing so when another symbol was displayed (no-go stimulus). Errors in response to a no-go stimulus indicate inhibitory impairment. To examine the effect of goal setting on inhibitory control, 37 children (Mage = 9.49 ± 0.51) were required to set a performance goal, and 34 children (Mage = 9.44 ± 0.50) were not. Further, to manipulate the presence of feedback, no information about children's scores was provided in one go/no-go task, whereas scores were revealed in the other. The results revealed a significant interaction between goal setting and feedback. However, the three-way interaction between ADHD-related inattention, feedback, and goal setting was not significant. These results indicated that goal setting improved go/no-go performance only when combined with feedback, regardless of ADHD severity. Furthermore, we found an interaction between ADHD-related inattention and feedback, indicating that informing inattentive children of their scores made them more impulsive. Taken together, feedback alone was, unexpectedly, too demanding for children with severe ADHD-related symptoms, but the combination of goal setting with feedback was effective for improving their inhibitory control. We discuss effective interventions for children with ADHD from the perspective of goal setting and feedback. This work was supported by the 14th Hakuho Research Grant for Child Education of the Hakuho Foundation.
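Go/no-go performance of the kind described above is typically scored via commission errors (responses to no-go stimuli, indexing failed inhibition) and omission errors (missed go stimuli); a minimal sketch with invented trial data:

```python
def score_go_nogo(trials):
    """trials: list of (stimulus, responded); stimulus is 'go' or 'nogo'."""
    nogo = [resp for stim, resp in trials if stim == "nogo"]
    go = [resp for stim, resp in trials if stim == "go"]
    commission = sum(nogo) / len(nogo)            # responded to no-go: failed inhibition
    omission = sum(not r for r in go) / len(go)   # missed go trials
    return commission, omission

# Invented trial log: (stimulus type, whether the child pressed the key).
trials = [("go", True), ("go", True), ("go", False),
          ("nogo", False), ("nogo", True)]
commission_rate, omission_rate = score_go_nogo(trials)
```

The commission rate is the measure on which a goal-setting or feedback effect would appear as a reduction.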

Keywords: attention deficit disorder with hyperactivity, feedback, goal-setting, go/no-go task, inhibitory control

Procedia PDF Downloads 104
80 Identification of Hub Genes in the Development of Atherosclerosis

Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia

Abstract:

Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that obstruct blood flow and trigger cardiovascular diseases such as heart attack and stroke. The underlying molecular mechanisms remain unclear, although many studies have revealed the dysfunction of endothelial cells, the recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease's molecular mechanisms. Through the analysis of microarray data, we examined gene expression in media and neo-intima from plaques, as well as in distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly noting an increase in monocytes and macrophages. In weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with the blue, dark olive green, and sky-blue modules being the most significant. These modules corresponded, respectively, to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns, a total of 2509 key genes (gene significance > 0.2, module membership > 0.8) were extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLN2, and ACADL) were then identified by intersecting the 2509 key genes and 102 DEGs with lipid-related genes from the GeneCard database.
The discriminative power of the six hub genes was estimated with a robust classifier achieving an area under the curve (AUC) of 0.873 in the ROC plot, indicating excellent efficacy in differentiating between the disease and control groups. Moreover, PCA visualization demonstrated clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, thereby broadening the potential for clinical applications and therapeutic discoveries.
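The AUC reported above can be computed rank-wise (equivalently to the Mann-Whitney U statistic); the sketch below uses toy classifier scores, not the study's expression data.

```python
def auc(scores_pos, scores_neg):
    """Rank-based AUC: fraction of (positive, negative) pairs ranked correctly."""
    wins = sum((p > n) + 0.5 * (p == n)      # ties count as half a win
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

disease = [0.9, 0.8, 0.75, 0.6]   # toy classifier scores, disease group
control = [0.4, 0.5, 0.65, 0.3]   # toy classifier scores, control group
auc_value = auc(disease, control)
```

An AUC near 0.87, as reported, means roughly 87% of disease-control pairs are ranked in the correct order by the hub-gene classifier.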

Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics

Procedia PDF Downloads 66
79 External Validation of Established Pre-Operative Scoring Systems in Predicting Response to Microvascular Decompression for Trigeminal Neuralgia

Authors: Kantha Siddhanth Gujjari, Shaani Singhal, Robert Andrew Danks, Adrian Praeger

Abstract:

Background: Trigeminal neuralgia (TN) is a heterogeneous pain syndrome characterised by short paroxysms of lancinating facial pain in the distribution of the trigeminal nerve, often triggered by usually innocuous stimuli. TN has a low prevalence of less than 0.1%, of which 80% to 90% is caused by compression of the trigeminal nerve by an adjacent artery or vein. The root entry zone of the trigeminal nerve is most sensitive to neurovascular conflict (NVC), which causes dysmyelination. Whilst microvascular decompression (MVD) is an effective treatment for TN with NVC, not all patients achieve long-term pain relief. Pre-operative scoring systems by Panczykowski and Hardaway have been proposed but have not been externally validated. These pre-operative scoring systems are composite scores calculated from the subtype of TN, the presence and degree of neurovascular conflict, and the response to medical treatments. There is discordance between neurosurgeons and radiologists in the assessment of NVC identified on pre-operative magnetic resonance imaging (MRI). To the best of our knowledge, the prognostic impact for MVD of this difference of interpretation has not previously been investigated in the form of a composite scoring system such as those suggested by Panczykowski and Hardaway. Aims: This study aims to identify prognostic factors and externally validate the proposed scoring systems by Panczykowski and Hardaway for TN. A secondary aim is to investigate the prognostic difference between a neurosurgeon's interpretation of NVC on MRI and a radiologist's. Methods: This retrospective cohort study included 95 patients who underwent de novo MVD in a single neurosurgical unit in Melbourne. Data were recorded from patients' hospital records and the neurosurgeon's correspondence from perioperative clinic reviews.
Patient demographics, type of TN, distribution of TN, response to carbamazepine, and the neurosurgeon's and radiologist's interpretations of NVC on MRI were clearly described prospectively and preoperatively in the correspondence. The scoring systems published by Panczykowski et al. and Hardaway et al. were used to determine composite scores, which were compared with the recurrence of TN recorded during follow-up over 1 year. Categorical data were analysed using Pearson chi-square testing. Independent numerical and nominal data were analysed with logistic regression. Results: Logistic regression showed that a Panczykowski composite score of greater than 3 points was associated with a higher likelihood of pain-free outcome 1 year post-MVD, with an OR of 1.81 (95% CI 1.41-2.61, p=0.032). The composite score using the neurosurgeon's impression of NVC had an OR of 2.96 (95% CI 2.28-3.31, p=0.048). A Hardaway composite score of greater than 2 points was associated with a higher likelihood of pain-free outcome 1 year post-MVD, with an OR of 3.41 (95% CI 2.58-4.37, p=0.028). The composite score using the neurosurgeon's impression of NVC had an OR of 3.96 (95% CI 3.01-4.65, p=0.042). Conclusion: The composite scores developed by Panczykowski and Hardaway were validated for predicting response to MVD in TN. A composite score based on the neurosurgeon's interpretation of NVC on MRI, when compared with the radiologist's, had a greater correlation with pain-free outcomes 1 year post-MVD.
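Odds ratios like those reported above are obtained by exponentiating logistic-regression coefficients; a minimal sketch with an invented coefficient and standard error (not the study's fitted values):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient into an OR with 95% CI."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative coefficient and standard error for a binary score predictor.
or_point, ci_low, ci_high = odds_ratio_ci(beta=0.593, se=0.155)
```

An OR above 1 with a CI excluding 1 is what supports the association between a high composite score and a pain-free outcome.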

Keywords: de novo microvascular decompression, neurovascular conflict, prognosis, trigeminal neuralgia

Procedia PDF Downloads 74
78 Comparison of Incidence and Risk Factors of Early Onset and Late Onset Preeclampsia: A Population Based Cohort Study

Authors: Sadia Munir, Diana White, Aya Albahri, Pratiwi Hastania, Eltahir Mohamed, Mahmood Khan, Fathima Mohamed, Ayat Kadhi, Haila Saleem

Abstract:

Preeclampsia is a major complication of pregnancy, and its prediction and management remain a challenge for obstetricians. To our knowledge, no major progress has been achieved in the prevention and early detection of preeclampsia, and very little is known about a clear treatment path for this disorder. Preeclampsia puts both mother and baby at risk of several short-term and long-term health problems later in life, and preeclampsia and its complications impose a huge cost burden on the health care system. Preeclampsia is divided into two types: early onset preeclampsia develops before 34 weeks of gestation, and late onset preeclampsia develops at or after 34 weeks of gestation. Different genetic and environmental factors, prognoses, heritability, and biochemical and clinical features are associated with early and late onset preeclampsia. The prevalence of preeclampsia varies greatly all over the world and depends on the ethnicity of the population and the geographic region. To the authors' best knowledge, no published data on preeclampsia exist for Qatar. In this study, we report the incidence of preeclampsia in Qatar, and the purpose of this study is to compare the incidence and risk factors of early onset and late onset preeclampsia in Qatar. This retrospective longitudinal cohort study was conducted using data from the hospital records of the Women's Hospital, Hamad Medical Corporation (HMC), from May 2014 to May 2016. The data collection tool, approved by HMC, was a researcher-made extraction sheet that included information such as blood pressure on admission, sociodemographic characteristics, delivery mode, and newborn details. A total of 1929 patient files were identified by hospital information management using the diagnosis codes for preeclampsia.
Out of the 1929 files, 878 had significant gestational hypertension without proteinuria, 365 had preeclampsia, 364 had severe preeclampsia, and 188 had preexisting hypertension with superimposed proteinuria. In this study, 78% of the data was obtained from the hospital electronic system (Cerner) and the remaining 22% from patients' paper records. We carried out detailed data extraction from 560 files. Initial data analysis revealed that 15.02% of pregnancies were complicated by preeclampsia from May 2014 to May 2016. We analyzed differences between the two disease entities in ethnicity, maternal age, severity of hypertension, mode of delivery, and infant birth weight, and identified promising differences in the risk factors of early onset and late onset preeclampsia. These clinical findings will contribute to increased knowledge about the two disease entities, their etiology, and their similarities and differences. The findings of this study can also be used in predicting health challenges, improving the health care system, setting up guidelines, and providing the best care for women suffering from preeclampsia.

Keywords: preeclampsia, incidence, risk factors, maternal

Procedia PDF Downloads 141
77 Accelerating Personalization Using Digital Tools to Drive Circular Fashion

Authors: Shamini Dhana, G. Subrahmanya VRK Rao

Abstract:

The fashion industry is advancing towards a mindset of zero waste, personalization, creativity, and circularity. The next generation is demanding the upcycling of clothing and materials into personalized fashion, and a digital tool is needed to accelerate the process towards mass customization. Dhana's D/Sphere fashion technology platform uses digital tools to accelerate upcycling. In essence, advanced fashion garments can be designed and developed via reuse, repurposing, and recreation, using existing fabric and circulating materials. The D/Sphere platform has the following objectives: to provide (1) an opportunity to develop modern fashion using existing, finished materials and clothing without chemicals or water consumption; (2) the potential for everyday customers and designers to use the medium of fashion for creative expression; (3) a solution to address the global textile waste generated by pre- and post-consumer fashion; (4) a solution to reduce carbon emissions and water and energy consumption with the participation of all stakeholders; (5) an opportunity for brands, manufacturers, and retailers to work towards zero-waste designs and an alternative revenue stream. Other benefits of this approach include sustainability metrics, trend prediction, facilitation of disassembly and remanufacture, deep learning, and hyper-heuristics for high accuracy. A design tool for mass personalization and customization utilizing existing circulating materials and deadstock, targeted at fashion stakeholders, will lower environmental costs, increase revenues through up-to-date upcycled apparel, produce less textile waste during the cut-sew-stitch process, and provide a real design solution that lets the end customer be part of circular fashion.
The broader impact of this technology will result in a different mindset to circular fashion, increase the value of the product through multiple life cycles, find alternatives towards zero waste, and reduce the textile waste that ends up in landfills. This technology platform will be of interest to brands and companies that have the responsibility to reduce their environmental impact and contribution to climate change as it pertains to the fashion and apparel industry. Today, over 70% of the $3 trillion fashion and apparel industry ends up in landfills. To this extent, the industry needs such alternative techniques to both address global textile waste as well as provide an opportunity to include all stakeholders and drive circular fashion with new personalized products. This type of modern systems thinking is currently being explored around the world by the private sector, organizations, research institutions, and governments. This technological innovation using digital tools has the potential to revolutionize the way we look at communication, capabilities, and collaborative opportunities amongst stakeholders in the development of new personalized and customized products, as well as its positive impacts on society, our environment, and global climate change.

Keywords: circular fashion, deep learning, digital technology platform, personalization

Procedia PDF Downloads 65
76 Effects of Exposure to a Language on Perception of Non-Native Phonologically Contrastive Duration

Authors: Chuyu Huang, Itsuki Minemi, Kuanlin Chen, Yuki Hirose

Abstract:

It remains unclear how speakers perceive phonological contrasts that do not exist in their own language. This experiment uses the vowel-length distinction in Japanese, which is phonologically contrastive and co-occurs with tonal change in some cases. Speakers whose first language does not distinguish vowel length, such as Mandarin speakers, usually misperceive contrastive duration. Two alternative hypotheses for how Mandarin speakers would perceive a phonological contrast that does not exist in their language make different predictions. The stress parameter model has no clear prediction about the impact of tonal type: Mandarin speakers will likely not perceive vowel length as well as Japanese native speakers do, but their performance might not correlate with tonal type, because the prosody of their language is distinctive, which requires users to encode lexical prosody and notice subtle differences in word prosody. By contrast, cue-based phonetic models predict that Mandarin speakers may rely on pitch differences, a secondary cue, to perceive vowel length. Two groups of Mandarin speakers, naive non-Japanese speakers and beginner learners, were recruited to participate in an AX discrimination task involving two Japanese sound stimuli that contain a phonologically contrastive environment. Participants were asked to indicate whether the two stimuli containing a vowel-length contrast (e.g., maapero vs. mapero) sound the same. The experiment was bifactorial. The first factor contrasted three syllabic positions (syllable position: initial/medial/final), as position is likely to affect perceptual difficulty, as seen in previous studies; the second factor contrasted two pitch types (accent type): one with an accentual change that could be distinguished with the lexical tones in Mandarin (the different condition), the other with no tonal distinction, differing only in vowel length (the same condition). 
The overall results showed a significant main effect of accent type in a linear mixed-effects model (β = 1.48, SE = 0.35, p < 0.05), which implies that Mandarin speakers more successfully recognize vowel-length differences when the long-vowel counterpart takes on a tone that exists in Mandarin. The interaction between accent type and syllabic position was also significant (β = 2.30, SE = 0.91, p < 0.05), showing that vowel lengths in the different condition are more difficult to recognize in the word-final position than in the initial position. A second statistical model, comparing naive speakers to beginners, used logistic regression to test the effect of participant group. A significant difference was found between the two groups (β = 1.06, 95% CI = [0.36, 2.03], p < 0.05). This study shows that: (1) Mandarin speakers are likely to use pitch cues to perceive vowel length in a non-native language, which is consistent with cue-based approaches; (2) an exposure effect was observed: the beginner group achieved higher accuracy for long-vowel perception, implying an exposure effect despite the short period of language-learning experience.
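As a rough illustration of the analysis described above, the sketch below fits a linear mixed-effects model with a by-participant random intercept to synthetic AX-discrimination scores. The variable names (accent_type, syllable_pos) and the simulated effect sizes are assumptions for the example, not the study’s data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for subj in range(30):
    subj_intercept = rng.normal(0, 0.5)  # by-participant random intercept
    for _ in range(12):
        accent = int(rng.integers(0, 2))  # 0 = same tone, 1 = tone exists in L1
        pos = rng.choice(["initial", "medial", "final"])
        # discrimination score improves when the tone differs (simulated effect)
        score = 0.2 + 1.5 * accent + subj_intercept + rng.normal(0, 1)
        rows.append({"subject": subj, "accent_type": accent,
                     "syllable_pos": pos, "score": score})
df = pd.DataFrame(rows)

# Fixed effects for accent type and syllable position, random intercept
# grouped by participant
result = smf.mixedlm("score ~ accent_type + syllable_pos", df,
                     groups=df["subject"]).fit()
print(result.params["accent_type"])  # fixed-effect estimate for accent type
```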

Keywords: cue-based perception, exposure effect, prosodic perception, vowel duration

Procedia PDF Downloads 220
75 Multi-Objectives Genetic Algorithm for Optimizing Machining Process Parameters

Authors: Dylan Santos De Pinho, Nabil Ouerhani

Abstract:

Energy consumption of machine-tools is becoming critical for machine-tool builders and end-users for economic, ecological, and legislative reasons. Many machine-tool builders are seeking solutions that reduce the energy consumption of machine-tools while preserving the same productivity rate and the same quality of machined parts. In this paper, we present the first results of a project conducted jointly by academic and industrial partners to reduce the energy consumption of a Swiss-Type lathe. We employ genetic algorithms to find optimal machining parameters – the set of parameters that leads to the best trade-off between energy consumption, part quality, and tool lifetime. Three main machining process parameters are considered in our optimization technique, namely depth of cut, spindle rotation speed, and material feed rate. These machining process parameters have been identified as the most influential ones in the configuration of the Swiss-type machining process. A state-of-the-art multi-objective genetic algorithm has been used. The algorithm combines three fitness functions – objective functions that evaluate a set of parameters against the three objectives: energy consumption, quality of the machined parts, and tool lifetime. In this paper, we focus on the investigation of the fitness function related to energy consumption. Four different energy-consumption-related fitness functions have been investigated and compared. The first fitness function refers to the Kienzle cutting force model. The second fitness function uses the Material Removal Rate (MRR) as an indicator of energy consumption. The two other fitness functions are non-deterministic, learning-based functions. One uses a simple neural network to learn the relation between the process parameters and the energy consumption from experimental data. The other uses Lasso regression to determine the same relation. 
The goal is then to find out which fitness function best predicts the energy consumption of a Swiss-Type machining process for a given set of machining process parameters. Once determined, these functions may be used for optimization purposes – determining the optimal machining process parameters leading to minimum energy consumption. The performance of the four fitness functions has been evaluated. The Tornos DT13 Swiss-Type lathe has been used to carry out the experiments. A mechanical part including various Swiss-Type machining operations has been selected for the experiments. The evaluation process starts with generating a set of CNC (Computer Numerical Control) programs for machining the part at hand. Each CNC program considers a different set of machining process parameters. During the machining process, the power consumption of the spindle is measured. All collected data are assigned to the appropriate CNC program and thus to the set of machining process parameters. The evaluation approach consists of calculating the correlation between the normalized measured power consumption and the normalized power-consumption prediction for each of the four fitness functions. The evaluation shows that the Lasso and neural network fitness functions have the highest correlation coefficient, at 97%. The Material Removal Rate (MRR) fitness function has a correlation coefficient of 90%, whereas the Kienzle-based fitness function has a correlation coefficient of 80%.
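The evaluation step described above can be sketched as follows. The data here are synthetic stand-ins for the lathe measurements, the power model and the Lasso hyperparameter are arbitrary assumptions, and normalization is noted rather than applied because Pearson correlation is invariant to it.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 200
depth = rng.uniform(0.1, 2.0, n)    # depth of cut [mm]
speed = rng.uniform(1000, 8000, n)  # spindle rotation speed [rpm]
feed = rng.uniform(0.01, 0.3, n)    # material feed rate [mm/rev]
# synthetic "measured" spindle power, roughly proportional to removal rate
power = 0.8e-3 * depth * feed * speed + rng.normal(0, 0.05, n)

mrr = depth * feed * speed          # material removal rate indicator
X = np.column_stack([depth, speed, feed, mrr])
model = make_pipeline(StandardScaler(), Lasso(alpha=0.01)).fit(X, power)

# Pearson correlation between prediction and measurement (normalizing both
# series first would leave the correlation coefficient unchanged)
r_lasso = np.corrcoef(model.predict(X), power)[0, 1]
r_mrr = np.corrcoef(mrr, power)[0, 1]
print(round(r_lasso, 3), round(r_mrr, 3))
```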

Keywords: adaptive machining, genetic algorithms, smart manufacturing, parameters optimization

Procedia PDF Downloads 147
74 Altering the Solid Phase Speciation of Arsenic in Paddy Soil: An Approach to Reduce Rice Grain Arsenic Uptake

Authors: Supriya Majumder, Pabitra Banik

Abstract:

The fate of arsenic (As) in the soil-plant environment is a critical emerging issue with threatening implications for human health. The dynamics of As among soil solid components largely determine its potential availability for plant uptake. In the present study, we introduced an improved Sequential Extraction Procedure (SEP) to identify the solid-phase speciation of As in paddy soil under variable soil environmental conditions during two consecutive seasons of rice cultivation. We coupled gradients of water management practices with fertilizer amendments to assess changes in the partitioning of As in a field experiment conducted during the monsoon and post-monsoon seasons using two rice cultivars. Water management regimes were varied according to the method of rice cultivation: conventional (waterlogged) versus the System of Rice Intensification, SRI (saturated). The fertilizer amendments were: absolute control, NPK-RD, NPK-RD + calcium silicate, NPK-RD + ferrous sulfate, farmyard manure (FYM), FYM + calcium silicate, FYM + ferrous sulfate, vermicompost (VC), VC + calcium silicate, and VC + ferrous sulfate. After harvest, soil samples were sequentially extracted to estimate the partitioning of As among the different fractions: exchangeable (F1), specifically sorbed (F2), As bound to amorphous Fe oxides (F3), As bound to crystalline Fe oxides (F4), organic matter (F5), and the residual phase (F6). Results showed that the major proportions of As were found in F3, F4, and F6, whereas F1 exhibited the lowest proportion of total soil As. Among the nutrient-treatment-mediated changes in As fractions, the application of organic manure and ferrous sulfate significantly restricted the release of As from the exchangeable phase. 
Meanwhile, conventional practice produced a much higher release of As from F1 than SRI, which may substantially increase environmental risk. In contrast, SRI practice retained a significantly higher proportion of As in the F2, F3, and F4 phases, resulting in restricted mobilization of As. This was reflected in rice grain As bioavailability: grain As concentration was reduced by 33% and 55% under SRI relative to conventional treatment (p < 0.05) during the monsoon and post-monsoon seasons, respectively. In addition, a prediction assay for rice grain As bioavailability based on a linear regression model was performed. Results demonstrated that rice grain As concentration was positively correlated with As concentration in F1 and negatively correlated with F2, F3, and F4, with a satisfactory level of variation explained (p < 0.001). Finally, we conclude that F1, F2, F3, and F4 are the major soil As fractions that may critically govern the potential availability of As in soil, and we suggest that rice cultivation under the SRI treatment carries particularly low risk of As availability in soil. Such information may be useful for adopting management practices for rice grown in contaminated soil, with particular regard to environmental issues.
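The grain-As prediction assay can be illustrated with a minimal regression sketch on invented fraction data. The signs of the simulated effects follow the correlations reported above (positive for F1, negative for F2–F4), but the values are not the study’s measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 40
F1 = rng.uniform(0.1, 1.0, n)   # exchangeable As
F2 = rng.uniform(1.0, 5.0, n)   # specifically sorbed As
F3 = rng.uniform(5.0, 20.0, n)  # bound to amorphous Fe oxides
F4 = rng.uniform(5.0, 15.0, n)  # bound to crystalline Fe oxides
# simulated grain As: rises with F1, falls with the retained fractions
grain_as = (0.5 + 0.8 * F1 - 0.02 * F2 - 0.01 * F3 - 0.01 * F4
            + rng.normal(0, 0.05, n))

# multiple linear regression of grain As on the four fractions
X = np.column_stack([F1, F2, F3, F4])
model = LinearRegression().fit(X, grain_as)
print(model.coef_.round(3), round(model.score(X, grain_as), 3))
```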

Keywords: arsenic, fractionation, paddy soil, potential availability

Procedia PDF Downloads 123
73 Machine Learning Analysis of Eating Disorders Risk, Physical Activity and Psychological Factors in Adolescents: A Community Sample Study

Authors: Marc Toutain, Pascale Leconte, Antoine Gauthier

Abstract:

Introduction: Eating disorders (ED), such as anorexia, bulimia, and binge eating, are psychiatric illnesses that mostly affect young people. The main symptoms concern eating (restriction, excessive food intake) and weight-control behaviors (laxatives, vomiting). Psychological comorbidities (depression, executive function disorders, etc.) and problematic behaviors toward physical activity (PA) are commonly associated with ED. Knowledge of ED risk factors is still lacking, and more community sample studies are needed to improve prevention and early detection. To our knowledge, studies are needed to specifically investigate the link between ED risk level, PA, and psychological risk factors in a community sample of adolescents. The aim of this study is to assess the relation between ED risk level, exercise (type, frequency, and motivations for engaging in exercise), and psychological factors based on the Jacobi risk factors model. We hypothesize that a high risk of ED will be associated with the practice of high-caloric-cost PA, motivations oriented to weight and shape control, and psychological disturbances. Method: An online survey for students was sent to several middle schools and colleges in northwest France. This survey combined several questionnaires: the Eating Attitude Test-26, assessing ED risk; the Exercise Motivation Inventory-2, assessing motivations toward PA; the Hospital Anxiety and Depression Scale, assessing anxiety and depression; the Contour Drawing Rating Scale and the Body Esteem Scale, assessing body dissatisfaction; the Rosenberg Self-Esteem Scale, assessing self-esteem; the Exercise Dependence Scale-Revised, assessing PA dependence; the Multidimensional Assessment of Interoceptive Awareness, assessing interoceptive awareness; and the Frost Multidimensional Perfectionism Scale, assessing perfectionism. 
Machine learning analysis will be performed to form groups with a tree-based clustering method, extract risk profile(s) with a bootstrap comparison method, and predict ED risk with a decision-tree-based prediction method. Expected results: 1044 complete records have already been collected, and the survey will be closed at the end of May 2022. Records will be analyzed with the clustering and bootstrap methods in order to reveal risk profile(s). Furthermore, a decision tree prediction method will be applied to extract an accurate predictive model of ED risk. This analysis will confirm the typical main risk factors and will give more data on presumed strong risk factors such as exercise motivations and interoceptive deficit. Furthermore, it will highlight particular risk profiles with a strong level of evidence and greatly contribute to improving the early detection of ED and to a better understanding of ED risk factors.
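A minimal sketch of the planned two-step pipeline (clustering, then decision-tree prediction) on invented questionnaire scores. The feature names loosely mirror the instruments listed above, and the EAT-26 cutoff of 20 is the commonly used at-risk threshold; everything numeric here is simulated, not the collected survey data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n = 1044  # number of complete records reported
anxiety = rng.normal(8, 3, n)
body_dissatisfaction = rng.normal(10, 4, n)
exercise_dependence = rng.normal(15, 5, n)
X = np.column_stack([anxiety, body_dissatisfaction, exercise_dependence])

# simulated EAT-26 score; >= 20 is the usual at-risk cutoff
eat26 = (5 + 0.5 * anxiety + 0.6 * body_dissatisfaction
         + 0.3 * exercise_dependence + rng.normal(0, 3, n))
at_risk = (eat26 >= 20).astype(int)

# step 1: cluster respondents into candidate risk profiles
profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
# step 2: cross-validated decision tree predicting the at-risk label
acc = cross_val_score(DecisionTreeClassifier(max_depth=4, random_state=0),
                      X, at_risk, cv=5).mean()
print(np.bincount(profiles), round(acc, 2))
```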

Keywords: eating disorders, risk factors, physical activity, machine learning

Procedia PDF Downloads 83
72 Comparison between Bernardi’s Equation and Heat Flux Sensor Measurement as Battery Heat Generation Estimation Method

Authors: Marlon Gallo, Eduardo Miguel, Laura Oca, Eneko Gonzalez, Unai Iraola

Abstract:

The heat generation of an energy storage system is an essential topic when designing a battery pack and its cooling system. Heat generation estimation is used together with thermal models to predict battery temperature in operation and adapt the design of the battery pack and the cooling system to these thermal needs, guaranteeing safety and correct operation. In the present work, a comparison between the use of a heat flux sensor (HFS) for indirect measurement of heat losses in a cell and the widely used simplified version of Bernardi’s equation is presented. First, a Li-ion cell is thermally characterized with an HFS to measure the thermal parameters that are used in a first-order lumped thermal model. These parameters are the equivalent thermal capacity and the equivalent thermal resistance of a single Li-ion cell. Static (no current flowing through the cell) and dynamic (current flowing through the cell) tests are conducted in which the HFS is used to measure the heat exchanged between the cell and the ambient, so that the thermal capacity and thermal resistance, respectively, can be calculated. An experimental platform records current, voltage, ambient temperature, surface temperature, and HFS output voltage. Second, an equivalent circuit model is built in a Matlab-Simulink environment. This allows the comparison between the generated heat predicted by Bernardi’s equation and the HFS measurements. Data post-processing is required to extrapolate the heat generation from the HFS measurements, as the sensor records the heat released to the ambient and not the heat generated within the cell. Finally, the cell temperature evolution is estimated with the lumped thermal model (using both the HFS and Bernardi’s equation total heat generation) and compared against experimental temperature data (measured with a T-type thermocouple). At the end of this work, a critical review of the results obtained and the possible reasons for mismatch are reported. 
The results show that indirectly measuring the heat generation with the HFS gives a more precise estimation than Bernardi’s simplified equation. On the one hand, when using Bernardi’s simplified equation, the estimated heat generation differs from cell temperature measurements during charges at high current rates. Additionally, for low-capacity cells, where a small change in capacity has a great influence on the terminal voltage, the estimated heat generation shows a high dependency on the State of Charge (SoC) estimation, and therefore on the open-circuit voltage calculation (as it is SoC-dependent). On the other hand, when measuring the heat generation indirectly with the HFS, the resulting error is a maximum of 0.28 °C in the temperature prediction, in contrast with 1.38 °C for Bernardi’s simplified equation. This illustrates the limitations of Bernardi’s simplified equation for applications where precise heat monitoring is required. For higher current rates, Bernardi’s equation estimates more heat generation and, consequently, a higher predicted temperature. Bernardi’s equation predicts no losses after the charging or discharging current is cut. However, the HFS measurement shows that after cutting the current, the cell continues generating heat for some time, increasing the error of Bernardi’s equation.
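For reference, a hedged sketch of the Bernardi heat-generation term discussed above: the simplified form keeps only the overpotential (irreversible) part, while the fuller form adds the entropic term. Sign conventions vary across sources, and the numbers below are illustrative, not measurements from the cell tested in the work.

```python
def bernardi_heat(current_a, v_terminal, v_ocv, temp_k, docv_dt=0.0):
    """Heat generation [W]: overpotential (irreversible) term plus the
    entropic (reversible) term I*T*dOCV/dT, which the simplified form drops.
    Convention here: current positive on discharge."""
    irreversible = current_a * (v_ocv - v_terminal)
    reversible = current_a * temp_k * docv_dt
    return irreversible + reversible

# illustrative 5 A discharge with 60 mV overpotential at 25 °C
q_simple = bernardi_heat(5.0, 3.64, 3.70, 298.15)
q_full = bernardi_heat(5.0, 3.64, 3.70, 298.15, docv_dt=-0.1e-3)
print(round(q_simple, 3), round(q_full, 3))  # → 0.3 0.151
```

The gap between the two values shows why dropping the entropic term can misestimate heat at the current rates discussed above.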

Keywords: lithium-ion battery, heat flux sensor, heat generation, thermal characterization

Procedia PDF Downloads 389
71 The Association between Attachment Styles, Satisfaction of Life, Alexithymia, and Psychological Resilience: The Mediational Role of Self-Esteem

Authors: Zahide Tepeli Temiz, Itir Tari Comert

Abstract:

Attachment patterns, based on early emotional interactions between infant and primary caregiver, continue to be influential in adult life in terms of individuals’ mental health and behavior. Several studies reveal that, beyond attachment style itself, infant-caregiver relationships influence affect regulation, coping with stressful and negative situations, general life satisfaction, and self-image in adulthood. The present study aims to examine the relationships between university students’ attachment styles and their self-esteem, alexithymic features, life satisfaction, and level of resilience. In line with this aim, the hypothesis that attachment styles (anxious and avoidant) predict life satisfaction, self-esteem, alexithymia, and psychological resilience was tested. Additionally, structural equation modeling (SEM) was conducted to investigate the mediational role of self-esteem in the relationship between attachment styles and alexithymia, life satisfaction, and resilience. This model was examined with path analysis. The sample consists of 425 university students from several regions of Turkey. Participants who signed the informed consent completed the Demographic Information Form, Experiences in Close Relationships-Revised, Rosenberg Self-Esteem Scale, the Satisfaction with Life Scale, Toronto Alexithymia Scale, and Resilience Scale for Adults. According to the results, the anxious and avoidant dimensions of insecure attachment predicted self-esteem scores and alexithymia in a positive direction. On the other hand, these dimensions of attachment predicted life satisfaction in a negative direction. The results of linear regression analysis indicated that anxious and avoidant attachment styles did not predict resilience. This result does not support the theory and research indicating a relationship between attachment style and psychological resilience. 
The results of the path analysis revealed the mediational role of self-esteem in the relation between anxious and avoidant attachment styles and life satisfaction. In addition, the SEM analysis indicated an indirect effect of attachment styles on alexithymia and resilience, besides their direct effect. These findings support the research hypothesis concerning the mediating role of self-esteem. Attachment theorists suggest that early attachment experiences, including supportive and responsive family interactions, affect resilience to harmful situations in adult life, the ability to identify, describe, and regulate emotions, and general satisfaction with life. Several studies examining the relationship between attachment styles and life satisfaction, alexithymia, and psychological resilience draw attention to the mediational role of self-esteem. The results of this study support the theory that attachment patterns, mediated by self-image, influence a person’s emotional, cognitive, and behavioral regulation throughout adulthood. Therefore, it is thought that any intervention aimed at recovery in the attachment relationship will increase self-esteem, life satisfaction, and resilience on the one hand, and decrease alexithymic features on the other.
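The mediation logic tested by the path analysis can be sketched numerically: the indirect effect is the product of the attachment-to-self-esteem path (a) and the self-esteem-to-outcome path (b), estimated with the predictor controlled. The data below are synthetic, with path coefficients chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 425  # sample size reported in the abstract
anxious_attachment = rng.normal(0, 1, n)
self_esteem = -0.5 * anxious_attachment + rng.normal(0, 1, n)      # path a
life_satisfaction = (0.6 * self_esteem - 0.2 * anxious_attachment  # paths b, c'
                     + rng.normal(0, 1, n))

def slope(x, y):
    """OLS slope for a single predictor."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

a = slope(anxious_attachment, self_esteem)
# path b: effect of the mediator, controlling for the predictor
design = np.column_stack([np.ones(n), self_esteem, anxious_attachment])
b = np.linalg.lstsq(design, life_satisfaction, rcond=None)[0][1]
indirect = a * b  # should land near -0.5 * 0.6 = -0.3
print(round(indirect, 2))
```

In full SEM the indirect effect would be estimated jointly with bootstrap confidence intervals; this product-of-paths sketch only conveys the idea.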

Keywords: alexithymia, anxious attachment, avoidant attachment, life satisfaction, path analysis, resilience, self-esteem, structural equation

Procedia PDF Downloads 195
70 Radiation Stability of Structural Steel in the Presence of Hydrogen

Authors: E. A. Krasikov

Abstract:

As the service life of an operating nuclear power plant (NPP) increases, the potential for misunderstanding the degradation of aging components must receive more attention. Integrity assurance analysis contributes to the effective maintenance of adequate plant safety margins. In essence, the reactor pressure vessel (RPV) is the key structural component determining the NPP lifetime. Environmentally induced cracking in the stainless steel corrosion-preventing cladding of RPVs has been recognized as one of the technical problems in the maintenance and development of light-water reactors. Extensive cracking leading to failure of the cladding was found after 13000 net hours of operation in the JPDR (Japan Power Demonstration Reactor). Some of the cracks reached the base metal and further penetrated into the RPV in the form of localized corrosion. Failures of reactor internal components in both boiling water reactors and pressurized water reactors have increased after the accumulation of relatively high neutron fluences (5×10²⁰ cm⁻², E > 0.5 MeV). Therefore, in the case of cladding failure, the problem arises of hydrogen (as a corrosion product) embrittlement of irradiated RPV steel because of exposure to the coolant. Now that notable progress in plasma physics has been achieved, practical energy utilization from fusion reactors (FR) is determined by the state of materials science problems. The latter include not only the routine problems of nuclear engineering but also a number of entirely new problems connected with extreme conditions of materials operation – irradiation environment, hydrogenation, thermocycling, etc. Limited data suggest that the combined effect of these factors is more severe than any one of them alone. To clarify the possible influence of in-service synergistic phenomena on the properties of FR structural materials, we have studied the hydrogen-irradiated steel interaction, including alternating hydrogenation and heat treatment (annealing). 
Available information indicates that the life of the first wall could be extended by means of periodic in-place annealing. The effects of neutron fluence and irradiation temperature on steel/hydrogen interactions (adsorption, desorption, diffusion, mechanical properties at different loading velocities, post-irradiation annealing) were studied. Experiments clearly reveal that the higher the neutron fluence and the lower the irradiation temperature, the more hydrogen-radiation defects occur, with corresponding effects on the steel’s mechanical properties. Hydrogen accumulation analyses and thermal desorption investigations were performed to confirm hydrogen trapping at irradiation defects. Extremely high susceptibility to hydrogen embrittlement was observed in specimens that had been irradiated at relatively low temperature. However, the susceptibility decreases with increasing irradiation temperature. To develop methods for evaluating and predicting the RPV’s residual lifetime, more work should be done on the irradiated metal-hydrogen interaction in order to monitor the status of irradiated materials more reliably.

Keywords: hydrogen, radiation, stability, structural steel

Procedia PDF Downloads 270
69 Estimation of State of Charge, State of Health and Power Status for the Li-Ion Battery On-Board Vehicle

Authors: S. Sabatino, V. Calderaro, V. Galdi, G. Graber, L. Ippolito

Abstract:

Climate change is a rapidly growing global threat caused mainly by increased emissions of carbon dioxide (CO₂) into the atmosphere. These emissions come from multiple sources, including industry, power generation, and the transport sector. The need to tackle climate change and reduce CO₂ emissions is indisputable. A crucial solution for achieving decarbonization in the transport sector is the adoption of electric vehicles (EVs). These vehicles use lithium-ion (Li-Ion) batteries as an energy source, making them highly efficient, with low direct emissions. However, Li-Ion batteries are not without problems, including the risk of overheating and performance degradation. To ensure the battery’s safety and longevity, it is essential to use a battery management system (BMS). The BMS constantly monitors battery status and adjusts temperature and cell balance, ensuring optimal performance and preventing dangerous situations. Based on this monitoring, it can also manage the battery optimally to extend its life. Among the parameters monitored by the BMS, the main ones are State of Charge (SoC), State of Health (SoH), and State of Power (SoP). The evaluation of these parameters can be carried out in two ways: offline, using benchtop batteries tested in the laboratory, or online, using batteries installed in moving vehicles. Online estimation is the preferred approach, as it relies on capturing real-time data from batteries operating in real-life situations, such as everyday EV use. Actual battery usage conditions are highly variable. Moving vehicles are exposed to a wide range of factors, including temperature variations, different driving styles, and complex charge/discharge cycles. This variability is difficult to replicate in a controlled laboratory environment and can greatly affect performance and battery life. Online estimation captures this variety of conditions, providing a more accurate assessment of battery behavior in real-world situations. 
In this article, a hybrid approach based on a neural network and a statistical method is proposed for real-time estimation of the SoC, SoH, and SoP parameters of interest. These parameters are estimated from the analysis of a one-day driving profile of an electric vehicle, assumed to be divided into the following four phases: (i) partial discharge (SoC 100% - SoC 50%), (ii) partial charge (SoC 50% - SoC 80%), (iii) deep discharge (SoC 80% - SoC 30%), (iv) full charge (SoC 30% - SoC 100%). The neural network predicts the values of ohmic resistance and incremental capacity, while the statistical method is used to estimate the parameters of interest. This reduces the complexity of the model and improves its prediction accuracy. The effectiveness of the proposed model is evaluated by analyzing its performance in terms of root mean square error (RMSE) and mean absolute percentage error (MAPE) and comparing it with the reference method found in the literature.
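The two reported error metrics can be stated concretely as follows; the SoC traces below are invented placeholders, not the article’s results.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error."""
    diff = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return float(np.sqrt(np.mean(diff ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error (in %)."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

soc_true = np.array([100, 80, 60, 50, 40, 30])  # reference SoC trace [%]
soc_est = np.array([99, 81, 59, 51, 39, 31])    # estimator output [%]
print(round(rmse(soc_true, soc_est), 2),
      round(mape(soc_true, soc_est), 2))  # → 1.0 1.96
```

Note that MAPE weights errors at low SoC more heavily than RMSE does, which matters when the profile spends long stretches near 30%.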

Keywords: electric vehicle, Li-Ion battery, BMS, state-of-charge, state-of-health, state-of-power, artificial neural networks

Procedia PDF Downloads 67
68 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory

Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker

Abstract:

In view of the ageing of vital infrastructure facilities, a reliable condition assessment of concrete structures is becoming of increasing interest to asset owners for planning timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting the serviceability and, eventually, the structural performance. The determination of quantitative chloride ingress is required not only to provide valuable information on the present condition of a structure; the data obtained can also be used to predict its future development and associated risks. At present, wet chemical analysis of ground concrete samples in a laboratory is the most common test procedure for determining the chloride content. As the chloride content is expressed relative to the mass of binder, the analysis should involve determining both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly. The chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative, providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentration, such as in a crack. The results are related directly to the mass of the binder, and the method can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples of the application of the method in the laboratory for the investigation of diffusion and migration of chlorides, sulfates, and alkalis are presented. An example of the visualization of Li transport in concrete is also shown. These examples show the potential of the method for a fast, reliable, and automated two-dimensional investigation of transport processes. 
Due to the better spatial resolution, more accurate input parameters for model calculations are determined. By the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in only one measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer. A portable scanner allows a two-dimensional quantitative element mapping. Results show the quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. Results obtained were compared and verified with laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedures - the wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for the application of the method for the determination of chloride concentration in concrete.

Keywords: chemical analysis, concrete, LIBS, spectroscopy

Procedia PDF Downloads 105
67 Middle School as a Developmental Context for Emergent Citizenship

Authors: Casta Guillaume, Robert Jagers, Deborah Rivas-Drake

Abstract:

Civically engaged youth are critical to maintaining and/or improving the functioning of local, national and global communities and their institutions. The present study investigated how school climate and academic beliefs (academic self-efficacy and school belonging) may inform emergent civic behaviors (emergent citizenship) among self-identified middle school youth of color (African American, Multiracial or Mixed, Latino, Asian American or Pacific Islander, Native American, and other). The study aims were: 1) to understand whether and how school climate is associated with civic engagement behaviors, directly and indirectly, by fostering a positive sense of connection to the school and/or engendering feelings of self-efficacy in the academic domain; and, accordingly, 2) to examine the association of youths’ sense of school connection and academic self-efficacy with their personally responsible and participatory civic behaviors in school and community contexts, both concurrently and longitudinally. Data from two subsamples of a larger study of social/emotional development among middle school students were used for longitudinal and cross-sectional analysis. The cross-sectional sample included 324 6th-8th grade students, of which 43% identified as African American, 20% as Multiracial or Mixed, 18% as Latino, 12% as Asian American or Pacific Islander, 6% as Other, and 1% as Native American. The age of the sample ranged from 11 to 15 (M = 12.33, SD = .97). For the longitudinal test of our mediation model, we drew on data from the 6th and 7th grade cohorts only (n = 232); the ethnic and racial diversity of this longitudinal subsample was virtually identical to that of the cross-sectional sample. For both the cross-sectional and longitudinal analyses, full information maximum likelihood was used to deal with missing data. 
Fit indices were inspected to determine if they met the recommended thresholds of RMSEA below .05 and CFI and TLI values of at least .90. To determine if particular mediation pathways were significant, the bias-corrected bootstrap confidence intervals for each indirect pathway were inspected. Fit indices for the latent variable mediation model using the cross-sectional data suggest that the hypothesized model fit the observed data well (CFI = .93; TLI =. 92; RMSEA = .05, 90% CI = [.04, .06]). In the model, students’ perceptions of school climate were significantly and positively associated with greater feelings of school connectedness, which were in turn significantly and positively associated with civic engagement. In addition, school climate was significantly and positively associated with greater academic self-efficacy, but academic self-efficacy was not significantly associated with civic engagement. Tests of mediation indicated there was one significant indirect pathway between school climate and civic engagement behavior. There was an indirect association between school climate and civic engagement via its association with sense of school connectedness, indirect association estimate = .17 [95% CI: .08, .32]. The aforementioned indirect association via school connectedness accounted for 50% (.17/.34) of the total effect. Partial support was found for the prediction that students’ perceptions of a positive school climate are linked to civic engagement in part through their role in students’ sense of connection to school.
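The indirect-effect test described above (the product of the climate-to-connectedness and connectedness-to-engagement paths, with a bootstrap confidence interval) can be sketched as follows. This is a minimal illustration on synthetic data with assumed path coefficients and variable names, and it uses a percentile bootstrap rather than the bias-corrected bootstrap the study reports:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 324  # matches the cross-sectional sample size

# Synthetic data echoing the reported structure:
# climate -> connectedness (a-path) -> engagement (b-path), plus a direct path.
climate = rng.normal(size=n)
connect = 0.5 * climate + rng.normal(size=n)
engage = 0.4 * connect + 0.1 * climate + rng.normal(size=n)

def indirect_effect(x, m, y):
    """Product-of-coefficients indirect effect a*b from two OLS regressions."""
    a = np.polyfit(x, m, 1)[0]                        # a-path: m ~ x
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # b-path: y ~ x + m
    return a * b

# Percentile bootstrap of the indirect effect.
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(climate[idx], connect[idx], engage[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

An interval excluding zero, as in the reported estimate of .17 [.08, .32], is the evidence for mediation; the latent-variable estimation itself would be done in dedicated SEM software.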

Keywords: civic engagement, early adolescence, school climate, school belonging, developmental niche

Procedia PDF Downloads 370
66 Tectono-Stratigraphic Architecture, Depositional Systems and Salt Tectonics to Strike-Slip Faulting in Kribi-Campo-Cameroon Atlantic Margin with an Unsupervised Machine Learning Approach (West African Margin)

Authors: Joseph Bertrand Iboum Kissaaka, Charles Fonyuy Ngum Tchioben, Paul Gustave Fowe Kwetche, Jeannette Ngo Elogan Ntem, Joseph Binyet Njebakal, Ribert Yvan Makosso-Tchapi, François Mvondo Owono, Marie Joseph Ntamak-Nida

Abstract:

Located in the Gulf of Guinea, the Kribi-Campo sub-basin belongs to the Aptian salt basins along the West African Margin. In this paper, we investigated the tectono-stratigraphic architecture of the basin, focusing on the role of salt tectonics and strike-slip faults along the Kribi Fracture Zone, with implications for reservoir prediction. Using 2D seismic data and well data interpreted through sequence stratigraphy, integrated with seismic attribute analysis using Python programming and unsupervised machine learning, at least six second-order sequences, indicating three main stages of tectono-stratigraphic evolution, were determined: pre-salt syn-rift, post-salt rift-climax and post-rift stages. The pre-salt syn-rift stage, with the KTS1 tectonosequence (Barremian-Aptian), reveals transform rifting along NE-SW transfer faults associated with N-S to NNE-SSW syn-rift longitudinal faults bounding a NW-SE half-graben filled with alluvial to lacustrine-fan delta deposits. The post-salt rift-climax stage (Lower to Upper Cretaceous) includes two second-order tectonosequences (KTS2 and KTS3) associated with the salt tectonics and Campo High uplift. During the rift-climax stage, the growth of salt diapirs developed syncline withdrawal basins filled by early forced-regression, mid transgressive and late normal-regressive systems tracts. The early rift climax underlines some fine-grained hangingwall fans or delta deposits and coarse-grained fans from the footwall of fault scarps. The post-rift stage (Paleogene to Neogene) contains at least three main tectonosequences: KTS4, KTS5 and KTS6-7. The first one developed some turbiditic lobe complexes considered as mass transport complexes and feeder channel-lobe complexes cutting the unstable shelf edge of the Campo High. The last two developed submarine channel complexes associated with lobes towards the southern part and braided delta to tidal channels towards the northern part of the Kribi-Campo sub-basin. 
The reservoir distribution in the Kribi-Campo sub-basin reveals some channel, fan-lobe and stacked-channel reservoirs reaching up to the polygonal fault systems.
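Unsupervised seismic-facies work of this kind commonly clusters per-sample attribute vectors into groups that are then interpreted geologically. The sketch below is an assumption-laden illustration: synthetic attribute vectors standing in for real seismic attributes, and a plain k-means written from scratch rather than any specific library the authors used.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for per-sample seismic attribute vectors
# (e.g., amplitude, instantaneous frequency, coherence), two facies.
facies_a = rng.normal(loc=[0.0, 0.0, 0.0], scale=0.3, size=(100, 3))
facies_b = rng.normal(loc=[2.0, 2.0, 2.0], scale=0.3, size=(100, 3))
X = np.vstack([facies_a, facies_b])

def kmeans(X, k, n_iter=25, seed=0):
    """Plain k-means: assign samples to the nearest centroid, then update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(X, k=2)
```

Each cluster label would then be mapped back onto the seismic section, where spatially coherent clusters suggest candidate facies or reservoir bodies.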

Keywords: tectono-stratigraphic architecture, Kribi-Campo sub-basin, machine learning, pre-salt sequences, post-salt sequences

Procedia PDF Downloads 56
65 The Role of Supply Chain Agility in Improving Manufacturing Resilience

Authors: Maryam Ziaee

Abstract:

This research proposes a new approach and provides an opportunity for manufacturing companies to produce large amounts of products that meet their prospective customers’ tastes, needs, and expectations and simultaneously enable manufacturers to increase their profit. Mass customization is the production of products or services to meet each individual customer’s desires to the greatest possible extent, in high quantities and at reasonable prices. This process takes place at different levels, such as the customization of goods’ design, assembly, sale, and delivery status, and falls into several categories. The main focus of this study is on one class of mass customization, called optional customization, in which companies try to provide their customers with as many options as possible to customize their products. These options could range from the design phase to the manufacturing phase, or even methods of delivery. Mass customization values customers’ tastes, but that is only one side of clients’ satisfaction; the other side is the speed of companies’ response and delivery. This brings in the concept of agility, which is the ability of a company to respond rapidly to changes in volatile markets in terms of volume and variety. Indeed, mass customization is not effectively feasible without integrating the concept of agility. To gain customers’ satisfaction, companies need to be quick in responding to their customers’ demands, thus highlighting the significance of agility. This research offers a different method that successfully integrates mass customization and fast production in manufacturing industries. This research is built upon the hypothesis that the key to being agile in mass customization is to forecast demand, cooperate with suppliers, and control inventory. Therefore, the significance of the supply chain (SC) is more pertinent when it comes to this stage. 
Since SC behavior is dynamic and changes constantly, companies have to apply a predictive technique to identify the changes associated with SC behavior so as to respond properly to any unwelcome events. System dynamics, the approach utilized in this research, is a simulation technique that provides a mathematical model linking different variables in order to understand, control, and forecast SC behavior. The final stage is delayed differentiation, the production strategy considered in this research. In this approach, the main platform of products is produced and stocked, and when the company receives an order from a customer, a specific customized feature is assigned to this platform and the customized product is created. The main research question is to what extent applying system dynamics for the prediction of SC behavior improves the agility of mass customization. This research adopts a qualitative approach to bring about richer, deeper, and more revealing results. The data is collected through interviews and is analyzed through NVivo software. The proposed model offers numerous benefits, such as a reduction in the number of product inventories and their storage costs, improvement in the resilience of companies’ responses to their clients’ needs and tastes, an increase in profits, and the optimization of productivity with a minimum level of lost sales.
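At its core, a system-dynamics model integrates stocks (such as inventory) over their inflows and outflows. The following sketch is a deliberately minimal, hypothetical stock-and-flow model, a single inventory stock with a target-seeking order rate; it is not the model built in this research, only an illustration of the simulation mechanics:

```python
def simulate(target=100.0, adjust_time=4.0, demand=10.0, steps=40, dt=1.0):
    """One stock (inventory) with an order inflow and a demand outflow.
    Orders replenish demand plus a fraction of the inventory gap."""
    inventory = 50.0
    history = []
    for _ in range(steps):
        orders = demand + (target - inventory) / adjust_time  # inflow rate
        inventory += dt * (orders - demand)                   # integrate net flow
        history.append(inventory)
    return history

traj = simulate()
```

The trajectory closes the gap to the target geometrically; richer models add delays, supplier capacity, and demand forecasts as further stocks and flows, which is where oscillatory SC behavior comes from.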

Keywords: agility, manufacturing, resilience, supply chain

Procedia PDF Downloads 91
64 Teleconnection between El Nino-Southern Oscillation and Seasonal Flow of the Surma River and Possibilities of Long Range Flood Forecasting

Authors: Monika Saha, A. T. M. Hasan Zobeyer, Nasreen Jahan

Abstract:

El Nino-Southern Oscillation (ENSO) is the interaction between the atmosphere and ocean in the tropical Pacific which causes inconsistent warm/cold weather in the tropical central and eastern Pacific Ocean. Due to the impact of climate change, ENSO events are becoming stronger in recent times, and therefore it is very important to study the influence of ENSO in climate studies. Bangladesh, being in the low-lying deltaic floodplain, experiences the worst consequences of flooding every year. To reduce the catastrophe of severe flooding events, non-structural measures such as flood forecasting can be helpful in taking adequate precautions and steps. Forecasting seasonal flood with a longer lead time of several months is a key component of flood damage control and water management. The objective of this research is to identify the possible strength of the teleconnection between ENSO and the river flow of the Surma and to examine the potential for long-lead flood forecasting in the wet season. The Surma is one of the major rivers of Bangladesh and is a part of the Surma-Meghna river system. In this research, sea surface temperature (SST) has been considered as the ENSO index, and the lead time is at least a few months, which is greater than the basin response time. The teleconnection has been assessed by correlation analysis between the July-August-September (JAS) flow of the Surma and the SST of the Nino 4 region for the corresponding months. The cumulative frequency distribution of the standardized JAS flow of the Surma has also been determined as part of assessing the possible teleconnection. Discharge data of the Surma River from 1975 to 2015 is used in this analysis, and a remarkable increase in the correlation coefficient between flow and ENSO has been observed from 1985 onward. From the cumulative frequency distribution of the standardized JAS flow, it has been noted that in any year the JAS flow has approximately 50% probability of exceeding the long-term average JAS flow. 
During an El Nino year (warm episode of ENSO), this probability of exceedance drops to 23%, while in a La Nina year (cold episode of ENSO) it increases to 78%. Discriminant analysis, known as 'categoric prediction', has been performed to identify the possibilities of long-lead flood forecasting. It has helped to categorize the flow data (high, average and low) based on the classification of predicted SST (warm, normal and cold). From the discriminant analysis, it has been found that for the Surma River, the probability of a high flood in the cold period is 75% and the probability of a low flood in the warm period is 33%. A synoptic parameter, the forecasting index (FI), has also been calculated to judge forecast skill and to compare different forecasts. This study will help the concerned authorities and stakeholders to take long-term water resources decisions and formulate policies on river basin management, which will reduce possible damage to life, agriculture, and property.
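The two core computations described, the flow-SST correlation and the conditional exceedance probabilities by ENSO phase, can be sketched on synthetic data. All numbers below are assumptions (a longer synthetic record is used so the sketch is stable; the study itself uses the 41-year record 1975-2015):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200  # synthetic record length, not the study's 41 years

# Hypothetical standardized Nino-4 SST anomaly and standardized JAS flow with
# a negative teleconnection (warm episodes -> lower flow).
sst = rng.normal(size=n)
flow = -0.6 * sst + 0.8 * rng.normal(size=n)

# Strength of the teleconnection: Pearson correlation.
r = np.corrcoef(sst, flow)[0, 1]

# Conditional exceedance: probability that JAS flow exceeds its long-term
# mean, given the ENSO phase (the 0.5 thresholds are illustrative).
exceed = flow > flow.mean()
p_warm = exceed[sst > 0.5].mean()   # El Nino-like years
p_cold = exceed[sst < -0.5].mean()  # La Nina-like years
```

The same conditioning is what the discriminant analysis formalizes: predicted SST category in, flow category (high/average/low) out.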

Keywords: El Nino-Southern Oscillation, sea surface temperature, Surma River, teleconnection, cumulative frequency distribution, discriminant analysis, forecasting index

Procedia PDF Downloads 154
63 Thermodynamics of Aqueous Solutions of Organic Molecule and Electrolyte: Use Cloud Point to Obtain Better Estimates of Thermodynamic Parameters

Authors: Jyoti Sahu, Vinay A. Juvekar

Abstract:

Electrolytes are often used to bring about salting-in and salting-out of organic molecules and polymers (e.g. polyethylene glycols/proteins) from aqueous solutions. For quantification of these phenomena, a thermodynamic model which can accurately predict the activity coefficient of the electrolyte as a function of temperature is needed. The thermodynamic models available in the literature contain a large number of empirical parameters. These parameters are estimated using the lower/upper critical solution temperature of the solution in the electrolyte/organic molecule at different temperatures. Since the number of parameters is large, inaccuracy can creep in during their estimation, which can affect the reliability of prediction beyond the range in which these parameters are estimated. The cloud point of a solution is related to its free energy through temperature and composition derivatives. Hence, cloud point measurement can be used for accurate estimation of the temperature and composition dependence of parameters in the model for free energy. Thus, if we use a two-pronged procedure in which we first use the cloud point of the solution to estimate some of the parameters of the thermodynamic model and determine the rest using osmotic coefficient data, we gain on two counts. First, since the parameters estimated in each of the two steps are fewer, we achieve higher accuracy of estimation. The second and more important gain is that the resulting model parameters are more sensitive to temperature. This is crucial when we wish to use the model outside the temperature window within which the parameter estimation is sought. The focus of the present work is to prove this proposition. We have used electrolyte (NaCl/Na2CO3)-water-organic molecule (iso-propanol/ethanol) as the model system. The model of Robinson-Stokes-Glueckauf is modified by incorporating temperature-dependent Flory-Huggins interaction parameters. 
The Helmholtz free energy expression contains, in addition to electrostatic and translational entropic contributions, three Flory-Huggins pairwise interaction contributions, viz. the w-p, w-s and p-s pairs (w-water, p-polymer, s-salt). These parameters depend on both temperature and concentrations. The concentration dependence is expressed in the form of a quadratic expression involving the volume fractions of the interacting species, and the temperature dependence is expressed in a prescribed functional form. To obtain the temperature-dependent interaction parameters for the organic molecule-water and electrolyte-water systems, the critical solution temperature of electrolyte-water-organic molecule mixtures is measured using a cloud point measuring apparatus. The temperature- and composition-dependent interaction parameters for the electrolyte-water-organic molecule system are estimated through measurement of the cloud point of the solution. The model is used to estimate the critical solution temperature (CST) of electrolyte-water-organic molecule solutions. We have experimentally determined the critical solution temperature of different compositions of electrolyte-water-organic molecule solution and compared the results with the estimates based on our model. The two sets of values show good agreement. On the other hand, when only osmotic coefficients are used for estimation of the free energy model, the CST predicted using the resulting model shows poor agreement with the experiments. Thus, the importance of the CST data in the estimation of parameters of the thermodynamic model is confirmed through this work.
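The first step of the two-pronged procedure, pinning down the temperature dependence of an interaction parameter from cloud-point data, can be illustrated in miniature. Everything below is hypothetical: the numbers, the chi(T) = A + B/T form, and the assumption that chi takes a known critical value at each cloud point; it is not the Robinson-Stokes-Glueckauf-based model of the study.

```python
import numpy as np

# Hypothetical cloud-point data: temperatures at which solutions of different
# compositions turn turbid, with the (assumed known) critical value of the
# Flory-Huggins parameter chi at each cloud point.
T_cloud = np.array([290.0, 300.0, 310.0, 320.0])  # K
chi_c = np.array([0.620, 0.602, 0.585, 0.570])    # critical chi values

# Illustrative temperature dependence chi(T) = A + B / T, estimated by
# least squares on the cloud-point constraints.
slope, intercept = np.polyfit(1.0 / T_cloud, chi_c, 1)
A, B = intercept, slope

def chi(T):
    """Interaction parameter at temperature T (K) under the assumed form."""
    return A + B / T
```

With A and B fixed from cloud points, the remaining (composition-dependence) parameters would then be fitted to osmotic coefficient data, which is the accuracy gain the abstract argues for.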

Keywords: concentrated electrolytes, Debye-Hückel theory, interaction parameters, Robinson-Stokes-Glueckauf model, Flory-Huggins model, critical solution temperature

Procedia PDF Downloads 391
62 Isolation and Characterization of a Narrow-Host Range Aeromonas hydrophila Lytic Bacteriophage

Authors: Sumeet Rai, Anuj Tyagi, B. T. Naveen Kumar, Shubhkaramjeet Kaur, Niraj K. Singh

Abstract:

Since the discovery of antibiotics, their indiscriminate use in human, veterinary and aquaculture systems has resulted in the global emergence and spread of multidrug-resistant bacterial pathogens. Thus, the need for alternative approaches to control bacterial infections has become of utmost importance. The high selectivity/specificity of bacteriophages (phages) permits the targeting of specific bacteria without affecting the desirable flora. In this study, a lytic phage (Ahp1) specific to Aeromonas hydrophila subsp. hydrophila was isolated from a finfish aquaculture pond. The host range of Ahp1 was tested against 10 isolates of A. hydrophila, 7 isolates of A. veronii, 25 Vibrio cholerae isolates, 4 V. parahaemolyticus isolates and one isolate each of V. harveyi and Salmonella enterica collected previously. Except for the host A. hydrophila subsp. hydrophila strain, no lytic activity against any other bacterial isolate was detected. During the adsorption rate and one-step growth curve analysis, 69.7% of phage particles adsorbed to the host cell, followed by the release of 93 ± 6 phage progenies per host cell after a latent period of ~30 min. Phage nucleic acid was extracted by column purification methods. After determining the nature of the phage nucleic acid as dsDNA, the phage genome was subjected to next-generation sequencing by generating paired-end (PE, 2 x 300 bp) reads on the Illumina MiSeq system. De novo assembly of sequencing reads generated a circular phage genome of 42,439 bp with a G+C content of 58.95%. During open reading frame (ORF) prediction and annotation, 22 ORFs (out of 49 total predicted ORFs) were functionally annotated and the rest encoded hypothetical proteins. Proteins involved in major functions such as phage structure formation and packaging, DNA replication and repair, DNA transcription and host cell lysis were encoded by the phage genome. The complete genome sequence of Ahp1, along with gene annotation, was submitted to NCBI GenBank (accession number MF683623). 
The stability of Ahp1 preparations at storage temperatures of 4 °C, 30 °C, and 40 °C was studied over a period of 9 months. At 40 °C storage, phage counts declined by 4 log units within one month, with a total loss of viability after 2 months. At 30 °C, the phage preparation was stable for < 5 months. On the other hand, phage counts decreased by only 2 log units over the period of 9 months during storage at 4 °C. As some phages have also been reported to be glycerol sensitive, the stability of Ahp1 preparations in glycerol stocks (0%, 15%, 30% and 45%) was also studied during storage at -80 °C over a period of 9 months. The phage counts decreased by only 2 log units during storage, and no significant difference in phage counts was observed at different concentrations of glycerol. The Ahp1 phage discovered in our study had a very narrow host range, and it may be useful for phage typing applications. Moreover, the endolysin and holin genes in the Ahp1 genome could be ideal candidates for recombinant cloning and expression of antimicrobial proteins.

Keywords: Aeromonas hydrophila, endolysin, phage, narrow host range

Procedia PDF Downloads 162
61 FracXpert: Ensemble Machine Learning Approach for Localization and Classification of Bone Fractures in Cricket Athletes

Authors: Madushani Rodrigo, Banuka Athuraliya

Abstract:

In today's world of medical diagnosis and prediction, machine learning stands out as a strong tool, transforming old ways of caring for health. This study analyzes the use of machine learning in the specialized domain of sports medicine, with a focus on the timely and accurate detection of bone fractures in cricket athletes. Failure to identify bone fractures in real time can result in malunion or non-union conditions. To ensure proper treatment and enhance the bone healing process, accurately identifying fracture locations and types is necessary. The interpretation of X-ray images relies on the expertise and experience of medical professionals, and radiographic images are sometimes of low quality, leading to potential misidentification. Therefore, a proper approach is needed to accurately localize and classify fractures in real time. The research revealed that the optimal approach must employ appropriate radiographic image processing techniques and object detection algorithms that effectively localize and accurately classify all types of fractures with high precision and in a timely manner. In order to overcome the challenges of misidentifying fractures, a distinct model for fracture localization and classification has been implemented. The research also incorporates radiographic image enhancement and preprocessing techniques to overcome the limitations posed by low-quality images. A classification ensemble model has been implemented using ResNet18 and VGG16. In parallel, a fracture segmentation model has been implemented using the enhanced U-Net architecture. 
Combining the results of these two models, the FracXpert system can accurately localize exact fracture locations along with fracture types from the 12 available fracture patterns, which include avulsion, comminuted, compressed, dislocation, greenstick, hairline, impacted, intraarticular, longitudinal, oblique, pathological, and spiral. The system generates a confidence score indicating the degree of confidence in the predicted result. The fracture segmentation model, based on the enhanced U-Net architecture, achieved a high accuracy level of 99.94%, demonstrating its precision in identifying fracture locations. Simultaneously, the classification ensemble model, built on the ResNet18 and VGG16 architectures, achieved an accuracy of 81.0%, showcasing its ability to categorize various fracture patterns, which is instrumental in the fracture treatment process. In conclusion, FracXpert is a promising ML application in sports medicine, demonstrating its potential to revolutionize fracture detection processes. By leveraging the power of ML algorithms, this study contributes to the advancement of diagnostic capabilities in cricket athlete healthcare, ensuring timely and accurate identification of bone fractures for the best treatment outcomes.
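An ensemble over two backbones with a reported confidence score is commonly realized by soft voting: average the class-probability vectors and take the maximum. The sketch below illustrates that mechanism with hypothetical logits for one radiograph (the logit values are made up; the study's actual ensembling scheme is not specified beyond the two architectures):

```python
import numpy as np

FRACTURE_TYPES = ["avulsion", "comminuted", "compressed", "dislocation",
                  "greenstick", "hairline", "impacted", "intraarticular",
                  "longitudinal", "oblique", "pathological", "spiral"]

def softmax(z):
    """Convert raw logits into a probability vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical logits from the two backbones for a single radiograph.
logits_resnet18 = np.zeros(12)
logits_resnet18[5] = 3.0          # strongly favors "hairline"
logits_vgg16 = np.zeros(12)
logits_vgg16[5] = 2.0             # also favors "hairline"
logits_vgg16[9] = 1.0             # with some mass on "oblique"

# Soft-voting ensemble: average the class-probability vectors.
probs = (softmax(logits_resnet18) + softmax(logits_vgg16)) / 2.0
prediction = FRACTURE_TYPES[int(np.argmax(probs))]
confidence = float(probs.max())   # reported as the confidence score
```

Averaging probabilities rather than hard votes lets a borderline backbone temper the ensemble, and the maximum averaged probability gives a natural confidence value between 0 and 1.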

Keywords: multiclass classification, object detection, ResNet18, U-Net, VGG16

Procedia PDF Downloads 120
60 Deterioration Prediction of Pavement Load Bearing Capacity from FWD Data

Authors: Kotaro Sasai, Daijiro Mizutani, Kiyoyuki Kaito

Abstract:

Expressways in Japan have been built in an accelerating manner since the 1960s with the aid of rapid economic growth. About 40 percent of expressway length in Japan is now 30 years old or older and has become superannuated. Time-related deterioration has therefore reached a degree at which administrators, from the standpoint of operation and maintenance, are forced to take prompt measures on a large scale aimed at repairing inner damage deep in pavements. Such measures have already been performed for bridge management in Japan and are also expected to be embodied in pavement management; thus, planning methods for these measures are increasingly in demand. Deterioration of the layers around the road surface, such as the surface course and binder course, occurs at the early stages of the whole pavement deterioration process, around 10 to 30 years after construction. These layers have been repaired primarily because inner damage usually becomes significant after outer damage, and because surveys for measuring inner damage, such as Falling Weight Deflectometer (FWD) surveys and open-cut surveys, are costly and time-consuming, which has made it difficult for administrators to focus on inner damage as much as they are supposed to. As expressways today carry serious time-related deterioration deriving from the long time span since they entered service, the idea of repairing layers deep in pavements, such as the base course and subgrade, must clearly be taken into consideration when planning maintenance on a large scale. This sort of maintenance requires precisely predicting degrees of deterioration as well as grasping the present condition of pavements. Methods for predicting deterioration are either mechanical or statistical. While few mechanical models have been presented, as far as the authors know, previous studies have presented statistical methods for predicting deterioration in pavements. 
One describes the deterioration process by estimating a Markov deterioration hazard model, while another illustrates it by estimating a proportional deterioration hazard model. Both studies analyze deflection data obtained from FWD surveys and present statistical methods for predicting the deterioration process of layers around the road surface; the base course and subgrade layers, however, remain unanalyzed. In this study, data collected from FWD surveys are analyzed to predict the deterioration process of layers deep in pavements, in addition to surface layers, by estimating a deterioration hazard model using continuous indexes. This model avoids the loss of information that occurs when discrete rating categories are set in a Markov deterioration hazard model for evaluating degrees of deterioration in roadbeds and subgrades. By portraying continuous indexes, the model can predict deterioration in each layer of the pavement and evaluate it quantitatively. Additionally, as the model can also depict the probability distribution of the indexes at an arbitrary point and establish an arbitrary risk control level, it is expected that this study will provide knowledge such as life cycle cost and informative content for the decision-making process concerning where and when to perform maintenance.
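A hazard formulation over a continuous index can be sketched with a Weibull hazard, a common choice when the risk of deterioration grows with pavement age. The form and parameter values below are illustrative only, not the model estimated in the study:

```python
import math

def weibull_hazard(t, alpha, m):
    """Hazard rate h(t) = alpha * m * t**(m - 1); m > 1 gives a risk that
    increases with age, matching progressively accumulating inner damage."""
    return alpha * m * t ** (m - 1)

def survival(t, alpha, m):
    """Probability that the layer has not deteriorated past the threshold
    by time t: S(t) = exp(-alpha * t**m)."""
    return math.exp(-alpha * t ** m)

def expected_life(alpha, m):
    """Mean time to deterioration: E[T] = alpha**(-1/m) * Gamma(1 + 1/m)."""
    return alpha ** (-1.0 / m) * math.gamma(1.0 + 1.0 / m)

# Example: an early-deteriorating surface course vs. a slower base course
# (parameter values are purely illustrative).
life_surface = expected_life(alpha=0.01, m=2.0)
life_base = expected_life(alpha=0.001, m=2.0)
```

Layer-specific parameters estimated from FWD deflection indexes would slot into `alpha` and `m`, and the survival curve directly yields the probability distributions and risk levels the abstract mentions.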

Keywords: deterioration hazard model, falling weight deflectometer, inner damage, load bearing capacity, pavement

Procedia PDF Downloads 390
59 Toy Engagement Patterns in Infants with a Familial History of Autism Spectrum Disorder

Authors: Vanessa Do, Lauren Smith, Leslie Carver

Abstract:

It is widely known that individuals with autism spectrum disorder (ASD) may exhibit sensitivity to stimuli. Even at a young age, they tend to display stimuli-related discomfort in their behavior during play. Play serves a crucial role in a child’s early years, as it helps support healthy brain development, socio-emotional skills, and adaptation to their environment. There is research dedicated to studying infant preferences for toys, especially in regard to gender preferences, the advantages of promoting play, and the caregiver’s role in their child’s play routines. However, there is a disproportionate amount of literature examining how play patterns may differ in children with sensory sensitivity, such as children diagnosed with ASD. Prior literature has found supporting evidence that individuals with ASD have deficits in social communication and an increased presence of repetitive behaviors and/or restricted interests, which also display in early childhood play patterns. This study aims to examine potential differences in toy preference between infants with (FH+) and without (FH-) a familial history of ASD at ages 6, 9, and 12 months old. More specifically, this study will address the question, “do FH+ infants tend to play more with toys that require less social engagement compared to FH- infants?” Infants and their caregivers were recruited and asked to engage in a free-play session in their homes that lasted approximately 5 minutes. 
The sessions were recorded and later coded offline for engagement behaviors categorized by toy; each toy that the infants interacted with was coded as belonging to one of 6 categories: sensory (designed to stimulate one or more senses, such as light-up toys or musical toys), construction (e.g., building blocks, rubber suction cups), vehicles (e.g., toy cars), instructional (requiring steps to accomplish a goal, such as flip phones or books), imaginative (e.g., dolls, stuffed animals), and miscellaneous (toys that do not fit into these categories). Toy engagement was defined as the infant looking at and touching the toy (ILT) or looking at the toy while their caregiver was holding it (IL-CT). Results reported will include the proportion of time the infant was actively engaged with the toy out of the total usable video time per subject; distractions observed during the session were excluded from analysis. Data collection is still ongoing; however, the prediction is that FH+ infants will have higher engagement with sensory and construction toys, as these require the least social effort. Furthermore, FH+ infants will have the least engagement with imaginative toys, as prior literature has supported the claim that individuals with ASD have a decreased likelihood of engaging in pretend play and other play requiring social skills. Examining which toys are more or less engaging to FH+ infants is important, as it provides significant contributions to their healthy cognitive, social, and emotional development. As play is one of the first ways for a child to understand the complexities of the larger world, the findings of this study may help guide further research into encouraging play with toys that are more engaging and sensory-sensitive for children with ASD.

Keywords: autism engagement, children’s play, early development, free-play, infants, toy

Procedia PDF Downloads 219
58 Comparison of Machine Learning-Based Models for Predicting Streptococcus pyogenes Virulence Factors and Antimicrobial Resistance

Authors: Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Diego Santibañez Oyarce, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán

Abstract:

Streptococcus pyogenes is a gram-positive bacterium involved in a wide range of diseases and is a major human-specific bacterial pathogen. In Chile, this year the 'Ministerio de Salud' declared an alert due to the increase in strains throughout the year. This increase can be attributed to a multitude of factors, including antimicrobial resistance (AMR) and virulence factors (VF). Understanding these VF and AMR is crucial for developing effective strategies and improving public health responses. Moreover, experimental identification and characterization of these pathogenic mechanisms are labor-intensive and time-consuming. Therefore, new computational methods are required to provide robust techniques for accelerating this identification. Advances in machine learning (ML) algorithms represent an opportunity to refine and accelerate the discovery of VF associated with Streptococcus pyogenes. In this work, we evaluate the accuracy of various machine learning models in predicting the virulence factors and antimicrobial resistance of Streptococcus pyogenes, with the objective of providing new methods for identifying the pathogenic mechanisms of this organism. Our comprehensive approach involved the download of 32,798 GenBank files of S. pyogenes from the NCBI dataset, coupled with the incorporation of data from the Virulence Factor Database (VFDB) and the Comprehensive Antibiotic Resistance Database (CARD), which contains AMR gene sequences and resistance profiles. These datasets provided labeled examples of both virulent and non-virulent genes, enabling a robust foundation for feature extraction and model training. We employed preprocessing, characterization and feature extraction techniques on primary nucleotide/amino acid sequences and selected the optimal features for model training. The feature set was constructed using sequence-based descriptors (e.g., k-mers and one-hot encoding) and functional annotations based on database prediction. 
The ML models compared include logistic regression, decision trees, support vector machines, and neural networks, among others. The results of this work show differences in accuracy between the algorithms; these differences allow us to identify aspects that represent unique opportunities for a more precise and efficient characterization and identification of VF and AMR. This comparative analysis underscores the value of integrating machine learning techniques in predicting S. pyogenes virulence and AMR, offering potential pathways for more effective diagnostic and therapeutic strategies. Future work will focus on incorporating additional omics data, such as transcriptomics, and exploring advanced deep learning models to further enhance predictive capabilities.
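The model comparison described above can be sketched with scikit-learn. This is a hedged illustration on synthetic data, assuming scikit-learn's standard estimators; the actual study trains on VFDB/CARD-derived labels and its own feature set:

```python
# Minimal sketch of comparing the four model families named in the abstract,
# using cross-validated accuracy on synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 16))                    # stand-in for k-mer frequency vectors
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)    # synthetic virulent/non-virulent labels

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "svm": SVC(),
    "neural_network": MLPClassifier(hidden_layer_sizes=(32,),
                                    max_iter=2000, random_state=0),
}
# 5-fold cross-validated mean accuracy for each model family
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
```

In practice, the accuracy differences between such families are exactly what a comparative analysis like the one in the abstract reports.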

Keywords: antibiotic resistance, streptococcus pyogenes, virulence factors, machine learning

Procedia PDF Downloads 31
57 Mathematical Modeling of Avascular Tumor Growth and Invasion

Authors: Meitham Amereh, Mohsen Akbari, Ben Nadler

Abstract:

Cancer has been recognized as one of the most challenging problems in biology and medicine. Aggressive tumors are a lethal class of cancers characterized by high genomic instability, rapid progression, invasiveness, and therapeutic resistance. Their behavior involves complicated molecular biology and consequential dynamics. Although tremendous effort has been devoted to developing therapeutic approaches, there is still a huge need for new insights into the dark aspects of tumors. As one of the key requirements for better understanding the complex behavior of tumors, mathematical modeling, and continuum physics in particular, plays a pivotal role. Mathematical modeling can provide quantitative predictions of biological processes and help interpret complicated physiological interactions in the tumor microenvironment. The pathophysiology of aggressive tumors is strongly affected by extracellular cues such as the stresses produced by mechanical forces between the tumor and the host tissue. During tumor progression, the growing mass displaces the surrounding extracellular matrix (ECM), and, depending on the level of tissue stiffness, stress accumulates inside the tumor. The produced stress can influence the tumor by breaking adherens junctions. During this process, the tumor stops rapid proliferation and begins to remodel its shape to preserve a homeostatic equilibrium state. To achieve this, the tumor, in turn, upregulates epithelial-to-mesenchymal transition-inducing transcription factors (EMT-TFs). These EMT-TFs are involved in various signaling cascades, which are often associated with tumor invasiveness and malignancy. In this work, we modeled the tumor as a growing hyperelastic mass and investigated the effects of mechanical stress from the surrounding ECM on tumor invasion. The invasion is modeled as a volume-preserving inelastic evolution. In this framework, principal balance laws are considered for tumor mass, linear momentum, and diffusion of nutrients.
Also, mechanical interactions between the tumor and the ECM are modeled using the Ciarlet constitutive strain energy function, and the dissipation inequality is utilized to model the volumetric growth rate. System parameters, such as the rates of nutrient uptake and cell proliferation, were obtained experimentally. To validate the model, human glioblastoma multiforme (hGBM) tumor spheroids were embedded in a Matrigel/alginate composite hydrogel and injected into a microfluidic chip to mimic the tumor's natural microenvironment. The invasion structure was analyzed by imaging the spheroids over time. Also, the expression of transcription factors involved in invasion was measured by immunostaining the tumor. The volumetric growth, stress distribution, and inelastic evolution of the tumors were predicted by the model. Results showed that the level of invasion correlates directly with the level of predicted stress within the tumor. Moreover, the invasion length measured by fluorescence imaging was shown to be related to the inelastic evolution of the tumors obtained by the model.
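The principal balance laws named in the abstract (tumor mass, linear momentum, and nutrient diffusion) typically take the following generic continuum forms; the notation here is illustrative and not necessarily the authors' exact formulation:

```latex
% Mass balance with a nutrient-dependent volumetric growth source \gamma(c)
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\,\mathbf{v}) = \gamma(c)\,\rho
% Quasi-static balance of linear momentum (inertia neglected on growth time scales)
\nabla \cdot \boldsymbol{\sigma} = \mathbf{0}
% Diffusion of nutrient concentration c with diffusivity D and uptake rate \delta
\frac{\partial c}{\partial t} = \nabla \cdot (D\,\nabla c) - \delta\,\rho\,c
```

Here \(\rho\) is the tumor mass density, \(\mathbf{v}\) the velocity, \(\boldsymbol{\sigma}\) the Cauchy stress (derived from the hyperelastic strain energy), and \(c\) the nutrient concentration; coupling the growth source \(\gamma(c)\) to \(c\) is what ties proliferation to nutrient availability.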

Keywords: cancer, invasion, mathematical modeling, microfluidic chip, tumor spheroids

Procedia PDF Downloads 111