Search results for: airy stress function
561 Adaptation of the Scenario Test for Greek-speaking People with Aphasia: Reliability and Validity Study
Authors: Marina Charalambous, Phivos Phylactou, Thekla Elriz, Loukia Psychogios, Jean-Marie Annoni
Abstract:
Background: Evidence-based practices for the evaluation and treatment of people with aphasia (PWA) in Greek are mainly impairment-based. Functional and multimodal communication is usually under-assessed and neglected by clinicians. This study explores the adaptation and psychometric testing of the Greek (GR) version of The Scenario Test. The Scenario Test assesses the everyday functional communication of PWA in an interactive multimodal communication setting with the support of an active communication facilitator. Aims: To define the reliability and validity of The Scenario Test-GR and discuss its clinical value. Methods & Procedures: The Scenario Test-GR was administered to 54 people with chronic stroke (6+ months post-stroke): 32 PWA and 22 people with stroke without aphasia. Participants were recruited from Greece and Cyprus. All measures were performed in an interview format. Standard psychometric criteria were applied to evaluate reliability (internal consistency, test-retest, and interrater reliability) and validity (construct and known-groups validity) of The Scenario Test-GR. Video analysis was performed for the qualitative examination of the communication modes used. Outcomes & Results: The Scenario Test-GR shows high levels of reliability and validity. High scores of internal consistency (Cronbach’s α = .95), test-retest reliability (ICC = .99), and interrater reliability (ICC = .99) were found. Interrater agreement in scores on individual items fell between good and excellent levels of agreement. Correlations with a tool measuring language function in aphasia (the Aphasia Severity Rating Scale of the Boston Diagnostic Aphasia Examination), a measure of functional communication (the Communicative Effectiveness Index), and two instruments examining the psychosocial impact of aphasia (the Stroke and Aphasia Quality of Life questionnaire and the Aphasia Impact Questionnaire) revealed good convergent validity (all ps < .05). Results showed good known-groups validity (Mann-Whitney U = 96.5, p < .001), with significantly higher scores for participants without aphasia compared to those with aphasia. Conclusions: The psychometric qualities of The Scenario Test-GR support the reliability and validity of the tool for the assessment of functional communication for Greek-speaking PWA. The Scenario Test-GR can be used to assess multimodal functional communication, orient aphasia rehabilitation goal setting towards the activity and participation level, and be used as an outcome measure of everyday communication. Future studies will focus on the measurement of sensitivity to change in PWA with severe non-fluent aphasia.
Keywords: the scenario test GR, functional communication assessment, people with aphasia (PWA), tool validation
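As a concrete illustration of the internal-consistency statistic reported above, the short Python sketch below computes Cronbach's α from a participants-by-items score matrix. The function name and the toy data are hypothetical; they are not taken from the study.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of test items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of participants' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical example: 6 participants scored on 4 scenario items (0-6 scale).
demo = np.array([
    [5, 4, 5, 6],
    [2, 1, 2, 2],
    [4, 4, 3, 5],
    [6, 5, 6, 6],
    [1, 2, 1, 1],
    [3, 3, 4, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(demo):.2f}")
```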
Procedia PDF Downloads 128
560 Gamifying Content and Language Integrated Learning: A Study Exploring the Use of Game-Based Resources to Teach Primary Mathematics in a Second Language
Authors: Sarah Lister, Pauline Palmer
Abstract:
Research findings presented within this paper form part of a larger scale collaboration between academics at Manchester Metropolitan University and a technology company. The overarching aims of this project focus on developing a series of game-based resources to promote the teaching of aspects of mathematics through a second language (L2) in primary schools. This study explores the potential of game-based learning (GBL) as a dynamic way to engage and motivate learners, making learning fun and purposeful. The research examines the capacity of GBL resources to provide a meaningful and purposeful context for CLIL. GBL is a powerful learning environment and acts as an effective vehicle to promote the learning of mathematics through an L2. The fun element of GBL can minimise the stress and anxiety associated with mathematics and L2 learning that can create barriers. GBL provides one of the few safe domains where it is acceptable for learners to fail. Games can provide a life-enhancing experience for learners, revolutionizing the routinized ways of learning through fusing learning and play. This study argues that playing games requires learners to think creatively to solve mathematical problems, using the L2 in order to progress, which can be associated with the development of higher-order thinking skills and independent learning. GBL requires learners to engage appropriate cognitive processes with increased speed of processing, sensitivity to environmental inputs, or flexibility in allocating cognitive and perceptual resources. At surface level, GBL resources provide opportunities for learners to learn to do things. Games that fuse subject content and appropriate learning objectives have the potential to make learning academic subjects more learner-centered, easier, more enjoyable, more stimulating, and more engaging, to promote learner autonomy, and therefore to be more effective. Data include observations of the children playing the games and follow-up group interviews. Given that learning as a cognitive event cannot be directly observed or measured, a Cognitive Discourse Functions (CDF) construct was used to frame the research, to map the development of learners’ conceptual understanding in an L2 context, and as a framework to observe the discursive interactions that occur between learners and between learner and teacher. Cognitively, the children were required to engage with mathematical content, concepts, and language to make decisions quickly, to engage with the gameplay to reason, solve and overcome problems, and to learn through experimentation. The visual elements of the games supported the learning of new concepts. Children recognised the value of the games to consolidate their mathematical thinking and develop their understanding of new ideas. The games afforded them time to think and reflect. The teachers affirmed that the games provided meaningful opportunities for the learners to practise the language. The findings of this research support the view that using the game-based resources supported children’s grasp of mathematical ideas and their confidence and ability to use the L2. Engaging with the content and language through the games led to deeper learning.
Keywords: CLIL, gaming, language, mathematics
Procedia PDF Downloads 142
559 Adaptation Measures as a Response to Climate Change Impacts and Associated Financial Implications for Construction Businesses by the Application of a Mixed Methods Approach
Authors: Luisa Kynast
Abstract:
It is obvious that buildings and infrastructure are highly impacted by climate change (CC). Both the design and the materials of buildings need to be resilient to weather events in order to shelter humans, animals, or goods. Just as buildings and infrastructure are exposed to weather events, the construction process itself is generally carried out outdoors without protection from extreme temperatures, heavy rain, or storms. The production process is restricted by technical limitations for processing materials with machines and physical limitations due to human beings (“outdoor workers”). In the future, due to CC, average weather patterns are expected to change, and extreme weather events are expected to occur more frequently and more intensely, and therefore to have a greater impact on production processes and on the construction businesses themselves. This research aims to examine this impact by analyzing the association between responses to CC and the financial performance of businesses within the construction industry. After embedding the above-depicted field of research into resource dependency theory, a literature review was conducted to expound the state of research concerning a contingent relation between climate change adaptation measures (CCAM) and corporate financial performance for construction businesses. The examined studies prove that this field is rarely investigated, especially for construction businesses. Therefore, reports of the Carbon Disclosure Project (CDP) were analyzed by applying content analysis using the software tool MAXQDA. 58 construction companies, located worldwide, could be examined. To proceed even more systematically, a coding scheme analogous to findings in the literature was adopted. The qualitative analysis was then quantified, and a regression analysis incorporating corporate financial data was conducted. The results stress adaptation measures as a response to CC as a crucial proxy for handling climate change impacts (CCI) by mitigating risks and exploiting opportunities. In the CDP reports, the majority of answers stated increasing costs/expenses as a result of implemented measures; a link to sales/revenue was rarely drawn, although where it was, CCAM were connected to increasing sales/revenues. This presumption is supported by the results of the regression analysis, in which a positive effect of implemented CCAM on construction businesses’ financial performance in the short run was ascertained. These findings refer to appropriate responses in terms of the implemented number of CCAM. Still, businesses show a reluctant attitude towards implementing CCAM, which was confirmed by findings in the literature as well as in the CDP reports. Businesses mainly associate CCAM with costs and expenses rather than with an effect on their corporate financial performance; mostly, companies underrate the effect of CCI, overrate the costs and expenditures for the implementation of CCAM, and completely neglect the pay-off. Therefore, this research shall create a basis for bringing CC to the (financial) attention of corporate decision-makers, especially within the construction industry.
Keywords: climate change adaptation measures, construction businesses, financial implication, resource dependency theory
Procedia PDF Downloads 143
558 Analyzing Transit Network Design versus Urban Dispersion
Authors: Hugo Badia
Abstract:
This research addresses which transit network structure is the most suitable to serve specific demand requirements in an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, develops a high number of lines to connect most origin-destination pairs by direct trips; an approach based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks where transferring is essential to complete most trips. To answer which of them is the best option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct trip-based network; and a transfer-based one, the latter two representing the two alternative transit network designs. The model optimizes the network configuration with regard to the total cost for each structure. For a given scenario of dispersion, the best alternative is the structure with the minimum cost. This dispersion degree is defined in a simple way by considering that only a central area attracts all trips. If this area is small, we have a highly concentrated mobility pattern; if this area is too large, the city is highly decentralized. In this first step, we can determine the area of applicability for each structure as a function of that urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when this demand starts to scatter, new transit lines should be implemented to avoid transfers. If the urban dispersion advances, the introduction of more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers. The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, city, and transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, defined by the Gini coefficient, and centralization, by an area-based centralization index. Once the real dispersion degree is estimated, we are able to identify in which area of applicability the city is located. In summary, from a strategic point of view, we can obtain with this methodology the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
Keywords: analytical network design model, network structure, public transport, urban dispersion
Procedia PDF Downloads 230
557 Automatic Aggregation and Embedding of Microservices for Optimized Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs a unique process and gets instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have a similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, and a2 and b2 on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies which forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of running dependencies and the previous security concern, thus greatly simplifying aggregation/embedding implementations by just deploying a microservice container on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.
Keywords: aggregation, deployment, embedding, resource allocation
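The embedding example in the abstract (pairing a1 with b1 and a2 with b2 on shared machines) can be sketched as a simple placement routine. This is a minimal Python illustration of the pairing idea only; the function and data structures are hypothetical and are not part of the i2kit tool.

```python
from itertools import zip_longest

def embed_pairs(instances_a, instances_b):
    """Co-locate the i-th instance of service A with the i-th instance of
    service B, so each pair talks over localhost instead of a load balancer."""
    placement = {}
    for i, (a, b) in enumerate(zip_longest(instances_a, instances_b)):
        machine = f"m{i + 1}"
        placement[machine] = [svc for svc in (a, b) if svc is not None]
    return placement

# Hypothetical deployment: A and B each have two instances.
print(embed_pairs(["a1", "a2"], ["b1", "b2"]))
# {'m1': ['a1', 'b1'], 'm2': ['a2', 'b2']}
```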
Procedia PDF Downloads 203
556 An Evolutionary Approach for QAOA for Max-Cut
Authors: Francesca Schiavello
Abstract:
This work aims to create a hybrid algorithm, combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when the algorithm performed better than the traditional best-known classical algorithm for Max-Cut graphs. Whilst classical algorithms have improved since then and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and that work is often used as a benchmarking tool and a foundational tool to explore variants of QAOAs. This, alongside other famous algorithms like Grover’s or Shor’s, highlights to the world the potential that quantum computing holds. It also presents the reality of a real quantum advantage where, if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate in creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problem that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization problem. The evaluation of the cost function, like in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOAs with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA with a COBYLA optimizer, which is a linear-based method, and in some instances, it can even create a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, due to either speedups or quality of the solution, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization
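A minimal sketch of the hybrid idea: an evolutionary loop mutates and selects QAOA angle pairs (gamma, beta) instead of updating them with a gradient-based or linear method such as COBYLA. The qaoa_expectation function below is a stand-in for the expensive quantum circuit evaluation; its toy landscape, the population size, and the mutation scale are all assumptions for illustration.

```python
import random
from math import sin, cos

def qaoa_expectation(params):
    """Placeholder for the quantum evaluation of the Max-Cut cost at the
    given (gamma, beta) angles. A real run would execute the QAOA circuit
    on a simulator or QPU; here a smooth toy landscape stands in."""
    gamma, beta = params
    return -(sin(2 * gamma) * cos(beta) + 0.5 * cos(gamma))

def evolve(pop_size=20, generations=50, sigma=0.1):
    """(mu + lambda)-style evolutionary search over QAOA angles."""
    pop = [(random.uniform(0, 3.14), random.uniform(0, 3.14))
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=qaoa_expectation)   # lower energy = better cut
        parents = scored[: pop_size // 2]            # truncation selection
        children = [(g + random.gauss(0, sigma), b + random.gauss(0, sigma))
                    for g, b in parents]             # Gaussian mutation
        pop = parents + children
    return min(pop, key=qaoa_expectation)

best = evolve()
print("best angles:", best, "energy:", qaoa_expectation(best))
```

Because each individual in the population is evaluated independently, the scoring step parallelizes naturally, which is the parallelization avenue the abstract mentions.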
Procedia PDF Downloads 60
555 Towards Accurate Velocity Profile Models in Turbulent Open-Channel Flows: Improved Eddy Viscosity Formulation
Authors: W. Meron Mebrahtu, R. Absi
Abstract:
Velocity distribution in turbulent open-channel flows is organized in a complex manner. This is due to the large spatial and temporal variability of fluid motion resulting from the free-surface turbulent flow condition. This phenomenon is complicated further by the complex geometry of channels and the presence of transported solids. Thus, several efforts have been made to understand the phenomenon and obtain accurate mathematical models that are suitable for engineering applications. However, predictions are inaccurate because oversimplified assumptions are involved in modeling this complex phenomenon. Therefore, the aim of this work is to study velocity distribution profiles and obtain simple, more accurate, and predictive mathematical models. Particular focus will be placed on acceptable simplifications of the general transport equations and an accurate representation of the eddy viscosity. A wide rectangular open channel seems suitable to begin the study; the other assumptions are a smooth wall and sediment-free flow under steady and uniform flow conditions. These assumptions will allow examining the effect of the bottom wall and the free surface only, which is a necessary step before dealing with more complex flow scenarios. For this flow condition, two ordinary differential equations are obtained for the velocity profiles: from the Reynolds-averaged Navier-Stokes (RANS) equation and from an equilibrium consideration between turbulent kinetic energy (TKE) production and dissipation. Then different analytical models for the eddy viscosity, TKE, and mixing length were assessed. Computed velocity profiles were compared to experimental data for different flow conditions and to the well-known linear, log, and log-wake laws. Results show that the model based on the RANS equation provides more accurate velocity profiles. In the viscous sublayer and buffer layer, the method based on Prandtl’s eddy viscosity model and the Van Driest mixing length gives a more precise result. For the log layer and outer region, a mixing length equation derived from Von Karman’s similarity hypothesis provides the best agreement with measured data, except near the free surface, where an additional correction based on a damping function for the eddy viscosity is used. This method allows more accurate velocity profiles, with the same value of the damping coefficient being valid under different flow conditions. This work continues with the investigation of narrow channels, complex geometries, and the effect of solids transported in sewers.
Keywords: accuracy, eddy viscosity, sewers, velocity profile
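For reference, the classical relations invoked above, the log law of the wall and the Van Driest mixing length, are usually written as follows; this is the standard textbook formulation, not the paper's own derivation.

```latex
% Log law of the wall (smooth wall: kappa ~ 0.41, B ~ 5.0)
u^{+} = \frac{1}{\kappa}\,\ln y^{+} + B
% Van Driest mixing length with near-wall damping (A^{+} ~ 26)
\ell_m = \kappa\, y \left[ 1 - \exp\!\left(-\frac{y^{+}}{A^{+}}\right) \right]
```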
Procedia PDF Downloads 112
554 The Effects of Computer Game-Based Pedagogy on Graduate Students' Statistics Performance
Authors: Eva Laryea, Clement Yeboah
Abstract:
A pretest-posttest within-subjects experimental design was employed to examine the effects of a computerized basic statistics learning game on the achievement and statistics-related anxiety of students enrolled in an introductory graduate statistics course. Participants (N = 34) were graduate students in a variety of programs at a state-funded research university in the Southeast United States. We analyzed pretest-posttest differences using paired samples t-tests for achievement and for statistics anxiety. The results of the t-test for knowledge in statistics were found to be statistically significant, indicating significant mean gains for statistical knowledge as a function of the game-based intervention. Likewise, the results of the t-test for statistics-related anxiety were also statistically significant, indicating a decrease in anxiety from pretest to posttest. The implications of the present study are significant for both teachers and students. For teachers, using computer games developed by the researchers can help to create a more dynamic and engaging classroom environment, as well as improve student learning outcomes. For students, playing these educational games can help to develop important skills such as problem solving, critical thinking, and collaboration. Students can develop interest in the subject matter and spend quality time learning the course as they play the game, without noticing that they are learning a supposedly hard course. The future directions of the present study are promising, as technology continues to advance and become more widely available. Some potential future developments include the integration of virtual and augmented reality into educational games, the use of machine learning and artificial intelligence to create personalized learning experiences, and the development of new and innovative game-based assessment tools. It is also important to consider the ethical implications of computer game-based pedagogy, such as the potential for games to perpetuate harmful stereotypes and biases. As the field continues to evolve, it will be crucial to address these issues and work towards creating inclusive and equitable learning experiences for all students. This study has the potential to revolutionize the way graduate students learn basic statistics and offers exciting opportunities for future development and research. It is an important area of inquiry for educators, researchers, and policymakers, and will continue to be a dynamic and rapidly evolving field for years to come.
Keywords: pretest-posttest within subjects, experimental design, achievement, statistics-related anxiety
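A paired samples t-test of the kind described above can be run in a few lines of Python; the scores below are invented for illustration (the actual study had N = 34).

```python
import numpy as np
from scipy import stats

# Hypothetical pretest/posttest achievement scores for the same students.
pre = np.array([12, 15, 9, 14, 11, 13, 10, 16])
post = np.array([16, 18, 12, 17, 13, 17, 14, 19])

t_stat, p_value = stats.ttest_rel(post, pre)  # paired samples t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, "
      f"mean gain = {np.mean(post - pre):.2f}")
```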
Procedia PDF Downloads 58
553 Vitamin D Levels of Patients with Rheumatoid Arthritis in Kosova
Authors: Mjellma Rexhepi, Blerta Rexhepi Kelmendi, Blana Krasniqi, Shaip Krasniqi
Abstract:
Rheumatoid arthritis is a chronic disease that causes inflammation of the joints, which can be so severe that it causes not only deformities but also impairment of function that limits movement. This also contributes to the pain that accompanies this disease. It remains a problematic and challenging disease of modern medicine because treatment is still symptomatic. The main purpose of drug treatment is to reduce the activity of the disease, achieve remission, and avoid disability and death. The etiology of the disease is idiopathic, but it can be linked to genetic and nongenetic factors, such as hormonal, environmental, or infectious ones. Current scientific evidence shows that vitamin D plays an important role in immune regulation mechanisms. Lack of this vitamin has been linked to loss of immune tolerance and the appearance of autoimmune processes, including rheumatoid arthritis. The purpose of the work was to determine vitamin D levels in patients hospitalized with rheumatoid arthritis at the University Clinical Center of Kosova, as a basis for relating them to lifestyle and physical inactivity. The sample for the work was selected from patients meeting the criteria for rheumatoid arthritis who were hospitalized at the tertiary level of health care in Kosova. In this work, 100 consecutive patients fulfilling the diagnostic criteria for rheumatoid arthritis were investigated; in addition to their general characteristics, the values of vitamin D were determined at the beginning of hospitalization. The average age of the sample analyzed was 50.9±5.7 years, with an average duration of rheumatoid arthritis of 7.8±3.4 years. At the beginning of hospitalization, before treatment was initiated, the average value of vitamin D was 15.86±3.43, which according to current reference values is classified in the category of insufficient values. Correlating the duration of the disease, from the time of diagnosis to the day of hospitalization, with the level of vitamin D, a weak negative correlation was derived (r = -0.1). Physical activity affects the concentration of vitamin D in the blood through increased metabolism of fat and the release of vitamin D and its metabolites from adipose tissue. It is now evident that physical activity is also accompanied by higher levels of vitamin D. In patients with rheumatoid arthritis, vitamin D levels were low compared to normal. Future work should be oriented toward investigating in detail the bone structure, quality of life, and pain in patients with rheumatoid arthritis. More detailed scientific projects, with larger numbers of participants, should be designed in the future to clarify possible mechanisms and factors related to this phenomenon, such as inactivity, lifestyle, and the duration of the disease, as well as the importance of keeping vitamin D values within normal limits.
Keywords: hospitalization, lifestyle, rheumatoid arthritis, vitamin D
Procedia PDF Downloads 15
552 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia
Authors: Jun Won Kim
Abstract:
Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to our best knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing the TGC at rest between the two groups and to evaluate the diagnostic utility of TGC. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because patients were either drug-naïve (first episode) or had not been taking psychoactive drugs for one month before the study, we could exclude the influence of medications. Six frequency bands were defined for the spectral analyses: delta (1–4 Hz), theta (4–8 Hz), slow alpha (8–10 Hz), fast alpha (10–13.5 Hz), beta (13.5–30 Hz), and gamma (30–80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the Signal Processing Toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, which were adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the discriminating ability of the TGC data for schizophrenia diagnosis. Results: The patients with schizophrenia showed a significant increase in the resting-state TGC at all electrodes. The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which contains information about neuronal interactions from the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to that in the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to the compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility
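Phase-amplitude coupling such as TGC is commonly quantified with a mean-vector-length modulation index computed via the Hilbert transform. The Python sketch below illustrates that generic approach on synthetic data; it is not the authors' Matlab pipeline, and the sampling rate and filter settings are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def theta_gamma_coupling(eeg, fs):
    """Mean-vector-length modulation index: couples theta phase (4-8 Hz)
    with gamma amplitude (30-80 Hz) of the same signal."""
    theta_phase = np.angle(hilbert(bandpass(eeg, 4, 8, fs)))
    gamma_amp = np.abs(hilbert(bandpass(eeg, 30, 80, fs)))
    return np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase)))

# Synthetic demo: gamma bursts locked to theta peaks yield nonzero TGC.
fs = 250
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 40 * t)  # amplitude-modulated gamma
print(f"TGC = {theta_gamma_coupling(theta + 0.5 * gamma, fs):.3f}")
```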
Procedia PDF Downloads 143
551 Predictive Modelling of Curcuminoid Bioaccessibility as a Function of Food Formulation and Associated Properties
Authors: Kevin De Castro Cogle, Mirian Kubo, Maria Anastasiadi, Fady Mohareb, Claire Rossi
Abstract:
Background: The bioaccessibility of bioactive compounds is a critical determinant of the nutritional quality of various food products. Despite its importance, there are only a limited number of comprehensive studies aimed at assessing how the composition of a food matrix influences the bioaccessibility of a compound of interest. This knowledge gap has prompted a growing need to investigate the intricate relationship between food matrix formulations and the bioaccessibility of bioactive compounds. One class of bioactive compounds that has attracted considerable attention is curcuminoids. These naturally occurring phytochemicals, extracted from the roots of Curcuma longa, have gained popularity owing to their purported health benefits and are also well known for their poor bioaccessibility. Project aim: The primary objective of this research project is to systematically assess the influence of matrix composition on the bioaccessibility of curcuminoids. Additionally, this study aimed to develop a series of predictive models for bioaccessibility, providing valuable insights for optimising the formula for functional foods and providing more descriptive nutritional information to potential consumers. Methods: Food formulations enriched with curcuminoids were subjected to in vitro digestion simulation, and their bioaccessibility was characterized with chromatographic and spectrophotometric techniques. The resulting data served as the foundation for the development of predictive models capable of estimating bioaccessibility based on specific physicochemical properties of the food matrices. Results: One striking finding of this study was the strong correlation observed between the concentration of macronutrients within the food formulations and the bioaccessibility of curcuminoids. In fact, macronutrient content emerged as a very informative explanatory variable of bioaccessibility and was used, alongside other variables, as a predictor in a Bayesian hierarchical model that predicted curcuminoid bioaccessibility accurately (optimisation performance of 0.97 R²) for the majority of cross-validated test formulations (LOOCV of 0.92 R²). These preliminary results open the door to further exploration, enabling researchers to investigate a broader spectrum of food matrix types and additional properties that may influence bioaccessibility. Conclusions: This research sheds light on the intricate interplay between food matrix composition and the bioaccessibility of curcuminoids. This study lays a foundation for future investigations, offering a promising avenue for advancing our understanding of bioactive compound bioaccessibility and its implications for the food industry and informed consumer choices.
Keywords: bioactive bioaccessibility, food formulation, food matrix, machine learning, probabilistic modelling
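As a simplified stand-in for the Bayesian hierarchical model, the sketch below shows the leave-one-out cross-validation (LOOCV) workflow with an ordinary linear regression in scikit-learn; the macronutrient features and the synthetic response are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical design matrix: macronutrient composition of 30 formulations
# (columns: fat %, protein %, carbohydrate %) and measured bioaccessibility.
X = rng.uniform(0, 40, size=(30, 3))
y = 0.2 + 0.02 * X[:, 0] - 0.005 * X[:, 1] + rng.normal(0, 0.05, 30)

# Leave-one-out cross-validation, mirroring the study's LOOCV evaluation.
pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
print(f"LOOCV R^2 = {r2_score(y, pred):.2f}")
```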
Procedia PDF Downloads 68
550 Using Structural Equation Modeling to Measure the Impact of Young Adult-Dog Personality Characteristics on Dog Walking Behaviours during the COVID-19 Pandemic
Authors: Renata Roma, Christine Tardif-Williams
Abstract:
Engaging in daily walks with a dog (Canis lupus familiaris) during the COVID-19 pandemic may be linked to feelings of greater social-connectedness and global self-worth, and lower stress, after controlling for mental health issues, lack of physical contact with others, and other stressors associated with the current pandemic. Therefore, maintaining a routine of dog walking might mitigate the effects of stressors experienced during the pandemic and promote well-being. However, many dog owners do not walk their dogs, for reasons related to both the owner’s and the dog’s personalities. Note that the consistency of certain personality characteristics among dogs demonstrates that it is possible to accurately measure different dimensions of personality in both dogs and their human counterparts. In addition, behavioural ratings (e.g., the dog personality questionnaire - DPQ) are reliable tools to assess a dog’s personality. Clarifying the relevance of personality factors in the context of young adult-dog relationships can shed light on interactional aspects that can potentially foster protective behaviours and promote well-being among young adults during the pandemic. This study examines if and how nine combinations of dog- and young adult-related personality characteristics (e.g., neuroticism-fearfulness) can amplify the influence of personality factors in the context of dog walking during the COVID-19 pandemic. Responses to an online large-scale survey among 440 (389 females; 47 males; 4 nonbinary; Mage=20.7, SD=2.13, range=17-25) young adults living with a dog in Canada were analyzed using structural equation modeling (SEM). As extraversion, conscientiousness, and neuroticism, measured through the five-factor model (FFM) inventory, are related to maintaining a routine of physical activities, these dimensions were selected for this analysis. Following an approach successfully adopted in the field of dog-human interactions, the FFM was used as the organizing framework to measure and compare the human’s and the dog’s personality in the context of dog walking. The dog-related personality dimensions activity/excitability, responsiveness to training, and fearfulness were correlated dimensions captured through the DPQ and were added to the analysis. Two questions were used to assess dog walking. The actor-partner interdependence model (APIM) was used to check whether the young adults’ responses about their dogs were biased; no significant bias was observed. Activity/excitability and responsiveness to training in dogs were strongly associated with dog walking. For young adults, high scores in conscientiousness and extraversion predicted more walks with the dog. Conversely, higher scores in neuroticism predicted less engagement in dog walking. For participants high in conscientiousness, the dog’s responsiveness to training (standardized estimate = 0.14, p = 0.02) and the dog’s activity/excitability (standardized estimate = 0.15, p = 0.00) moderated dog walking behaviours by promoting more daily walks. These results suggest that some combinations of young adult and dog personality characteristics are associated with greater synergy in the young adult-dog dyad that might amplify the impact of personality factors on young adults’ dog-walking routines.
These results can inform programs designed to promote the mental and physical health of young adults during the COVID-19 pandemic by highlighting the impact of synergy and reciprocity in personality characteristics between young adults and dogs.
Keywords: Covid-19 pandemic, dog walking, personality, structural equation modeling, well-being
Procedia PDF Downloads 115
549 Dual-use UAVs in Armed Conflicts: Opportunities and Risks for Cyber and Electronic Warfare
Authors: Piret Pernik
Abstract:
Based on strategic, operational, and technical analysis of the ongoing armed conflict in Ukraine, this paper will examine the opportunities and risks of using small commercial drones (dual-use unmanned aerial vehicles, UAVs) for military purposes. The paper discusses the opportunities and risks in the information domain, encompassing both cyber and electromagnetic interference and attacks. The paper will draw conclusions on a possible strategic impact on battlefield outcomes in modern armed conflicts from the widespread use of dual-use UAVs. This article will contribute to filling a gap in the literature by examining cyberattacks and electromagnetic interference on the basis of empirical data. Today, more than one hundred states and non-state actors possess UAVs, ranging from low-cost commodity models, which are widely dual-use, available, and affordable to anyone, to high-cost combat UAVs (UCAVs) with lethal kinetic strike capabilities, which can be enhanced with Artificial Intelligence (AI) and Machine Learning (ML). Dual-use UAVs have been used by various actors for intelligence, reconnaissance, surveillance, situational awareness, geolocation, and kinetic targeting. Thus they function as force multipliers enabling kinetic and electronic warfare attacks, and they provide comparative and asymmetric operational and tactical advantages. Some go so far as to argue that automated (or semi-automated) systems can change the character of warfare, while others observe that the use of small drones has not changed the balance of power or battlefield outcomes. UAVs give considerable opportunities to commanders, for example, because they can be operated without GPS navigation, which makes them less vulnerable and less dependent on satellite communications. They can and have been used to conduct cyberattacks, electromagnetic interference, and kinetic attacks. However, they are highly vulnerable to those attacks themselves. So far, strategic studies, the literature, and expert commentary have overlooked the cybersecurity and electronic interference dimension of the use of dual-use UAVs. Studies that link technical analysis of opportunities and risks with strategic battlefield outcomes are missing. It is expected that dual-use commercial UAV proliferation in armed and hybrid conflicts will continue and accelerate in the future. Therefore, it is important to understand the specific opportunities and risks related to the crowdsourced use of dual-use UAVs, which can have kinetic effects. Technical countermeasures to protect UAVs differ depending on the type of UAV (small, midsize, large, stealth combat), and this paper will offer a unique analysis of small UAVs both from the view of opportunities and risks for commanders and other actors in armed conflict.
Keywords: dual-use technology, cyber attacks, electromagnetic warfare, case studies of cyberattacks in armed conflicts
Procedia PDF Downloads 102
548 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization
Authors: Aitor Bilbao, Dragos Axinte, John Billingham
Abstract:
The inverse problem in Energy Beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), to arrive at a prescribed solution (freeform surface). This inverse problem is well understood for conventional machining, because the cutting tool geometry is well defined and material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam of particular characteristics (e.g., energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the beam vary with the dwell time, but any acceleration/deceleration of the machine/beam delivery system when performing raster paths will influence the actual geometry of the surface to be generated. Two different EB processes, Abrasive Waterjet Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Even though they are considered independent, different technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process, and the etched material depends on the feed speed of the jet at each instant during the process. On the other hand, PLA processes are usually defined as discrete systems, and the total removed material is calculated by the summation of the different pulses shot during the process. The overlapping of these shots depends on the feed speed and the frequency between two consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, then consecutive shots are close enough and the behaviour can be similar to a continuous process. Using this approximation, a generic continuous model can be described for both processes. The inverse problem is usually solved for this kind of process by simply controlling the dwell time in proportion to the required depth of milling at each single pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are only valid when shallow surfaces are etched. The solution of the inverse problem is improved by using a discrete adjoint optimization algorithm. Moreover, the calculation of the Jacobian matrix consumes less computation time than finite difference approaches. The influence of the dynamics of the machine on the actual movement of the jet is also important and should be taken into account. When the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation
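For orientation, the generic discrete adjoint construction referred to above can be summarized as follows; this is the standard abstract form, not the authors' specific discretization of the AWJM/PLA model.

```latex
% Minimize a surface-error functional J subject to the process model R = 0:
\min_{p}\; J\bigl(u(p),\,p\bigr) \quad \text{s.t.} \quad R\bigl(u(p),\,p\bigr) = 0
% Adjoint equation (one linear solve, independent of the number of parameters):
\left(\frac{\partial R}{\partial u}\right)^{\!\top} \lambda
  \;=\; -\left(\frac{\partial J}{\partial u}\right)^{\!\top}
% Total gradient assembled from the adjoint variable:
\frac{\mathrm{d}J}{\mathrm{d}p}
  \;=\; \frac{\partial J}{\partial p} \;+\; \lambda^{\top}\,\frac{\partial R}{\partial p}
```

The practical appeal, consistent with the abstract's remark about the Jacobian, is that one adjoint solve yields the full gradient with respect to all dwell-time parameters, whereas finite differences require one extra model evaluation per parameter.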
Procedia PDF Downloads 275
547 Study of Secondary Metabolites of Sargassum Algae: Anticorrosive and Antibacterial Activities
Authors: Prescilla Lambert, Christophe Roos, Mounim Lebrini
Abstract:
For several years, the Caribbean islands and West Africa have had to deal with the massive arrival of the brown seaweed Sargassum. Overall, this macroalga, which constitutes a habitat for a great diversity of marine organisms, is also an additional stress factor for the marine environment (e.g., coral reefs). In addition, the accumulation and subsequent large-scale decomposition of the Sargassum spp. biomass on the coast leads to the release of toxic gases (H₂S and NH₃), which disrupts the economic, health, and tourist life of the island and the other affected territories. These algal blooms originate from the eutrophication of the oceans, accentuated by global warming. Unfortunately, scientists predict a significant recurrence of these Sargassum strandings for years to come. It is therefore more than necessary to find solutions by putting in place a sustainable management plan for this phenomenon. Martinique, a small island in the Caribbean arc, is one of the many areas impacted by Sargassum seaweed strandings. Since 2011, there has been a constant increase in the degradation of the materials present in this region, largely due to the toxic/corrosive gases released by algae decomposition. In order to protect vulnerable structures and building materials while limiting the use of synthetic/petroleum-based molecules as much as possible, research is being conducted on molecules of natural origin. Thanks to their chemical composition, which comprises molecules with interesting properties, algae such as Sargassum could potentially help to solve many issues. Therefore, this study focuses on the green extraction and characterization of molecules from the species Sargassum fluitans and Sargassum natans present in Martinique. The secondary metabolites found in these extracts showed variability in yield rates due to local climatic conditions. The tests carried out shed light on the anticorrosive and antibacterial potential of the algae; these extracts can thus be described as natural inhibitors. The effect of variation in inhibitor concentrations was tested electrochemically using electrochemical impedance spectroscopy and polarization curves. The analysis of electrochemical results obtained by direct immersion in the extracts and with self-assembled monolayers (SAMs) for the Sargassum fluitans III, Sargassum natans I, and Sargassum natans VIII species was conclusive in acidic and alkaline environments. The excellent results obtained reveal an inhibitory efficacy of 88% at 50 mg/L for the crude extract of Sargassum fluitans III and efficacies greater than 97% for the chemical families of Sargassum fluitans III. Similarly, microbiological tests also suggest a bactericidal character. Results for the Sargassum fluitans III crude extract show a minimum inhibitory concentration (MIC) of 0.005 mg/mL on Gram-negative bacteria and a MIC greater than 0.6 mg/mL on Gram-positive bacteria. These results make it possible to address local and international issues while valorizing a biomass rich in biodegradable molecules. The next step in this study will therefore be the evaluation of the toxicity of Sargassum spp.
Keywords: Sargassum, secondary metabolites, anticorrosive, antibacterial, natural inhibitors
Procedia PDF Downloads 72
546 Computational Insights Into Allosteric Regulation of Lyn Protein Kinase: Structural Dynamics and Impacts of Cancer-Related Mutations
Authors: Mina Rabipour, Elena Pallaske, Floyd Hassenrück, Rocio Rebollido-Rios
Abstract:
Protein tyrosine kinases, including Lyn kinase of the Src family kinases (SFKs), regulate cell proliferation, survival, and differentiation. Lyn kinase has been implicated in various cancers, positioning it as a promising therapeutic target. However, the conserved ATP-binding pocket across SFKs makes developing selective inhibitors challenging. This study aims to address this limitation by exploring the potential for allosteric modulation of Lyn kinase, focusing on how its structural dynamics and specific oncogenic mutations impact its conformation and function. To achieve this, we combined homology modeling, molecular dynamics simulations, and data science techniques to conduct microsecond-length simulations. Our approach allowed a detailed investigation into the interplay between Lyn’s catalytic and regulatory domains, identifying key conformational states involved in allosteric regulation. Additionally, we evaluated the structural effects of dasatinib, a competitive inhibitor, and of ATP binding on Lyn’s active conformation. Notably, our simulations show that the cancer-related mutations I364L/N and E290D/K shift Lyn toward an inactive conformation, contrasting with the active state of the wild-type protein. This may suggest how these mutations contribute to aberrant signaling in cancer cells. We conducted a dynamical network analysis to assess residue-residue interactions and the impact of the mutations on the Lyn intramolecular network. This revealed significant disruptions due to the mutations, especially in regions distant from the ATP-binding site. These disruptions suggest potential allosteric sites as therapeutic targets, offering an alternative strategy for Lyn inhibition with higher specificity and fewer off-target effects compared to ATP-competitive inhibitors. Our findings provide insights into Lyn kinase regulation and highlight allosteric sites as avenues for selective drug development. Targeting these sites may modulate Lyn activity in cancer cells, reducing toxicity and improving outcomes. Furthermore, our computational strategy offers a scalable approach for analyzing other SFK members or kinases with similar properties, facilitating the discovery of selective allosteric modulators and contributing to precise cancer therapies.
Keywords: lyn tyrosine kinase, mutation analysis, conformational changes, dynamic network analysis, allosteric modulation, targeted inhibition
Procedia PDF Downloads 14
545 Medial Temporal Tau Predicts Memory Decline in Cognitively Unimpaired Elderly
Authors: Angela T. H. Kwan, Saman Arfaie, Joseph Therriault, Zahra Azizi, Firoza Z. Lussier, Cecile Tissot, Mira Chamoun, Gleb Bezgin, Stijn Servaes, Jenna Stevenon, Nesrine Rahmouni, Vanessa Pallen, Serge Gauthier, Pedro Rosa-Neto
Abstract:
Alzheimer’s disease (AD) can be detected in living people using in vivo biomarkers of amyloid-β (Aβ) and tau, even in the absence of cognitive impairment during the preclinical phase. [¹⁸F]-MK-6240 is a high-affinity positron emission tomography (PET) tracer that quantifies tau neurofibrillary tangles, but its ability to predict cognitive changes associated with early AD symptoms, such as memory decline, is unclear. Here, we assess the prognostic accuracy of baseline [¹⁸F]-MK-6240 tau PET for predicting longitudinal memory decline in asymptomatic elderly individuals. In a longitudinal observational study, we evaluated a cohort of cognitively normal elderly participants (n = 111) from the Translational Biomarkers in Aging and Dementia (TRIAD) study (data collected between October 2017 and July 2020, with a follow-up period of 12 months). All participants underwent tau PET with [¹⁸F]-MK-6240 and Aβ PET with [¹⁸F]-AZD-4694. The exclusion criteria included the presence of head trauma, stroke, or other neurological disorders. There were 111 eligible participants who were chosen based on the availability of Aβ PET, tau PET, magnetic resonance imaging (MRI), and APOEε4 genotyping. Among these participants, the mean (SD) age was 70.1 (8.6) years; 20 (18%) were tau PET positive, and 71 of 111 (63.9%) were women. A significant association between baseline Braak I-II [¹⁸F]-MK-6240 SUVR positivity and change in composite memory score was observed at the 12-month follow-up, after correcting for age, sex, and years of education (Logical Memory and RAVLT, standardized beta = -0.52 (-0.82 to -0.21), p < 0.001, for dichotomized tau PET, and -1.22 (-1.84 to -0.61), p < 0.0001, for continuous tau PET). Moderate cognitive decline was observed for A+T+ over the follow-up period, whereas no significant change was observed for A-T+, A+T-, and A-T-, though it should be noted that the A-T+ group was small. Our results indicate that baseline tau neurofibrillary tangle pathology is associated with longitudinal changes in memory function, supporting the use of [¹⁸F]-MK-6240 PET to predict the likelihood of asymptomatic elderly individuals experiencing future memory decline. Overall, [¹⁸F]-MK-6240 PET is a promising tool for predicting memory decline in older adults without cognitive impairment at baseline. This is of critical relevance as the field shifts towards a biological model of AD defined by the aggregation of pathologic tau. Therefore, early detection of tau pathology using [¹⁸F]-MK-6240 PET provides hope that living patients with AD may be diagnosed during the preclinical phase, before it is too late.
Keywords: alzheimer’s disease, braak I-II, in vivo biomarkers, memory, PET, tau
Procedia PDF Downloads 76
544 Delineation of Different Geological Interfaces Beneath the Bengal Basin: Spectrum Analysis and 2D Density Modeling of Gravity Data
Authors: Md. Afroz Ansari
Abstract:
The Bengal basin is a spectacular example of a peripheral foreland basin formed by the convergence of the Indian plate beneath the Eurasian and Burmese plates. The basin is bounded on three sides (north, west, and east) by different fault-controlled tectonic features and open in the south, where the rivers drain into the Bay of Bengal. The Bengal basin in the eastern part of the Indian subcontinent constitutes the largest fluvio-deltaic to shallow marine sedimentary basin in the world today. This continental basin, coupled with the offshore Bengal Fan under the Bay of Bengal, forms the biggest sediment dispersal system. The continental basin continuously receives sediments from the two major rivers Ganga and Brahmaputra (known as Jamuna in Bengal), the Meghna (emerging from the confluence of the Ganga and Brahmaputra), and a large number of small, rain-fed tributaries originating from the eastern Indian Shield. The drained sediments are ultimately delivered into the Bengal Fan. The significance of the present study is to delineate the variations in thickness of the sediments, the different crustal structures, and the mantle lithosphere throughout the onshore-offshore Bengal basin. In the present study, the different crustal/geological units and the shallower mantle lithosphere were delineated by analyzing Bouguer Gravity Anomaly (BGA) data along two long traverses: South-North (running from the Bengal Fan, cutting across the offshore-onshore transition of the Bengal basin, and intersecting the Main Frontal Thrust of the India-Himalaya collision zone in the Sikkim-Bhutan Himalaya) and West-East (running from the Peninsular Indian Shield across the Bengal basin to the Chittagong-Tripura Fold Belt). The BGA map was derived from the analysis of TOPEX data after incorporating the Bouguer correction and all terrain corrections. The anomaly map was compared with the available ground gravity data in the western Bengal basin and the Indian subcontinent for consistency of the data used. Initially, the anisotropy associated with the thicknesses of the different crustal units, the crustal interfaces, and the Moho boundary was estimated through spectral analysis of the gravity data with varying window sizes over the study area. The 2D density sections along the traverses were finalized after a number of iterations with acceptable root mean square (RMS) errors. The estimated thicknesses of the different crustal units and the dips of the Moho boundary along both profiles are consistent with earlier results. Further, the results were corroborated by examining the earthquake database and focal mechanism solutions for a better understanding of the geodynamics. The earthquake data were taken from the catalogue of the US Geological Survey, and the focal mechanism solutions were compiled from the Harvard Centroid Moment Tensor Catalogue. The concentrations of seismic events at different depth levels are not uncommon. The occurrences of earthquakes may be due to stress accumulation as a result of resistance from three sides.
Keywords: anisotropy, interfaces, seismicity, spectrum analysis
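Spectral depth estimation of the kind mentioned above commonly follows the Spector-Grant relation, in which the mean depth to a density interface is read from the slope of the radially averaged log power spectrum; the formulation below is the standard one and is given here only as context, not as the authors' exact parameterization.

```latex
% Radially averaged power spectrum over an ensemble of sources at mean
% depth h (k = angular wavenumber):
\ln P(k) \;\approx\; C - 2\,h\,k
\quad\Longrightarrow\quad
h \;=\; -\frac{1}{2}\,\frac{\Delta \ln P(k)}{\Delta k}
```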
Procedia PDF Downloads 273543 Bis-Azlactone Based Biodegradable Poly(Ester Amide)s: Design, Synthesis and Study
Authors: Kobauri Sophio, Kantaria Tengiz, Tugushi David, Puiggali Jordi, Katsarava Ramaz
Abstract:
Biodegradable biomaterials (BB) are of high interest for numerous applications in modern medicine as resorbable surgical materials and drug delivery systems. Materials of this kind can be cleared from the body after the fulfillment of their function, which excludes a surgical intervention for their removal. One of the most promising BB are the amino acid-based biodegradable poly(ester amide)s (PEAs), which are composed of naturally occurring building blocks (α-amino acids) and non-toxic building blocks such as fatty diols and dicarboxylic acids. The key bis-nucleophilic monomers for synthesizing the PEAs are diamine-diesters, di-p-toluenesulfonic acid salts of bis-(α-amino acid) alkylene diesters (TAADs), which form the PEAs after step-growth polymerization (polycondensation) with bis-electrophilic counterparts, activated diesters of dicarboxylic acids. The PEAs combine all the advantages of the 'parent polymers', polyesters (PEs) and polyamides (PAs): the ability to biodegrade (PEs), and a high affinity with tissues together with a wide range of desired mechanical properties (PAs). The scope of applications of the PEAs can be substantially expanded by their functionalization, e.g., through the incorporation of hydrophobic fragments into the polymeric backbones. Hydrophobically modified PEAs can form non-covalent adducts with various compounds, which makes them attractive as drug carriers. For the hydrophobic modification of the PEAs, we selected the so-called 'azlactone method', based on the application of p-phenylene-bis-oxazolinones (bis-azlactones, BALs) as active bis-electrophilic monomers in step-growth polymerization with TAADs. Interaction of BALs with TAADs resulted in PEAs with low molecular weights (Mw 2,800-19,600 Da) and poor material properties. High-molecular-weight PEAs (Mw up to 100,000) with desirable material properties were synthesized after replacing a part of the BALs with an activated diester, di-p-nitrophenyl sebacate, or a part of the TAAD with an alkylenediamine, 1,6-hexamethylenediamine. The new hydrophobically modified PEAs were characterized by FTIR, NMR, GPC, and DSC. It was shown that after the hydrophobic modification the PEAs retain their biodegradability (in an in vitro study catalyzed by α-chymotrypsin and lipase) and are of interest for constructing resorbable surgical and pharmaceutical devices, including drug-delivering containers such as microspheres. The new PEAs are insoluble in hydrophobic organic solvents such as chloroform or dichloromethane (they only swell), which allowed a new technology for fabricating microspheres to be elaborated. Keywords: amino acids, biodegradable polymers, bis-azlactones, microspheres
Procedia PDF Downloads 175542 Analysis of Correlation Between Manufacturing Parameters and Mechanical Strength Followed by Uncertainty Propagation of Geometric Defects in Lattice Structures
Authors: Chetra Mang, Ahmadali Tahmasebimoradi, Xavier Lorang
Abstract:
Lattice structures are widely used in various applications, especially aeronautic, aerospace, and medical applications, because of their high-performance properties. Thanks to advances in additive manufacturing technology, lattice structures can be manufactured by different methods, such as laser beam melting. However, the presence of geometric defects in the lattice structures is inevitable due to the manufacturing process, and these defects may have a high impact on the mechanical strength of the structures. This work analyzes the correlation between the manufacturing parameters and the mechanical strengths of lattice structures. To do so, two types of lattice structures, body-centered cubic with z-struts (BCCZ) structures made of Inconel 718 and body-centered cubic (BCC) structures made of Scalmalloy, are manufactured on a laser beam melting machine using a Taguchi design of experiments. Each structure is placed on the substrate with a specific position and orientation relative to the roller direction of the deposited metal powder; the position and orientation are taken as the manufacturing parameters. The geometric defects of each beam in the lattice are characterized and used to build the geometric model in order to perform simulations. The mechanical strengths are then defined by the homogenized response, namely Young's modulus and yield strength. The distribution of mechanical strengths is observed as a function of the manufacturing parameters. The mechanical response of the BCCZ structure is stretch-dominated, i.e., the mechanical strengths depend directly on the strengths of the vertical beams. As the geometric defects of the vertical beams change only slightly with their position/orientation on the manufacturing substrate, the mechanical strengths are less dispersed, and the manufacturing parameters have little influence on the mechanical strengths of the BCCZ structure. The mechanical response of the BCC structure is bending-dominated. The geometric defects of the inclined beams are highly dispersed within a structure and also depend on their position/orientation on the manufacturing substrate. For different positions/orientations on the substrate, the mechanical responses are highly dispersed as well, which shows that the mechanical strengths are directly impacted by the manufacturing parameters. In addition, this work studies the uncertainty propagation of the geometric defects to the mechanical strength of the BCC lattice structure made of Scalmalloy. To do so, we observe the distribution of the mechanical strengths of the lattice according to the distribution of the geometric defects: a probability density law is determined based on a statistical hypothesis for the geometric defects of the inclined beams, samples of inclined beams are randomly drawn from the density law to build lattice structure samples, and the lattice samples are then used in simulations to characterize the mechanical strengths. The results reveal that the distribution of mechanical strengths of structures with the same manufacturing parameters is less dispersed than that of structures with different manufacturing parameters. Nevertheless, the dispersion of mechanical strengths for structures with the same manufacturing parameters is not negligible. Keywords: geometric defects, lattice structure, mechanical strength, uncertainty propagation
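A minimal sketch of the Monte Carlo uncertainty-propagation step, assuming strut-diameter defects follow a fitted normal law and using a bending-dominated Gibson-Ashby scaling as a cheap stand-in for the finite element model; all numbers (nominal diameter, scatter, cell size, solid modulus, fit constant) are illustrative assumptions, not the measured Scalmalloy data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed defect law for the inclined struts (illustrative values):
D_NOM, D_STD = 0.50, 0.04          # nominal diameter and scatter, mm
CELL, E_SOLID, C = 2.5, 70e3, 0.5  # cell size (mm), solid modulus (MPa), fit constant
N_STRUTS, N_SAMPLES = 8, 10_000    # struts per BCC cell, Monte Carlo draws

def bcc_modulus(diams):
    """Bending-dominated Gibson-Ashby surrogate: E ~ C * Es * rho_rel**2,
    with relative density roughly proportional to (d / cell)**2 for BCC."""
    rho_rel = np.pi * np.sqrt(3.0) * (diams.mean(axis=-1) / CELL) ** 2
    return C * E_SOLID * rho_rel ** 2

draws = rng.normal(D_NOM, D_STD, size=(N_SAMPLES, N_STRUTS))  # one lattice per row
E = bcc_modulus(draws)
print(f"E_eff: mean {E.mean():.1f} MPa, std {E.std():.1f} MPa, "
      f"5-95% range [{np.percentile(E, 5):.1f}, {np.percentile(E, 95):.1f}] MPa")
```

In the study proper, each random draw would feed a full finite element simulation rather than the closed-form surrogate; the sampling and dispersion bookkeeping are the same.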
Procedia PDF Downloads 123541 Cognitive Performance and Everyday Functionality in Healthy Greek Seniors
Authors: George Pavlidis, Ana Vivas
Abstract:
The demographic shift towards an aging population has stimulated the examination of seniors’ mental health and ability to live independently. The corresponding literature depicts the relation between cognitive decline and everyday functionality with aging, focusing largely on individuals who are approaching or have crossed the threshold of various forms of neuropathology and disability. In this context, a recent meta-analysis depicts a moderate relation between cognitive performance and everyday functionality in AD sufferers. However, there has not been an analogous effort to examine this relation in the healthy spectrum of aging (i.e., in samples that are not challenged by a neurodegenerative disease). There is a consensus that assessment tools designed to detect neuropathology are distinct from those that assess cognitive performance in healthy adults; thus, their universal use in both cognitively challenged and healthy adults is not always valid. The same holds for the assessment of everyday functionality. In addition, it is argued that everyday functionality should be examined with culturally adjusted assessment tools, since many vital everyday tasks are heterotypical among distinct cultures. Therefore, this study set out to examine the relation between cognitive performance and everyday functionality (a) in the healthy spectrum of aging and (b) by adjusting the everyday functionality tools EPT and OTDL-R to the Greek cultural context. In Greece, 107 cognitively healthy seniors (Mage = 62.24) completed a battery of neuropsychological tests and everyday functionality tests, both carefully chosen to be sensitive to fluctuations of performance within the healthy spectrum of cognitive performance and everyday functionality. The everyday functionality assessment tools were modified to reflect the local cultural context (i.e., EPT-G and OTDL-G). The results showed that performance on all everyday functionality measures declines with age (r = .197 to .509). Statistically significant correlations emerged between cognitive performance and everyday functionality assessments, ranging from r = .202 to r = .510. A series of independent regression analyses including the scores of the cognitive assessments yielded statistically significant models that explained 20.9% to 32.4% (adjusted R²) of the variance in the everyday functionality indexes. All everyday functionality measures were independently predicted by the TMT B-A index, an indicator of executive function. Stepwise regression analyses showed that TMT B-A and age were statistically significant independent predictors of EPT-G and OTDL-G. It was concluded that everyday functionality declines with age and that cognitive performance and everyday functionality may be related in the healthy spectrum of aging. Age seems not to be the sole contributing factor in everyday functionality decline; executive control contributes as well. Moreover, it was concluded that the EPT-G and OTDL-G are valuable tools for assessing everyday functionality in Greek seniors who are not cognitively challenged, especially for research purposes. Future research should examine the factors contributing to better cognitive vitality, especially in executive control, as vital for the maintenance of independent living capacity with aging. Keywords: cognition, everyday functionality, aging, cognitive decline, healthy aging, Greece
Procedia PDF Downloads 523540 Mechanical Properties of Poly(Propylene)-Based Graphene Nanocomposites
Authors: Luiza Melo De Lima, Tito Trindade, Jose M. Oliveira
Abstract:
The development of thermoplastic-based graphene nanocomposites has been of great interest not only to the scientific community but also to different industrial sectors. Due to the possible improvement in performance and the reduction in weight, thermoplastic nanocomposites hold great promise as a new class of materials. These nanocomposites are of relevance for the automotive industry, namely because the CO2 emission limits imposed by European Commission (EC) regulations can be met without compromising the car’s performance, by reducing its weight. Thermoplastic polymers have some advantages over thermosetting polymers, such as higher productivity, lower density, and recyclability. In the automotive industry, for example, poly(propylene) (PP) is a common thermoplastic polymer, representing more than half of the polymeric raw material used in automotive parts. Graphene-based materials (GBM) are potential nanofillers that can improve the properties of polymer matrices at very low loading. In comparison to other composites, such as fiber-based composites, the weight reduction can positively affect their processing and future applications. However, the properties and performance of GBM/polymer nanocomposites depend on the type of GBM and polymer matrix, the degree of dispersion, and especially the type of interactions between the fillers and the polymer matrix. In order to take advantage of the superior mechanical strength of GBM, strong interfacial adhesion between the GBM and the polymer matrix is required for efficient stress transfer from the GBM to the polymer. Thus, chemical compatibilizers and physicochemical modifications have been reported as important tools during the processing of these nanocomposites. In this study, PP-based nanocomposites were obtained by a simple melt blending technique, using a Brabender-type mixer. Graphene nanoplatelets (GnPs) were applied as structural reinforcement. Two compatibilizers were used to improve the interaction between the PP matrix and the GnPs: PP grafted with maleic anhydride (PPgMA) and PPgMA modified with a tertiary amine alcohol (PPgDM). The samples for the tensile and Charpy impact tests were obtained by injection molding. The results suggested that the presence of GnPs can increase the mechanical strength of the polymer. However, the presence of GnPs can also reduce the impact resistance, making the nanocomposites more brittle than neat PP. The incorporation of the compatibilizers increases the impact resistance, suggesting that the compatibilizers enhance the adhesion between the PP and the GnPs. Compared to neat PP, the increase in Young’s modulus of the non-compatibilized nanocomposite demonstrated that GnP incorporation can improve the stiffness of the polymer. This trend can be related to the several physical crosslinking points between the PP matrix and the GnPs. Furthermore, the decrease in strain at yield of PP/GnPs, together with the enhancement of Young’s modulus, confirms that GnP incorporation led to an increase in stiffness but a decrease in toughness. Moreover, the results demonstrated that the incorporation of the compatibilizers did not affect the Young’s modulus and strain-at-yield results compared to the non-compatibilized nanocomposite. The incorporation of these compatibilizers improved the nanocomposites’ mechanical properties compared both to those of the non-compatibilized nanocomposite and to a PP sample used as a reference. Keywords: graphene nanoplatelets, mechanical properties, melt blending processing, poly(propylene)-based nanocomposites
Procedia PDF Downloads 187539 Criticality of Adiabatic Length for a Single Branch Pulsating Heat Pipe
Authors: Utsav Bhardwaj, Shyama Prasad Das
Abstract:
To meet the extensive thermal management requirements of circuit card assemblies (CCAs), satellites, PCBs, microprocessors, and other electronic circuitry, pulsating heat pipes (PHPs) have emerged in the recent past as one of the best technical solutions. However, the industrial application of PHPs remains largely unexplored due to their poor reliability. There are several system and operational parameters which not only affect the performance of an operating PHP but also decide whether the PHP can operate sustainably or not; functioning may be halted completely for particular combinations of the values of the system and operational parameters. Among the system parameters, the adiabatic length is one of the important ones. In the present work, the simplest single-branch PHP system with an adiabatic section has been considered, assumed to contain only one vapour bubble and one liquid plug. First, the system has been mathematically modeled using a film evaporation/condensation model, followed by identification of the equilibrium zone, non-dimensionalization, and linearization. Then, proceeding with a periodic solution of the linearized and reduced differential equations, a stability analysis has been performed. Slow and fast variables have been identified, and an averaging approach has been used for the slow ones. Ultimately, the temporal evolution of the PHP is predicted by numerically solving the averaged equations, to determine whether the oscillations are likely to sustain or decay in time. A stability threshold has also been determined in terms of non-dimensional numbers formed by different groupings of the system and operational parameters. A combined analytical and numerical approach has been used, and it has been found that for each combination of all other parameters, there exists a maximum length of the adiabatic section beyond which the PHP cannot function at all. This length has been called the "Critical Adiabatic Length (L_ac)". For adiabatic lengths greater than L_ac, the oscillations are found to always decay sooner or later. The dependence of L_ac on other parameters has also been examined and correlated at certain evaporator and condenser section temperatures. L_ac has been found to increase linearly with the evaporator section length (L_e), whereas the condenser section length (L_c) has almost no effect on it up to a certain limit. At considerably large condenser section lengths, however, L_ac is expected to decrease with increasing L_c due to increased wall friction. A rise in the static pressure (p_r) exerted by the working fluid reservoir makes L_ac rise exponentially, whereas L_ac increases cubically with the inner diameter (d) of the PHP. The physics behind all these variations is also given a good insight. Thus, a methodology for quantifying the critical adiabatic length for any possible set of all other PHP parameters has been established. Keywords: critical adiabatic length, evaporation/condensation, pulsating heat pipe (PHP), thermal management
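A minimal sketch of the critical-length search, assuming the averaged slow-variable equations reduce to a single amplitude growth rate sigma(L_a) that is positive for sustained oscillations and negative for decay; the growth-rate expression and every coefficient below are illustrative stand-ins consistent with the reported trends (drive grows with L_e, p_r, and d; damping grows with L_a), not the paper's model:

```python
# Toy growth rate of the oscillation amplitude from averaged equations:
# destabilizing terms grow with evaporator length, reservoir pressure and
# diameter (per the reported trends); damping grows with adiabatic length.
def growth_rate(L_a, L_e=0.05, p_r=1.0e5, d=2.0e-3):
    drive = 0.8 * L_e + 1.0e-7 * p_r + 5.0e4 * d**3
    damping = 12.0 * L_a
    return drive - damping

def critical_adiabatic_length(lo=1e-4, hi=1.0, tol=1e-8):
    """Bisection on the sign of the growth rate: sigma(L_ac) = 0."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if growth_rate(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(f"L_ac (toy model) = {critical_adiabatic_length() * 1000:.2f} mm")
```

Sweeping L_e, p_r, or d in this loop reproduces the kind of correlations reported above (linear in L_e, exponential in p_r, cubic in d would require the corresponding drive terms).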
Procedia PDF Downloads 226538 Cessna Citation X Business Aircraft Stability Analysis Using Linear Fractional Representation LFRs Model
Authors: Yamina Boughari, Ruxandra Mihaela Botez, Florian Theel, Georges Ghazi
Abstract:
Clearance of the flight control laws of a civil aircraft is a long and expensive process in the aerospace industry. Thousands of flight combinations in terms of speeds, altitudes, gross weights, centers of gravity, and angles of attack have to be investigated and proved to be safe. Nonetheless, with this method a worst-case flight condition can easily be missed, and missing it could lead to a critical situation. Analyzing a model exhaustively is impossible because of the infinite number of cases contained within its flight envelope, which would require more time and therefore more design cost. Therefore, in industry, the technique of meshing the flight envelope is commonly used: for each point of the flight envelope, the simulation of the associated model checks whether the specifications are satisfied. In order to perform fast, comprehensive, and effective analysis, varying-parameter models were developed by incorporating variations, or uncertainties, into the nominal models; these are known as Linear Fractional Representation (LFR) models, and they are able to describe the aircraft dynamics by taking uncertainties over the flight envelope into account. In this paper, the LFR models are developed using the speeds and altitudes as varying parameters and were built from several flight conditions expressed in terms of speeds and altitudes. The use of such a method has gained great interest from aeronautical companies, which see a promising future in this kind of modeling, particularly in the design and certification of control laws. This research paper focuses on the Cessna Citation X open-loop stability analysis. The data are provided by a Research Aircraft Flight Simulator of Level D, which corresponds to the highest flight dynamics certification level; this simulator was developed by CAE Inc., and its development was based on the research requirements of the LARCASE laboratory. The acquired data were used to develop a linear model of the airplane in its longitudinal and lateral motions and, further, to create the LFR models for 12 XCG/weight conditions, and thus for the whole flight envelope, using a friendly Graphical User Interface developed during this study. The LFR models are then analyzed using an interval analysis method based upon a Lyapunov function, as well as the 'stability and robustness analysis' toolbox. The results are presented in the form of graphs; they thus offer good readability and are easily exploitable. The weakness of this method lies in a relatively long computation time, about four hours for the entire flight envelope. Keywords: flight control clearance, LFR, stability analysis, robustness analysis
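In the same spirit as the envelope-meshing baseline described above, a minimal sketch of a gridded open-loop stability check over the (speed, altitude) envelope; the short-period model and every aerodynamic coefficient below are illustrative placeholders, not Cessna Citation X data or the paper's LFR/Lyapunov machinery:

```python
import numpy as np

# Illustrative longitudinal short-period model A(V, h); every coefficient is
# a placeholder, not Cessna Citation X data.
def short_period_A(V, h):
    rho = 1.225 * np.exp(-h / 8500.0)     # crude air-density model, kg/m^3
    z_alpha = -0.004 * rho * V            # assumed aerodynamic derivatives
    m_alpha = -0.020 * rho * V
    m_q = -0.9 - 1.0e-4 * rho * V
    return np.array([[z_alpha, 1.0],
                     [m_alpha * V, m_q]])

speeds = np.linspace(120.0, 260.0, 29)    # m/s
alts = np.linspace(0.0, 12000.0, 25)      # m
worst = max(((V, h, np.linalg.eigvals(short_period_A(V, h)).real.max())
             for V in speeds for h in alts), key=lambda t: t[2])
print(f"worst max Re(eig) = {worst[2]:.4f} at V = {worst[0]:.0f} m/s, h = {worst[1]:.0f} m")
print("open-loop stable over the grid" if worst[2] < 0.0 else "instability detected")
```

The LFR approach replaces this pointwise scan with a single parameter-dependent model whose stability is certified over the whole continuous envelope, which is precisely why it avoids missing a worst case between grid points.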
Procedia PDF Downloads 352537 Mechanism Design and Dynamic Analysis of Active Independent Front Steering System
Authors: Cheng-Chi Yu, Yu-Shiue Wang, Kei-Lin Kuo
Abstract:
The Active Independent Front Steering (AIFS) system is a steering system that can adjust the relation between the steering angles of the inner and outer wheels according to the vehicle's driving situation. In low-speed cornering, AIFS sets the steering angles of the inner and outer wheels to Ackerman steering geometry to give the vehicle a smaller cornering radius. In addition, AIFS changes the steering geometry to parallel or even anti-Ackerman steering geometry to preserve vehicle stability in high-speed cornering. Therefore, based on an analysis of vehicle steering behavior under different steering geometries, this study develops a new screw-type active independent front steering system to give vehicles the best cornering performance at any speed. The screw-type active independent front steering system retains the pinion and separates the rack into a main rack and a second rack, connected by a screw. Additional screw rotation, powered by an auxiliary motor through a coupler, makes the second rack move relative to the main rack, which can adjust both the steering ratio and the steering geometry. First of all, this study characterizes the steering geometry using the Ackerman percentage and utilizes ADAMS/Car software to construct diverse steering geometry models. The different steering geometries are compared in low-speed and high-speed cornering, and control strategies for the active independent front steering system are then formulated. Secondly, this study applies closed-loop equations to analyze the tire steering angles and carries out optimization calculations to bring the steering geometry of the traditional rack-and-pinion steering system close to Ackerman steering geometry. The steering characteristics of the optimum steering mechanism and the motion characteristics of a vehicle fitted with it are verified by ADAMS/Car models of the front suspension and the full vehicle, respectively. By adding a dual auxiliary rack and dual motors to the optimum steering mechanism, the active independent front steering system is developed to achieve the functions of variable steering ratio and variable steering geometry. Finally, this study uses ADAMS/Car and Matlab/Simulink to co-simulate the cornering motion of vehicles, confirming that a vehicle fitted with the Active Independent Front Steering (AIFS) system has better handling performance than one with an Active Front Steering (AFS) system or an Electric Power Steering (EPS) system. In low-speed cornering, vehicles with the AIFS and AFS systems have better maneuverability and a smaller cornering radius than the traditional vehicle with the EPS system, because the AIFS and AFS systems both provide a variable steering ratio, though with a slight penalty in motor power consumption. In addition, because of its capability of variable steering geometry, the vehicle with the AIFS system has better high-speed cornering stability and trajectory keeping, and even lower motor power consumption, than those with the EPS and AFS systems. Keywords: active front steering system, active independent front steering system, steering geometry, steering ratio
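A minimal sketch of the Ackerman relations the study builds on, assuming the usual single-track turn-center construction; the wheelbase and track values are illustrative, not the study vehicle's:

```python
import numpy as np

# Ideal Ackerman relation for wheelbase L and track T (illustrative values):
#   cot(delta_outer) = cot(delta_inner) + T / L
L, T = 2.85, 1.60  # metres (assumed, not the study vehicle)

def outer_angle_ackerman(delta_inner_deg):
    """Outer-wheel angle putting both front wheels on circles sharing one
    turn centre (100% Ackerman)."""
    di = np.radians(delta_inner_deg)
    return np.degrees(np.arctan(1.0 / (1.0 / np.tan(di) + T / L)))

def ackerman_percentage(delta_inner_deg, delta_outer_actual_deg):
    """0% = parallel steer, 100% = full Ackerman, negative = anti-Ackerman."""
    ideal = delta_inner_deg - outer_angle_ackerman(delta_inner_deg)
    return 100.0 * (delta_inner_deg - delta_outer_actual_deg) / ideal

for di in (10.0, 20.0, 30.0):
    print(f"inner {di:4.1f} deg -> ideal outer {outer_angle_ackerman(di):5.2f} deg")
print(f"parallel steer at 20 deg is {ackerman_percentage(20.0, 20.0):.0f}% Ackerman")
```

The AIFS hardware effectively moves the operating point along this percentage scale: towards 100% at low speed for a tight turning circle, and towards 0% or below at high speed for stability.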
Procedia PDF Downloads 189536 Data Refinement Enhances The Accuracy of Short-Term Traffic Latency Prediction
Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong
Abstract:
Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of the short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of the median latency is much more accurate and meaningful than the prediction of the average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost, which yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance scores in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than of the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than the one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system. Keywords: data refinement, machine learning, mutual information, short-term latency prediction
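A minimal sketch of the prediction-and-ranking workflow, on synthetic stand-in features (the study's real inputs would be the lagged median latencies, per-segment accumulations, fluxes, and entrance/exit rates described above); the xgboost and scikit-learn packages are assumed to be installed:

```python
import numpy as np
import xgboost as xgb
from sklearn.feature_selection import mutual_info_regression

# Synthetic stand-in features mimicking the described structure.
rng = np.random.default_rng(1)
n = 5000
lag15 = rng.gamma(4.0, 2.0, n)                       # median latency 15 min ago
accum = rng.normal(size=n)                           # total accumulation
exit_rate = rng.normal(size=n)                       # weakly informative input
y = lag15 + 0.5 * accum + 0.3 * rng.normal(size=n)   # latency to predict

X = np.column_stack([lag15, accum, exit_rate])
train = rng.random(n) < 0.8
model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X[train], y[train])

mse_model = np.mean((model.predict(X[~train]) - y[~train]) ** 2)
mse_base = np.mean((lag15[~train] - y[~train]) ** 2)  # "15 minutes ago" baseline
print(f"baseline MSE {mse_base:.3f} vs XGBoost MSE {mse_model:.3f}")
print("mutual information (lag, accumulation, exit rate):",
      np.round(mutual_info_regression(X[train], y[train]), 3))
```

Both the improvement over the lagged-latency baseline and the mutual-information ranking of inputs mirror findings (2) and (3) above.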
Procedia PDF Downloads 169535 Using the Micro Computed Tomography to Study the Corrosion Behavior of Magnesium Alloy at Different pH Values
Authors: Chia-Jung Chang, Sheng-Che Chen, Ming-Long Yeh, Chih-Wei Wang, Chih-Han Chang
Abstract:
Introduction and Motivation: In recent years, magnesium alloys have been used as medical biodegradable materials. Magnesium is an essential element in the body and is efficiently excreted by the kidneys, and the mechanical properties of magnesium alloys are among the closest to those of human bone. However, in some cases a magnesium alloy corrodes so quickly that it releases hydrogen at the implant surface. The other corrosion product is the hydroxide ion, which can significantly increase the local pH value. These situations may have adverse effects on local cell functions. Moreover, magnesium alloys currently corrode too fast to maintain the function of the implant until the tissue has healed. Therefore, much recent research on magnesium alloys has focused on controlling the corrosion rate. The in vitro corrosion behavior of magnesium alloys is affected by many factors, and the pH value is one of them. In this study, we examine the influence of the pH value on the corrosion behavior of a magnesium alloy by micro-CT (micro computed tomography) and other instruments. Material and methods: In the first step, we make guiding plates for specimens of magnesium alloy AZ91 by rapid prototyping. The guiding plates serve as a reference for the degradation of the specimens, allowing us to establish the position of the specimens in the CT image; they also simplify the degradation conditions. In the next step, we prepare solutions with different pH values and immerse the specimens in them to start the corrosion test. The CT images, surface photographs, and weight are recorded every twelve hours. Results: The preliminary results confirm that CT images can be used to quantify the corrosion behavior of the magnesium alloy. Moreover, we observe that corrosion always starts from erosion points, possibly originating at defects such as dislocations and voids with high strain energy in the material. We will process the raw data into mass loss (ML) and corrosion rate from the CT images, surface photographs, and weight in the near future. As a simple prediction, the pH value and the degradation rate will be negatively correlated, and we aim to find the equation relating the pH value and the corrosion rate. We have also run a simple test to simulate the change of the pH value in a local region; in this test, the pH value rises to 10 in a short time. Conclusion: As a biodegradable implant in an area of the human body with stagnating body fluid flow, a magnesium alloy can increase local pH values and release hydrogen, which may damage human cells. The purpose of this study is to find the equation relating the pH value and the corrosion rate; after that, we will try to find ways to overcome the limitations of medical magnesium alloys. Keywords: magnesium alloy, biodegradable materials, corrosion, micro-CT
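A minimal sketch of the mass-loss-to-corrosion-rate conversion that the study's raw weight data would feed, using the standard ASTM G31-type relation CR = K·W/(A·t·ρ); the specimen numbers in the example are illustrative assumptions, not measured values:

```python
# Standard ASTM G31-type conversion from mass loss to corrosion rate:
#   CR [mm/year] = K * W / (A * t * rho)
# with W in g, A in cm^2, t in h, rho in g/cm^3; specimen numbers are made up.
K = 8.76e4
RHO_AZ91 = 1.81  # approximate density of AZ91, g/cm^3

def corrosion_rate_mm_per_year(mass_loss_g, area_cm2, hours):
    return K * mass_loss_g / (area_cm2 * hours * RHO_AZ91)

# Example: 12 mg lost over 48 h from a 6 cm^2 specimen
print(f"{corrosion_rate_mm_per_year(0.012, 6.0, 48.0):.3f} mm/year")
```

The same rate can be cross-checked against the volume loss segmented from successive micro-CT scans, which is what makes CT a quantitative complement to weighing.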
Procedia PDF Downloads 457534 Association of Copy Number Variation of the CHKB, KLF6, GPC1, and CHRM3 Genes with Growth Traits of Datong Yak (Bos grunniens)
Authors: Habtamu Abera Goshu, Ping Yan
Abstract:
Copy number variation (CNV) is a significant marker of the genetic and phenotypic diversity among individuals and accounts for complex quantitative traits and diseases via the modulation of gene dosage, position effects, alteration of downstream pathways, modification of chromosome structure and position within the nucleus, and disruption of coding regions in the genome. Associating copy number variations (CNVs) with growth and gene expression is a powerful approach for identifying genomic characteristics that contribute to phenotypic and genotypic variation. A previous study using next-generation sequencing illustrated that the choline kinase beta (CHKB), Krüppel-like factor 6 (KLF6), glypican 1 (GPC1), and cholinergic receptor muscarinic 3 (CHRM3) genes reside within copy number variable regions (CNVRs) of yak populations that overlap with quantitative trait loci (QTLs) for meat quality and growth. As a result, this research aimed to determine the association of CNVs of the KLF6, CHKB, GPC1, and CHRM3 genes with growth traits in the Datong yak breed. The association between the CNV types of the KLF6, CHKB, GPC1, and CHRM3 genes and the growth traits in the Datong yak breed was determined by one-way analysis of variance (ANOVA) using SPSS software. The CNV types were classified as loss (a copy number of 0 or 1), gain (a copy number >2), and normal (a copy number of 2) relative to the reference gene, BTF3, in 387 individuals of the Datong yak breed. The results indicated that the normal CNV types of the CHKB and GPC1 genes were significantly (P<0.05) associated with high body length, height, weight, and chest girth in six-month-old and five-year-old Datong yaks. On the other hand, the loss CNV type of the KLF6 gene was significantly (P<0.05) associated with body weight, body length, and chest girth in six-month-old and five-year-old Datong yaks. On the contrary, the gain CNV type of the CHRM3 gene was highly significantly (P<0.05) associated with body weight, length, height, and chest girth in six-month-old and five-year-old yaks. This work provides the first observation of the biological role of CNVs of the CHKB, KLF6, GPC1, and CHRM3 genes in the Datong yak breed and might therefore provide a novel opportunity to utilize CNV data in designing molecular markers for selection in breeding programs for larger populations of various yak breeds. We therefore propose that this study provides comprehensive information on the application of CNVs of the CHKB, KLF6, GPC1, and CHRM3 genes to growth traits in Datong yaks and their possible function in bovine species. Keywords: copy number variation, growth traits, yak, genes
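A minimal sketch of the one-way ANOVA comparing a growth trait across the three CNV types; the group means, scatter, and sizes below are hypothetical, not the Datong yak data:

```python
import numpy as np
from scipy import stats

# Hypothetical six-month body weights (kg) grouped by CNV type of one gene;
# the group means and sizes are illustrative, not the Datong yak data.
rng = np.random.default_rng(2)
loss = rng.normal(92.0, 8.0, 40)      # copy number 0 or 1
normal = rng.normal(98.0, 8.0, 60)    # copy number 2
gain = rng.normal(103.0, 8.0, 35)     # copy number > 2

f_stat, p_value = stats.f_oneway(loss, normal, gain)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.2e}")
# p < 0.05 indicates mean body weight differs among the loss/normal/gain types.
```

Repeating the test per gene and per trait, with a suitable multiple-comparison correction, reproduces the structure of the association analysis described above.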
Procedia PDF Downloads 172533 The Growth Role of Natural Gas Consumption for Developing Countries
Authors: Tae Young Jin, Jin Soo Kim
Abstract:
Carbon emissions have emerged as a global concern. The Intergovernmental Panel on Climate Change (IPCC) publishes reports on greenhouse gas (GHG) emissions regularly, and the United Nations Framework Convention on Climate Change (UNFCCC) has held a conference yearly since 1995. In particular, COP21, held in December 2015, produced the Paris Agreement, which, unlike former COP outcomes, has strong binding force. The Paris Agreement was ratified as of 4 November 2016, so it is now legally binding. Participating countries have set up their own Intended Nationally Determined Contributions (INDCs) and will try to achieve them. Thus, carbon emissions must be reduced. The energy sector is among those most responsible for carbon emissions, and fossil fuels particularly so. Thus, this paper attempts to examine the relationship between natural gas consumption and economic growth. To achieve this, we adopted a Cobb-Douglas production function that consists of natural gas consumption, economic growth, capital, and labor, using a cross-sectionally dependent panel analysis. The data were preprocessed with Principal Component Analysis (PCA) to remove the cross-sectional dependency that can disturb panel results. After confirming the existence of a time-trended component in each variable, we moved to a cointegration test accounting for cross-sectional dependency and structural breaks, to describe more realistically the behavior of volatile international indicators. The cointegration test result indicates that there is a long-run equilibrium relationship between the selected variables. The long-run cointegrating vector and Granger causality test results show that while natural gas consumption can contribute to economic growth in the short run, it affects growth adversely in the long run. From these results, we draw the following policy implications. First, since natural gas has a positive economic effect only in the short run, policy makers in developing countries must consider gradually switching the major energy source from natural gas to a sustainable energy source. Second, the technology transfer and financing arrangements suggested by the COP must be accelerated. Acknowledgement: This work was supported by the Energy Efficiency & Resources Core Technology Program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), granted financial resources from the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20152510101880), and by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-205S1A3A2046684). Keywords: developing countries, economic growth, natural gas consumption, panel data analysis
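A minimal sketch of the production-function estimation step, on synthetic data: the Cobb-Douglas form Y = A·K^α·L^β·G^γ is log-linearized and fitted by OLS, with natural gas G entering as an input. The elasticities below are made-up values, and this pooled OLS stands in for the study's full panel cointegration machinery:

```python
import numpy as np
import statsmodels.api as sm

# Log-linearized Cobb-Douglas with natural gas as an input:
#   ln Y = c + alpha*ln K + beta*ln L + gamma*ln G + e
# Synthetic data; the elasticities alpha/beta/gamma are made up.
rng = np.random.default_rng(3)
n = 400
lnK, lnL, lnG = (rng.normal(size=n) for _ in range(3))
lnY = 0.4 * lnK + 0.5 * lnL + 0.1 * lnG + 0.05 * rng.normal(size=n)

X = sm.add_constant(np.column_stack([lnK, lnL, lnG]))
fit = sm.OLS(lnY, X).fit()
print("estimated [c, alpha, beta, gamma]:", np.round(fit.params, 3))
```

In the study proper, the series are first cleaned of cross-sectional dependence (via PCA) and tested for cointegration before the long-run vector is interpreted; a raw OLS on non-stationary panels would otherwise risk spurious regression.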
Procedia PDF Downloads 234532 Finite Element Analysis of Hollow Structural Shape (HSS) Steel Brace with Infill Reinforcement under Cyclic Loading
Authors: Chui-Hsin Chen, Yu-Ting Chen
Abstract:
Special concentrically braced frames (SCBFs) are one of the seismic load-resisting systems; they dissipate seismic energy when bracing members within the frames undergo yielding and buckling while sustaining their axial tension and compression load capacities. Most of the inelastic deformation of a buckling bracing member concentrates in the mid-length region. While the brace experiences cyclic loading, this region dissipates most of the seismic energy input into the frame. Such a concentration makes the braces vulnerable to failure modes associated with low-cycle fatigue. In this research, a strategy to improve the cyclic behavior of the conventional steel bracing member is proposed: filling the Hollow Structural Shape (HSS) member with infill reinforcement. The infill prevents the local section from concentrating large plastic deformation caused by cyclic loading; it helps spread the plastic hinge region over a wider area and hence postpones the initiation of local buckling, or even the rupture, of the braces. The finite element method is used to simulate the complicated bracing member behavior and the member-versus-infill interaction under cyclic loading. Fifteen models based on 3-D elements are built with the ABAQUS software. The FEM model is verified against cyclic test data for unreinforced (UR) HSS bracing members and bending test data for aluminum honeycomb plates. The numerical models include UR and filled HSS bracing members with various compactness ratios based on the specifications of AISC-2016 and AISC-1989. The primary variables investigated are the relative bending stiffness and the material of the infill reinforcement. The distributions of von Mises stress and equivalent plastic strain (PEEQ) are used as indices to identify the strengths and shortcomings of each model. The results indicate that changing the relative bending stiffness of the infill is much more influential than changing its material in increasing the energy dissipation capacity. Strengthening the relative bending stiffness of the reinforcement yields additional energy dissipation capacity of 24% and 46% in the models based on AISC-2016 (16-series) and AISC-1989 (89-series), respectively. HSS members with infill show growth in η_LocalBuckling, the normalized energy accumulated until the onset of local buckling, compared to UR bracing members. The 89-series infill-reinforced members have 117% to 166% more energy dissipation capacity than the unreinforced 16-series members. The flexural rigidity of the infill should be less than 29% and 13% of that of the member section itself for the 16-series and 89-series bracing members, respectively, thereby guaranteeing that the plastic hinge spreads out and occurs within the reinforced section. If the parameters are properly configured, the ductility, energy dissipation capacity, and fatigue life of HSS SCBF bracing members can be improved prominently by the infill reinforcement method. Keywords: special concentrically braced frames, HSS, cyclic loading, infill reinforcement, finite element analysis, PEEQ
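A minimal sketch of the energy-dissipation bookkeeping behind an index like η_LocalBuckling, computed as the running integral of force over displacement (the hysteresis loop area); the phase-shifted sinusoids stand in for brace test or FEA output and are purely illustrative:

```python
import numpy as np

# Synthetic cyclic history: the phase shift between force and displacement
# creates hysteresis loops whose enclosed area is the dissipated energy.
t = np.linspace(0.0, 6.0 * np.pi, 4000)          # three full cycles
delta = 10.0 * np.sin(t)                          # axial displacement, mm
force = 200.0 * np.sin(t + 0.6)                   # axial force, kN (leads delta)

# Running integral of F d(delta) via the trapezoidal rule:
increments = 0.5 * (force[1:] + force[:-1]) * np.diff(delta)
dissipated = np.concatenate([[0.0], np.cumsum(increments)])  # kN*mm

per_cycle = np.pi * 10.0 * 200.0 * np.sin(0.6)    # analytic loop area, for checking
print(f"total dissipated ~ {dissipated[-1]:.0f} kN*mm "
      f"(analytic: {3 * per_cycle:.0f} kN*mm)")
# Normalizing the cumulative energy at the onset of local buckling by a
# reference value gives a comparison index between reinforced and UR members.
```

The same running integral, evaluated on the FEA load-displacement histories up to the step where local buckling initiates, is what allows the 24%, 46%, and 117-166% capacity comparisons quoted above.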
Procedia PDF Downloads 93