Search results for: product design and development
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 27933

1413 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence

Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang

Abstract:

Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales have been challenging tasks. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have experienced rapid development, leading to significant improvements in computational speed. However, ensuring stable and accurate long-term predictions remains a challenging task for these methods. In this study, we introduce the implicit U-net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term predictions of the nonlinear dynamics in three-dimensional (3D) turbulence. The IU-FNO model combines implicit recurrent Fourier layers to deepen the network and incorporates the U-Net architecture to accurately capture small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including vanilla FNO, implicit FNO (IFNO), and U-net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, probability density functions (PDFs) of vorticity and velocity increments, and instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models.
In addition to its superior performance, the IU-FNO model offers faster computational speed compared to traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.
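As a rough illustration of the spectral-convolution idea behind FNO-type models (not the authors' implementation, which is 3D with learned, per-mode complex weights), a single Fourier layer and its implicit/recurrent stacking can be sketched in 1D with NumPy. The function names, the tanh activation, and the residual connection are illustrative assumptions:

```python
import numpy as np

def fourier_layer(u, weights, n_modes):
    """One simplified 1D spectral-convolution (Fourier) layer.

    u       : (n,) real field sampled on a uniform grid
    weights : complex spectral weights (trainable in a real FNO)
    n_modes : number of low-frequency modes kept (spectral truncation)
    """
    u_hat = np.fft.rfft(u)                          # to Fourier space
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights[:n_modes]
    return np.fft.irfft(out_hat, n=u.size)          # back to physical space

def implicit_fourier_layers(u, weights, n_modes, n_iter):
    """Implicit/recurrent stacking: the SAME spectral weights are reused
    n_iter times with a residual connection, as in IFNO-style models."""
    for _ in range(n_iter):
        u = u + np.tanh(fourier_layer(u, weights, n_modes))
    return u
```

The recurrent reuse of one set of spectral weights is what lets the "implicit" variants deepen the network without growing the parameter count, which is one reason such models can stay stable over long rollouts.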

Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics

Procedia PDF Downloads 74
1412 Efficiency of Maritime Simulator Training in Oil Spill Response Competence Development

Authors: Antti Lanki, Justiina Halonen, Juuso Punnonen, Emmi Rantavuo

Abstract:

Marine oil spill response operations require extensive vessel maneuvering and navigation skills. At-sea oil containment and recovery include both single-vessel and multi-vessel operations. Towing long oil containment booms, several hundred meters in length, is a challenge in itself. Boom deployment and towing in multi-vessel configurations is an added challenge that requires precise coordination and control of the vessels. Efficient communication, as a prerequisite for shared situational awareness, is needed in order to execute the response task effectively. To gain and maintain adequate maritime skills, practical training is needed. Field exercises are the most effective way of learning, but the related vessel operations in particular are resource-intensive and costly. Field exercises may also be affected by environmental limitations such as high sea-state or other adverse weather conditions. In Finland, the seasonal ice coverage also limits the training period to the summer season. In addition, the environmental sensitivity of the sea area restricts the use of real oil or other target substances. This paper examines whether maritime simulator training can offer a complementary method to overcome the training challenges related to field exercises. The objective is to assess the efficiency and the learning impact of simulator training, and the specific skills that can be trained most effectively in simulators. This paper provides an overview of learning results from two oil spill response pilot courses, in which maritime navigational bridge simulators were used to train the oil spill response authorities. The simulators were equipped with an oil spill functionality module. The courses were targeted at coastal Fire and Rescue Services responsible for near-shore oil spill response in Finland. The competence levels of the participants were surveyed before and after the course in order to measure potential shifts in competencies due to the simulator training.
In addition to the quantitative analysis, the efficiency of the simulator training is evaluated qualitatively through feedback from the participants. The results indicate that simulator training is a valid and effective method for developing marine oil spill response competencies and complements traditional field exercises. Simulator training provides a safe environment for assessing various oil containment and recovery tactics. One of the main benefits of the simulator training was found to be the immediate feedback the spill modelling software provides on the oil spill behaviour in reaction to response measures.

Keywords: maritime training, oil spill response, simulation, vessel manoeuvring

Procedia PDF Downloads 172
1411 The Saudi Arabia 2030 Strategy: Translation Reception and Translator Readiness

Authors: Budur Alsulami

Abstract:

One aim of the recently implemented Saudi Arabia Vision 2030 strategy is to strengthen education, entertainment, and tourism in order to attract international visitors to the country. To promote and increase the tourism sector, tourism translation can serve the tourism industry by translating various materials that promote the country’s tourism, such as brochures, catalogues, and websites. In order to achieve the goal of enhancing tourism in Saudi Arabia, promotional texts related to tourism and Saudi culture will need to be translated into English and addressed to non-Arabic-speaking potential tourists. This research aims to measure student readiness to be professional translators who can introduce and promote Saudi Arabia to non-Arabic-speaking tourists. The study will also evaluate students' abilities to promote and convey Saudi culture to non-Arabic tourists by translating tourism texts. Translating tourism materials demands considerable effort and specific translation skills to capture tourists' interest and encourage visits. Numerous scholars have explored challenges in translating tourism promotional materials, focusing on translation methods, cultural issues, course design, and necessary knowledge for tourism translation. Based on these insights, experts recommend that translators prioritize audience expectations, cultural appropriateness, and linguistic conventions while revising course syllabi to include practical skills. This research aims to assess students' readiness to become professional translators aligned with Vision 2030 tourism goals. To accomplish this, in the first stage of the project, twenty students from two Saudi Arabian universities who have completed at least two years of Translation Studies were invited to translate two tourism texts of 300 words each. These texts contained information about famous tourist sights and traditional food in Saudi Arabia, including cultural terms and heritage information.
The students then completed a questionnaire about the challenges of the text and the process of their translation, and then participated in a semi-structured interview. In the second stage of the project, the students’ translations will be evaluated by a qualified National Accreditation Authority of Translators and Interpreters (NAATI) examiner applying the NAATI rubrics. Finally, these translations will be read and assessed by fifteen to twenty native and near-native readers of English, who will evaluate the quality of the translations based on their understanding and perception of these texts. Results analysed to date suggest that a number of student translators faced challenges such as choosing a suitable translation method, omitting some key terms or words during the translation process, and managing their time, all of which may indicate a lack of practice in translating texts of this nature and lack of awareness regarding translation strategies most suitable for the genre.

Keywords: Saudi Arabia Vision 2030, translation, tourism, reader reception, culture, heritage, translator training/competencies

Procedia PDF Downloads 8
1410 Psychosocial Experiences of Black Male Students in Public and Social Spaces on and around a Historically White South African Campus

Authors: Claudia P. Saunderson

Abstract:

The global widening of participation in higher education has increased the diversity of student populations. However, widening participation is more than mere access. Central to the debate about widening participation are social justice issues of authentic inclusion and appropriate support for success for all students in higher education (HE). Given the recent global campaign for 'Black Lives Matter' as well as the worldwide advocacy for justice in the George Floyd case, the importance of the experiences of Black men was again poignantly foregrounded. The literature abounds with the negative experiences of Black male students in higher education. Much of this literature emanates from the Global North, with little systematic research on Black male students' university experiences originating from the Global South. This research, therefore, explores the psychosocial experiences of Black male students at a historically white South African university. Not only is these students' educational and academic adjustment important, but so is their psychosocial adjustment to the institution. The psychosocial adjustment might include emotional well-being, motivation, as well as the student’s perception of how well he fits in or is made to feel welcome at the institution. The study draws on strands of critical race theory (CRT), co-cultural theory (CCT), as well as defining properties of micro-aggression theory (MAT). In the study, CRT therefore served as an overarching theory at the macro level, commenting on structural dynamics, while MAT and CCT focused on the impact of structural arrangements, such as racialization, at the individual, micro level. These theories furthermore provided a coherent analytic framework for this study. Using a case study design, this qualitative study, employing focus groups and individual interviews, drew on the psychosocial experiences of twenty Black male students to explore how they navigate this specific historically white campus.
The data were analyzed using thematic analysis, which provided a systematic procedure for generating codes and themes from the qualitative data. The study found that the race- and gender-based micro-aggressions experienced by students included negative stereotyping, criminalization, and racial profiling, and that these experiences impeded participants' ability to thrive at the institution. However, participants also shared positive perspectives about the institution. Some of the positive traits of the institution that the participants mentioned were a well-aligned administration, good quality of education, as well as various funding opportunities. This study implies that if an HE institution values transformation, it must explore and interrogate potentially subtle aspects hidden in the institutional culture and environment that might serve as barriers to the transformation process. This positioning is based on a social justice stance and holds that all students are equal and have the right to racially and culturally equitable and appropriate education and support.

Keywords: critical race theory, higher education transformation, micro-aggression, student experience

Procedia PDF Downloads 138
1409 Rohingya Problem and the Impending Crisis: Outcome of Deliberate Denial of Citizenship Status and Prejudiced Refugee Laws in South East Asia

Authors: Priyal Sepaha

Abstract:

A refugee crisis is manifested by challenges, both for the refugees and the asylum-giving state. The situation turns into a mega-crisis when it is compounded by prejudicial handling by the home state, inappropriate refugee laws, an exploding refugee population, and, above all, no hope of any foreseeable solution or remedy. This paper studies the impact on the capability of stateless Rohingyas to migrate and seek refuge due to the enforcement of rigid criteria of movement imposed both by Myanmar and by the adjoining countries in the name of national security. This theoretical study identifies the issues and the key factors and players which have precipitated the crisis. It further discusses the possible ramifications in the home, asylum-giving, and adjoining countries of not discharging their roles aptly. Additionally, an attempt has been made to understand the scarce response given to the impending crisis by regional organizations like SAARC, ASEAN, and CHOGM as well as international organizations like the United Nations Human Rights Council, the Security Council, the Office of the High Commissioner for Refugees, and so on, in the name of inadequacy of monetary funds and physical resources. Based on the refugee laws and practices pertaining to the case of the Rohingyas, this paper argues that the Rohingya crisis is in dire need of an effective action plan to curb and resolve the biggest humanitarian crisis of the century. This mounting human tragedy can be mitigated permanently by strengthening existing and creating new interdependencies among all stakeholders, as further ignorance can drive the countries of the Indian subcontinent in particular, and South East Asia at large, into a violent civil war over the long-awaited civil rights of the marginalized Rohingyas. To curb this mass crisis, it will require the application of coercive pressure and diplomatic persuasion on the home country to acknowledge the rights of its fleeing citizens.
This further necessitates mustering adequate monetary funds and physical resources for the asylum-providing state. Additional challenges remain, such as devising mechanisms for the refugees’ safe return and comprehensive planning for their holistic economic development and rehabilitation. These, however, can only come into effect with a conscious effort by the regional and international community to fulfil their assigned roles.

Keywords: asylum, citizenship, crisis, humanitarian, human rights, refugee, rohingya

Procedia PDF Downloads 133
1408 Performance Evaluation of Various Displaced Left Turn Intersection Designs

Authors: Hatem Abou-Senna, Essam Radwan

Abstract:

With increasing traffic and limited resources, accommodating left-turning traffic has been a challenge for traffic engineers as they seek balance between intersection capacity and safety; these are two conflicting goals in the operation of a signalized intersection that are mitigated through signal phasing techniques. Hence, to increase the left-turn capacity and reduce the delay at the intersections, the Florida Department of Transportation (FDOT) moves forward with a vision of optimizing intersection control using innovative intersection designs through the Transportation Systems Management & Operations (TSM&O) program. These alternative designs successfully eliminate the left-turn phase, which otherwise reduces the conventional intersection’s (CI) efficiency considerably, and divide the intersection into smaller networks that would operate in a one-way fashion. This study focused on the Crossover Displaced Left-turn intersections (XDL), also known as Continuous Flow Intersections (CFI). The XDL concept is best suited for intersections with moderate to high overall traffic volumes, especially those with very high or unbalanced left turn volumes. There is little guidance on determining whether partial XDL intersections are adequate to mitigate the overall intersection condition or full XDL is always required. The primary objective of this paper was to evaluate the overall intersection performance in the case of different partial XDL designs compared to a full XDL. The XDL alternative was investigated for 4 different scenarios; partial XDL on the east-west approaches, partial XDL on the north-south approaches, partial XDL on the north and east approaches and full XDL on all 4 approaches. Also, the impact of increasing volume on the intersection performance was considered by modeling the unbalanced volumes with 10% increment resulting in 5 different traffic scenarios. 
The study intersection, located in Orlando, Florida, is experiencing recurring congestion in the PM peak hour and is operating near capacity, with a volume-to-capacity ratio close to 1.00, due to the presence of two heavy conflicting movements: southbound and westbound. The results showed that a partial EN XDL alternative proved to be effective and compared favorably to a full XDL alternative, followed by the partial EW XDL alternative. The analysis also showed that the full, EW, and EN XDL alternatives outperformed the NS XDL and the CI alternatives with respect to throughput, delay, and queue lengths. Throughput improvements were most pronounced at the higher volume level, with a 25% increase in capacity. The percent reduction in delay for the critical movements in the XDL scenarios compared to the CI scenario ranged from 30-45%. Similarly, queue lengths in the XDL scenarios showed percent reductions ranging from 25-40%. The analysis revealed how a partial XDL design can improve the overall intersection performance at various demand levels, reduce the costs associated with a full XDL, and outperform the conventional intersection. However, a partial XDL serving low volumes, or serving only one of the critical movements while other critical movements operate near or above capacity, does not provide significant benefits compared to the conventional intersection.

Keywords: continuous flow intersections, crossover displaced left-turn, microscopic traffic simulation, transportation system management and operations, VISSIM simulation model

Procedia PDF Downloads 310
1407 Modeling Standpipe Pressure Using Multivariable Regression Analysis by Combining Drilling Parameters and a Herschel-Bulkley Model

Authors: Seydou Sinde

Abstract:

The aim of this paper is to formulate mathematical expressions that can be used to estimate the standpipe pressure (SPP). The developed formulas take into account the main factors that, directly or indirectly, affect the behavior of SPP values. Fluid rheology and well hydraulics are some of these essential factors. Mud plastic viscosity, yield point, flow power, consistency index, flow rate, drillstring, and annular geometries are represented by the frictional pressure (Pf), which is one of the input independent parameters and is calculated, in this paper, using the Herschel-Bulkley rheological model. Other input independent parameters include the rate of penetration (ROP), applied load or weight on the bit (WOB), bit revolutions per minute (RPM), bit torque (TRQ), and hole inclination and direction coupled in the hole curvature or dogleg (DL). The repeating-variables technique and the Buckingham Pi theorem are used to reduce the number of input independent parameters to the dimensionless revolutions per minute (RPMd), the dimensionless torque (TRQd), and the dogleg, which is already in the dimensionless form of radians. Multivariable linear and polynomial regression, using PTC Mathcad Prime 4.0, is applied to determine the relationships between the dependent parameter, SPP, and the remaining three dimensionless groups. Three models proved sufficiently satisfactory to estimate the standpipe pressure: multivariable linear regression model 1, containing three regression coefficients, for vertical wells; multivariable linear regression model 2, containing four regression coefficients, for deviated wells; and a multivariable polynomial quadratic regression model, containing six regression coefficients, for both vertical and deviated wells.
Although linear regression model 2 (with four coefficients) is more complex than linear regression model 1 (with three coefficients) and contains an additional term, it did not add significant improvement over the latter except for some minor values. Thus, the effect of the hole curvature, or dogleg, is insignificant and can be omitted from the input independent parameters without significant loss of accuracy. The polynomial quadratic regression model is considered the most accurate model due to its relatively higher accuracy in most cases. Data from nine wells in the Middle East were used to run the developed models, with satisfactory results provided by all of them, although the multivariable polynomial quadratic regression model gave the best and most accurate results. These models are useful not only for monitoring and predicting SPP values with accuracy but also for checking the integrity of the well hydraulics early and taking corrective actions should any unexpected problems appear, such as pipe washouts, jet plugging, excessive mud losses, fluid gains, kicks, etc.
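For illustration only, the two building blocks described above, a Herschel-Bulkley stress law and a multivariable linear fit of SPP against dimensionless groups, might be sketched as follows. The function names and the synthetic coefficients are assumptions, not the paper's Mathcad models:

```python
import numpy as np

def herschel_bulkley_stress(shear_rate, tau_y, K, n):
    """Herschel-Bulkley shear stress: tau = tau_y + K * shear_rate**n,
    where tau_y is the yield point, K the consistency index, and n the
    flow index."""
    return tau_y + K * np.power(shear_rate, n)

def fit_spp_linear(X, spp):
    """Ordinary least-squares fit of SPP against dimensionless groups,
    e.g. SPP = b0 + b1*RPMd + b2*TRQd (+ b3*DL). Returns [b0, b1, ...]."""
    A = np.column_stack([np.ones(len(spp)), X])   # prepend intercept column
    coeffs, *_ = np.linalg.lstsq(A, spp, rcond=None)
    return coeffs
```

Adding a squared and cross-term column to `A` would turn the same least-squares call into the quadratic polynomial model the abstract describes.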

Keywords: standpipe, pressure, hydraulics, nondimensionalization, parameters, regression

Procedia PDF Downloads 84
1406 CertifHy: Developing a European Framework for the Generation of Guarantees of Origin for Green Hydrogen

Authors: Frederic Barth, Wouter Vanhoudt, Marc Londo, Jaap C. Jansen, Karine Veum, Javier Castro, Klaus Nürnberger, Matthias Altmann

Abstract:

Hydrogen is expected to play a key role in the transition towards a low-carbon economy, especially within the transport sector, the energy sector and the (petro)chemical industry sector. However, the production and use of hydrogen only make sense if the production and transportation are carried out with minimal impact on natural resources, and if greenhouse gas emissions are reduced in comparison to conventional hydrogen or conventional fuels. The CertifHy project, supported by a wide range of key European industry leaders (gas companies, chemical industry, energy utilities, green hydrogen technology developers and automobile manufacturers, as well as other leading industrial players) therefore aims to: 1. Define a widely acceptable definition of green hydrogen. 2. Determine how a robust Guarantee of Origin (GoO) scheme for green hydrogen should be designed and implemented throughout the EU. It is divided into the following work packages (WPs). 1. Generic market outlook for green hydrogen: Evidence of existing industrial markets and the potential development of new energy related markets for green hydrogen in the EU, overview of the segments and their future trends, drivers and market outlook (WP1). 2. Definition of “green” hydrogen: step-by-step consultation approach leading to a consensus on the definition of green hydrogen within the EU (WP2). 3. Review of existing platforms and interactions between existing GoO and green hydrogen: Lessons learnt and mapping of interactions (WP3). 4. Definition of a framework of guarantees of origin for “green” hydrogen: Technical specifications, rules and obligations for the GoO, impact analysis (WP4). 5. Roadmap for the implementation of an EU-wide GoO scheme for green hydrogen: the project implementation plan will be presented to the FCH JU and the European Commission as the key outcome of the project and shared with stakeholders before finalisation (WP5 and 6). 
Definition of Green Hydrogen: CertifHy Green hydrogen is hydrogen from renewable sources that is also CertifHy Low-GHG-emissions hydrogen. Hydrogen from renewable sources is hydrogen belonging to the share of production equal to the share of renewable energy sources (as defined in the EU RES directive) in energy consumption for hydrogen production, excluding ancillary functions. CertifHy Low-GHG hydrogen is hydrogen with emissions lower than the defined CertifHy Low-GHG-emissions threshold, i.e. 36.4 gCO2eq/MJ, produced in a plant where the average emissions intensity of the non-CertifHy Low-GHG hydrogen production (based on an LCA approach), since sign-up or in the past 12 months, does not exceed the emissions intensity of the benchmark process (SMR of natural gas), i.e. 91.0 gCO2eq/MJ.
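The two numerical criteria in this definition can be captured in a short, illustrative check. The function name and input structure are assumptions; the thresholds are those stated above:

```python
# Thresholds stated in the CertifHy definition (gCO2eq/MJ)
LOW_GHG_THRESHOLD = 36.4   # CertifHy Low-GHG-emissions threshold
BENCHMARK_SMR = 91.0       # benchmark process: SMR of natural gas

def certifhy_low_ghg(batch_intensity, plant_avg_non_low_ghg):
    """Hypothetical check: a hydrogen batch qualifies as CertifHy
    Low-GHG if its own emissions intensity is below the threshold AND
    the plant's average non-Low-GHG production intensity does not
    exceed the SMR benchmark."""
    return (batch_intensity < LOW_GHG_THRESHOLD
            and plant_avg_non_low_ghg <= BENCHMARK_SMR)
```

A real Guarantee-of-Origin scheme would of course also track the renewable share and the LCA system boundaries; this snippet only encodes the two headline numbers.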

Keywords: green hydrogen, cross-cutting, guarantee of origin, certificate, DG energy, bankability

Procedia PDF Downloads 493
1405 Subjective Temporal Resources: On the Relationship Between Time Perspective and Chronic Time Pressure to Burnout

Authors: Diamant Irene, Dar Tamar

Abstract:

Burnout, conceptualized within the framework of stress research, is to a large extent a result of a threat to time resources or a feeling of time shortage. In reaction to numerous tasks, deadlines, high output demands, and the management of different duties encompassing work-home conflicts, many individuals experience ‘time pressure’. Time pressure is characterized as the perception of a lack of available time in relation to the amount of workload. It can be a result of local objective constraints, but it can also be a chronic attribute in coping with life. As such, time pressure is associated in the literature with the general stress experience and can therefore be a direct, contributory burnout factor. The present study examines the relation of chronic time pressure – the feeling of time shortage and of being rushed – to another central aspect of subjective temporal experience: time perspective. Time perspective is a stable personal disposition, capturing the extent to which people subjectively remember the past, live the present, and/or anticipate the future. Based on Hobfoll’s Conservation of Resources Theory, it was hypothesized that individuals with chronic time pressure would experience a permanent threat to their time resources, resulting in relatively increased burnout. In addition, it was hypothesized that different time perspective profiles, based on Zimbardo’s typology of five dimensions – Past Positive, Past Negative, Present Hedonistic, Present Fatalistic, and Future – would be related to different magnitudes of chronic time pressure and of burnout. We expected that individuals with ‘Past Negative’ or ‘Present Fatalistic’ time perspectives would experience more burnout, with chronic time pressure being a moderator variable. Conversely, individuals with a ‘Present Hedonistic’ perspective, with little concern for the future consequences of actions, would experience less chronic time pressure and less burnout.
Another angle of temporal experience examined in this study is the difference between the actual distribution of time (as in a typical day) and the desired distribution of time (how time would optimally be distributed during a day). It was hypothesized that there would be a positive correlation between the size of this gap and both chronic time pressure and burnout. Data was collected through an online self-reporting survey distributed on social networks, with 240 participants (aged 21-65) recruited through convenience and snowball sampling methods from various organizational sectors. The results of the present study support the hypotheses and constitute a basis for future debate regarding the elements of burnout in the modern work environment, with an emphasis on subjective temporal experience. Our findings point to the importance of chronic and stable temporal experiences, such as time pressure and time perspective, in occupational experience. The findings are also discussed with a view to the development of practical methods of burnout prevention.

Keywords: conservation of resources, burnout, time pressure, time perspective

Procedia PDF Downloads 176
1404 Attention Treatment for People With Aphasia: Language-Specific vs. Domain-General Neurofeedback

Authors: Yael Neumann

Abstract:

Attention deficits are common in people with aphasia (PWA). Two treatment approaches address these deficits: domain-general methods like Play Attention, which focus on cognitive functioning, and domain-specific methods like Language-Specific Attention Treatment (L-SAT), which use linguistically based tasks. Research indicates that L-SAT can improve both attentional deficits and functional language skills, while Play Attention has shown success in enhancing attentional capabilities among school-aged children with attention issues compared to standard cognitive training. This study employed a randomized controlled cross-over single-subject design to evaluate the effectiveness of these two attention treatments over 25 weeks. Four PWA participated, undergoing a battery of eight standardized tests measuring language and cognitive skills. The treatments were counterbalanced. Play Attention used EEG sensors to detect brainwaves, enabling participants to manipulate items in a computer game while learning to suppress theta activity and increase beta activity. An algorithm tracked changes in the theta-to-beta ratio, allowing points to be earned during the games. L-SAT, on the other hand, involved hierarchical language tasks that increased in complexity, requiring greater attention from participants. Results showed that for language tests, Participant 1 (moderate aphasia) aligned with existing literature, showing L-SAT was more effective than Play Attention. However, Participants 2 (very severe) and 3 and 4 (mild) did not conform to this pattern; both treatments yielded similar outcomes. This may be due to the extremes of aphasia severity: the very severe participant faced significant overall deficits, making both approaches equally challenging, while the mild participant performed well initially, leaving limited room for improvement. In attention tests, Participants 1 and 4 exhibited results consistent with prior research, indicating Play Attention was superior to L-SAT. 
Participant 2, however, showed no significant improvement with either program, although L-SAT had a slight edge on the Visual Elevator task, measuring switching and mental flexibility. This advantage was not sustained at the one-month follow-up, likely due to the participant’s struggles with complex attention tasks. Participant 3's results similarly did not align with prior studies, revealing no difference between the two treatments, possibly due to the challenging nature of the attention measures used. Regarding participation and ecological tests, all participants showed similar mild improvements with both treatments. This limited progress could stem from the short study duration, with only five weeks allocated for each treatment, which may not have been enough time to achieve meaningful changes affecting life participation. In conclusion, the performance of participants appeared influenced by their level of aphasia severity. The moderate PWA’s results were most aligned with existing literature, indicating better attention improvement from the domain-general approach (Play Attention) and better language improvement from the domain-specific approach (L-SAT).
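The neurofeedback protocol described above rewards suppressing theta activity while increasing beta activity, tracked as a theta-to-beta power ratio. A minimal sketch of how such a ratio could be computed from an EEG segment, using a plain periodogram, is shown below; the band limits and function names are common conventions used here as assumptions, and commercial systems like Play Attention use their own proprietary processing:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power in the [f_lo, f_hi) Hz band from a plain periodogram."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    band = (freqs >= f_lo) & (freqs < f_hi)
    return psd[band].sum()

def theta_beta_ratio(signal, fs):
    """Theta (4-8 Hz) to beta (13-30 Hz) power ratio; training of the
    kind described aims to drive this ratio down over time."""
    return band_power(signal, fs, 4, 8) / band_power(signal, fs, 13, 30)
```

In a feedback loop, this ratio would be recomputed on short sliding windows and mapped to in-game control, so the participant earns points as the ratio falls.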

Keywords: attention, language, cognitive rehabilitation, neurofeedback

Procedia PDF Downloads 17
1403 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method

Authors: Dangut Maren David, Skaf Zakwan

Abstract:

Adequate monitoring of vehicle components in order to obtain high uptime is the goal of predictive maintenance. The major challenge faced by businesses in industry is the significant cost associated with delays in service delivery due to system downtime. Most of these businesses are interested in predicting such problems and proactively preventing them before they occur, which is the core advantage of Prognostic Health Management (PHM) applications. The recent emergence of Industry 4.0, or the industrial internet of things (IIoT), has led to the need to monitor system activities and enhance system-to-system or component-to-component interactions; this has resulted in the generation of large volumes of data known as big data. Analysis of big data is increasingly important; however, due to complexity inherent in the dataset, such as imbalanced classification problems, it becomes extremely difficult to build a model with high precision. Data-driven predictive modeling for condition-based maintenance (CBM) has recently drawn growing research interest from both academia and industry. The large volumes of data generated by industrial processes inherently come with different degrees of complexity, which poses a challenge for analytics. Thus, the imbalanced classification problem exists pervasively in industrial datasets and can affect the performance of learning algorithms, yielding poor classifier accuracy in model development. Misclassification of faults can result in unplanned breakdowns, leading to economic loss. 
In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and a prognostic model for predicting aircraft component replacement in advance is developed by exploring aircraft historical data. The approach is based on a hybrid ensemble method that improves the prediction of the minority class during learning; we also investigate the impact of our approach on the multiclass imbalance problem. We validate the feasibility and effectiveness of our approach using real-world aircraft operation and maintenance datasets that span over 7 years. Our approach shows better performance compared to other similar approaches. We also validate the strength of our approach for handling multiclass imbalanced datasets; those results again show good performance compared to other baseline classifiers.
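The abstract names two ingredients of such hybrid methods: resampling the minority class and ensemble voting. The paper's actual algorithm is not disclosed, so the sketch below is only a toy illustration of those two ingredients; the one-feature "stump" classifier and all data are invented for the example.

```python
import numpy as np

def random_oversample(X, y, rng):
    """Duplicate minority-class rows until classes are balanced —
    the simplest resampling step used inside hybrid ensembles."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = []
    for c, n in zip(classes, counts):
        rows = np.flatnonzero(y == c)
        extra = rng.choice(rows, size=n_max - n, replace=True)
        idx.append(np.concatenate([rows, extra]))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

def bagged_stumps(X, y, n_estimators, rng):
    """Toy ensemble: each member is a one-feature threshold rule fit on
    a balanced bootstrap sample; prediction is a majority vote."""
    stumps = []
    for _ in range(n_estimators):
        Xb, yb = random_oversample(X, y, rng)
        boot = rng.integers(0, len(yb), len(yb))
        Xb, yb = Xb[boot], yb[boot]
        f = rng.integers(0, X.shape[1])
        thr = Xb[:, f].mean()
        # Orient the stump so it agrees with the bootstrap labels.
        pred = (Xb[:, f] > thr).astype(int)
        flip = (pred != yb).mean() > 0.5
        stumps.append((f, thr, flip))
    def predict(Xq):
        votes = np.zeros((len(Xq), len(stumps)))
        for j, (f, thr, flip) in enumerate(stumps):
            p = (Xq[:, f] > thr).astype(int)
            votes[:, j] = 1 - p if flip else p
        return (votes.mean(axis=1) >= 0.5).astype(int)
    return predict

# Imbalanced toy data: 95% "healthy" (class 0), 5% "replace" (class 1).
rng = np.random.default_rng(42)
X0 = rng.normal(0.0, 1.0, size=(190, 3))
X1 = rng.normal(2.5, 1.0, size=(10, 3))
X = np.vstack([X0, X1])
y = np.array([0] * 190 + [1] * 10)

Xb, yb = random_oversample(X, y, rng)
model = bagged_stumps(X, y, n_estimators=25, rng=rng)
recall = (model(X1) == 1).mean()  # minority-class recall
```

Without the resampling step, a classifier trained on these data could reach 95% accuracy by always predicting "healthy", which is exactly the failure mode the paper targets.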

Keywords: prognostics, data-driven, imbalance classification, deep learning

Procedia PDF Downloads 174
1402 Motivation and Multiglossia: Exploring the Diversity of Interests, Attitudes, and Engagement of Arabic Learners

Authors: Anna-Maria Ramezanzadeh

Abstract:

Demand for the Arabic language is growing worldwide, driven by increased interest in the multifarious purposes the language serves, both for the population of heritage learners and those studying Arabic as a foreign language. The diglossic, or indeed multiglossic, nature of the language as used in Arabic-speaking communities, however, is seldom represented in the content of classroom courses. This disjoint between the nature of provision and students’ expectations can severely impact their engagement with course material and their motivation to either commence or continue learning the language. The nature of motivation and its relationship to multiglossia is sparsely explored in the current literature on Arabic. The theoretical framework proposed here aims to address this gap by presenting a model and instruments for the measurement of Arabic learners’ motivation in relation to the multiple strands of the language. It adopts and develops the Second Language Motivation Self-System model (L2MSS), originally proposed by Zoltan Dörnyei, which measures motivation as the desire to reduce the discrepancy between learners’ current and future self-concepts in terms of the second language (L2). The tripartite structure incorporates measures of the Current L2 Self, the Future L2 Self (consisting of an Ideal L2 Self and an Ought-To Self), and the L2 Learning Experience. The strength of the self-concepts is measured across three different domains of Arabic: Classical, Modern Standard, and Colloquial. The focus on learners’ self-concepts allows for an exploration of the effect of multiple factors on motivation towards Arabic, including religion. The relationship between Islam and Arabic is often given as a prominent reason behind some students’ desire to learn the language. Exactly how and why this factor features in learners’ L2 self-concepts has not yet been explored. Specifically designed surveys and interview protocols are proposed to facilitate the exploration of these constructs. 
The L2 Learning Experience component of the model is operationalized as learners’ task-based engagement. Engagement is conceptualised as multi-dimensional and malleable. In this model, situation-specific measures of the cognitive, behavioural, and affective components of engagement are collected via specially designed repeated post-task self-report surveys on Personal Digital Assistants over multiple Arabic lessons. Tasks are categorised according to language learning skill. Given the domain-specific uses of the different varieties of Arabic, the relationship between learners’ engagement with different types of tasks and their overall motivational profiles will be examined to determine the extent of the interaction between the two constructs. A framework for this data analysis is proposed and hypotheses are discussed. The unique combination of situation-specific measures of engagement and a person-oriented approach to measuring motivation allows for a macro- and micro-analysis of the interaction between learners and the Arabic learning process. By combining cross-sectional and longitudinal elements with a mixed-methods design, the model proposed offers the potential for capturing a comprehensive and detailed picture of the motivation and engagement of Arabic learners. The application of this framework offers a number of potential pedagogical and research implications, which will also be discussed.

Keywords: Arabic, diglossia, engagement, motivation, multiglossia, sociolinguistics

Procedia PDF Downloads 166
1401 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Currently, efficient hardware alternatives are being used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except in the force reconstruction process, a stage in which they have been less applied. This work presents a hardware implementation of a model-driven approach reported in the literature for the contact force reconstruction of flat, rigid tactile sensor arrays from normal stress data. Based on the analysis of a software implementation of that model, this implementation proposes the parallelization of tasks that facilitate the execution of matrix operations and of a two-dimensional optimization function that yields a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate techniques for algorithm parallelization, using as a guide the rules of generalization, efficiency, and scalability in the tactile decoding process, and considering low latency, low power consumption, and real-time execution as the main design parameters. 
The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on an MPSoC XCZU9EG-2FFVB1156 platform from Xilinx® that allows the reconstruction of force vectors following a scalable approach, from the information captured by tactile sensor arrays composed of up to 48×48 taxels that use various transduction technologies. The proposed implementation demonstrates a reduction of the estimation time to 1/180 of that of software implementations. Despite the relatively high values of the estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces makes it possible to adequately reconstruct the tactile properties of the touched object, and these are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced further, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
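As a rough illustration of the kind of dense linear-algebra kernel such an implementation parallelizes across taxels, the sketch below recovers a force vector from noisy readings by least squares. The calibration matrix, array size, and noise level are assumptions for the example, not the authors' model.

```python
import numpy as np

# Toy model: stacked taxel readings s relate linearly to the unknown
# force vector f through a calibration matrix A (s = A @ f + noise).
rng = np.random.default_rng(7)
n_taxels = 100                             # a 10x10 array, flattened
A = rng.normal(size=(n_taxels, 3))         # maps (fx, fy, fz) to readings
f_true = np.array([0.3, -0.1, 1.2])        # tangential x/y plus normal z
s = A @ f_true + 0.01 * rng.normal(size=n_taxels)

# Least-squares estimate: the matrix-operation core that an FPGA
# implementation can evaluate in parallel for every taxel.
f_hat, *_ = np.linalg.lstsq(A, s, rcond=None)
print(f_hat.round(2))
```

In hardware, the normal-equation products and the per-taxel solves are independent, which is what makes the scalable, low-latency mapping to an FPGA fabric attractive.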

Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation

Procedia PDF Downloads 195
1400 Testing of Canadian Integrated Healthcare and Social Services Initiatives with an Evidence-Based Case Definition for Healthcare and Social Services Integrations

Authors: S. Cheng, C. Catallo

Abstract:

Introduction: Canada's healthcare and social services systems are failing high-risk, vulnerable older adults. Care for vulnerable older Canadians (65 and older) is not optimal: it does not address their care needs using a holistic approach. Given the growing aging population, and given that the care needs of seniors with complex conditions are among the highest in Canada's health care system, there is a sense of urgency to optimize care. Integration of health and social services is an emerging trend in Canada when compared to European countries. There is no common and universal understanding of healthcare and social services integration within the country; consequently, a clear understanding and definition of integrated health and social services are absent in Canada. Objectives: A study was undertaken to develop a case definition for integrated health and social care initiatives that serve older adults, which was then tested against three Canadian integrated initiatives. Methodology: A limited literature review, comprising both scientific and grey literature, was undertaken to identify common characteristics of integrated health and social care initiatives that serve older adults, in order to develop a case definition. Three Canadian integrated initiatives, all located in the province of Ontario, were identified using an online search and a screening process. They were surveyed to determine whether the literature-based integration definition applied to them. Results: The literature showed that there were 24 common healthcare and social services integration characteristics that could be categorized into ten themes: 1) patient-care approach; 2) program goals; 3) measurement; 4) service and care quality; 5) accountability and responsibility; 6) information sharing; 7) decision-making and problem-solving; 8) culture; 9) leadership; and 10) staff and professional interaction. 
The three initiatives showed agreement on all the integration characteristics except those associated with healthcare and social care professional interaction, collaborative leadership, and shared culture. This disagreement may be due to several reasons, including the existing governance divide between the healthcare and social services sectors within the province of Ontario, which has created a ripple effect in how professionals in the two different sectors interact. In addition, the three initiatives may be at different levels of integration maturity, which may explain the disagreement on the characteristics associated with leadership and culture. Conclusions: The development of a case definition for healthcare and social services integration that incorporates common integration characteristics can act as a useful instrument for identifying integrated healthcare and social services, particularly given the emerging and evolving state of this phenomenon within Canada.

Keywords: Canada, case definition, healthcare and social services integration, integration, seniors health, services delivery

Procedia PDF Downloads 155
1399 Green Organic Chemistry, a New Paradigm in Pharmaceutical Sciences

Authors: Pesaru Vigneshwar Reddy, Parvathaneni Pavan

Abstract:

Green organic chemistry, one of the most researched topics nowadays, has been in demand since the 1990s. Organic chemicals are important starting materials for a great number of major chemical industries: the production of organic chemicals as raw materials or reagents for other applications is a major manufacturing sector, covering polymers, pharmaceuticals, pesticides, paints, artificial fibers, food additives, etc. Organic synthesis on a large scale, compared to the laboratory scale, involves the use of energy, basic chemical ingredients from the petrochemical sector, and catalysts, and, after the end of the reaction, separation, purification, storage, packing, distribution, etc. These processes raise many health and safety problems for workers, in addition to the environmental problems caused by the use of chemicals and their deposition as waste. Green chemistry, with its 12 principles, calls for changes in the conventional ways that were used for decades to make synthetic organic chemicals, including the use of less toxic starting materials. Green chemistry aims to increase the efficiency of synthetic methods, use less toxic solvents, reduce the number of stages in synthetic routes, and minimize waste as far as practically possible. In this way, organic synthesis becomes part of the effort toward sustainable development. Green chemistry also promotes research into alternatives and innovations on many practical aspects of organic synthesis in university and institutional research laboratories. By changing the methodologies of organic synthesis, health and safety will be advanced at the small-scale laboratory level and will also be extended to industrial large-scale production processes through new techniques. 
Three key developments in green chemistry include the use of supercritical carbon dioxide as a green solvent, aqueous hydrogen peroxide as an oxidising agent, and the use of hydrogen in asymmetric synthesis. Green chemistry also focuses on replacing traditional heating with modern methods such as microwave heating, so that the carbon footprint is reduced as far as possible. Another benefit of green chemistry is that it reduces environmental pollution through the use of less toxic reagents, the minimization of waste, and more biodegradable byproducts. In this paper, some of the basic principles, approaches, and early achievements of green chemistry, as a branch of chemistry that studies how chemical reactions proceed, are considered, together with a summary of the green chemistry principles. The E-factor, the old and new syntheses of ibuprofen, microwave techniques, and some recent advancements are also discussed.
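The E-factor mentioned above is simply the total mass of waste generated per mass of product, so a lower value means a cleaner process. A small illustration with placeholder masses (not actual plant data for the ibuprofen routes):

```python
# E-factor = total mass of waste / mass of product.
# All masses below are illustrative placeholders.
def e_factor(mass_inputs_kg, mass_product_kg):
    """Everything fed in that does not end up in the product is waste."""
    waste = mass_inputs_kg - mass_product_kg
    return waste / mass_product_kg

old_route = e_factor(mass_inputs_kg=25.0, mass_product_kg=5.0)  # 4 kg waste/kg
new_route = e_factor(mass_inputs_kg=8.0, mass_product_kg=5.0)   # 0.6 kg waste/kg
print(old_route, new_route)
```

Comparing E-factors of an old and a redesigned route is exactly how the greening of a synthesis such as ibuprofen's is quantified.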

Keywords: energy, e-factor, carbon foot print, micro-wave, sono-chemistry, advancement

Procedia PDF Downloads 306
1398 Microwave Dielectric Constant Measurements of Titanium Dioxide Using Five Mixture Equations

Authors: Jyh Sheen, Yong-Lin Wang

Abstract:

This research is dedicated to finding a different procedure for measuring the microwave dielectric properties of ceramic materials with high dielectric constants. For a composite of ceramic dispersed in a polymer matrix, the dielectric constants of composites with different concentrations can be obtained by various mixture equations. A further development of the mixture rules is to calculate the permittivity of the ceramic from measurements on the composite. To this end, the analysis method and theoretical accuracy of six basic mixture laws, derived from three basic particle shapes of ceramic fillers, have been reported for ceramic dielectric constants below 40 at microwave frequencies. Similar research has been done for other well-known mixture rules, showing that both good matching of the theoretical curve with experimental results and low potential theoretical error are important for good calculation accuracy. Recently, a modified mixture equation for high-dielectric-constant ceramics at microwave frequencies has also been presented for strontium titanate (SrTiO3); it was selected from five well-known mixing rules and has shown good accuracy for high-dielectric-constant measurements. However, the accuracy of this modified equation for other high-dielectric-constant materials is still unclear. Therefore, the five well-known mixing rules are selected again to understand their applicability to other high-dielectric-constant ceramics. Another high-dielectric-constant ceramic, TiO2, with a dielectric constant of 100, was chosen for this research, and the corresponding theoretical error equations are derived. In addition to the theoretical work, experimental measurements are always required. Titanium dioxide is an interesting ceramic for microwave applications. In this research, titanium dioxide powder is adopted as the filler material and polyethylene powder as the matrix material. 
The dielectric constants of ceramic-polyethylene composites with various compositions were measured at 10 GHz. The theoretical curves of the five published mixture equations are shown together with the measured results to assess the curve matching of each rule. Finally, based on the experimental observations and theoretical analysis, one of the five rules was selected and modified into a new powder mixture equation. This modified rule has shown very good curve matching with the measurement data and low theoretical error. We can then calculate the dielectric constant of the pure filler medium (titanium dioxide) with these mixing equations from the measured dielectric constants of the composites. The accuracy of estimating the dielectric constant of the pure ceramic by the various mixture rules is compared. The modified mixture rule has also shown good measurement accuracy for the dielectric constant of titanium dioxide ceramic. This study can be applied to microwave dielectric property measurements of other high-dielectric-constant ceramic materials in the future.
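The abstract does not state which of the five rules was selected, but Lichtenecker's logarithmic law is a classical example of such a mixture equation, and it can be inverted to estimate the pure-filler permittivity from a composite measurement, which is the direction the paper describes. The volume fraction and nominal permittivities below are assumptions for the sketch.

```python
import math

# Lichtenecker's logarithmic mixture rule:
#   ln(e_eff) = v_f * ln(e_filler) + (1 - v_f) * ln(e_matrix)
def lichtenecker(e_filler, e_matrix, v_filler):
    return math.exp(v_filler * math.log(e_filler)
                    + (1.0 - v_filler) * math.log(e_matrix))

def invert_lichtenecker(e_eff, e_matrix, v_filler):
    """Estimate the pure-filler permittivity from a measured composite
    value — the 'calculate the ceramic from the composite' direction."""
    return math.exp((math.log(e_eff)
                     - (1.0 - v_filler) * math.log(e_matrix)) / v_filler)

e_tio2, e_pe = 100.0, 2.25   # nominal TiO2 and polyethylene values
v = 0.3                      # assumed 30 vol% filler loading
e_mix = lichtenecker(e_tio2, e_pe, v)
print(round(e_mix, 2), round(invert_lichtenecker(e_mix, e_pe, v), 1))
```

In practice, the measured composite permittivities at several volume fractions are fed through the inverted rule, and the scatter of the recovered filler values indicates how well the chosen rule fits the material system.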

Keywords: microwave measurement, dielectric constant, mixture rules, composites

Procedia PDF Downloads 367
1397 A Lung Cancer Patient Grief Counseling Nursing Experience

Authors: Syue-Wen Lin

Abstract:

Objective: This article explores the nursing experience of a 64-year-old female lung cancer patient who underwent a thoracoscopic left lower lobectomy and treatment. The patient had a history of diabetes. The nursing process included cancer treatment, postoperative pain management, wound care and healing, and family grief counseling. Methods: The nursing period was from March 11 to March 15, 2024. During this time, strict aseptic wound dressing procedures and advanced wound care techniques were employed to promote wound healing and prevent infection. Postoperatively, due to the development of aspiration pneumonia and worsening symptoms, re-intubation was necessary. Given the patient's advanced cancer and deteriorating condition, the nursing team provided comprehensive grief counseling and care tailored to both the patient's physical and psychological needs and the emotional needs of the family. Considering the complexity of the patient's condition, including advanced cancer, palliative care was also integrated into the overall nursing process to alleviate discomfort and provide psychological support. Results: Gordon's Functional Health Patterns were used for assessment, including evaluation of the patient's medical history, physical assessment, and interviews, to provide individualized nursing care; it is important to collect data that helps in understanding the patient's physical, psychological, social, and spiritual dimensions. The interprofessional critical care team collaborated with the hospice team to understand the psychological state of the patient's family and develop a comprehensive approach to care. Family meetings were convened, and support was provided to the patient during the final stages of her life. Additionally, the combination of cancer care, pain management, wound care, and palliative care ensured comprehensive support for the patient throughout her recovery, thereby improving her quality of life. 
Conclusion: Lung cancer and aspiration pneumonia present significant challenges to patients, and the nursing team not only provides critical care but also addresses individual patient needs through cancer care, pain management, wound care, and palliative care interventions. These measures have effectively improved the quality of life of patients, provided compassionate palliative care to terminally ill patients, and allowed them to spend the last mile of their lives with their families. Nursing staff work closely with families to develop comprehensive care plans to ensure patients receive high-quality medical care as well as psychological support and a comfortable recovery environment.

Keywords: grief counseling, lung cancer, palliative care, nursing experience

Procedia PDF Downloads 26
1396 Adverse Childhood Experiences and the Sense of Effectiveness and Coping with Emotions among Adolescents Taking Drugs

Authors: Monika Szpringer, Aneta Pawlinska

Abstract:

Adverse childhood experiences are linked to various types of health and adaptation problems at different stages of life. They include various types of abuse, neglect, and a dysfunctional environment, and they have an unfavorable impact on the development of children and their future functioning in society. Adolescents who were exposed to bad treatment may suffer from health problems during adulthood, such as chronic diseases, psychological disorders, drug addiction, and suicide attempts. Objective: The aim of the project is to assess the relationship between adverse childhood experiences and the sense of efficacy and coping with emotions among teenagers aged 16-18 taking drugs. Material and Methods: The research was carried out from March to December 2018 in the Mazowieckie, Świętokrzyskie, Łódzkie, and Lubelskie Voivodeships. The group consisted of 600 people aged 16-18 (M=16.58; SD=0.78): men (63.2%) aged 16-18 (M=16.60; SD=0.78) and women (35.5%) aged 16-18 (M=16.55; SD=0.79). Participants included residents of Youth Educational Centers and Youth Sociotherapy Centers. Each participant filled in the authors' questionnaire, the Adverse Childhood Experiences Questionnaire, the Courtauld Emotional Control Scale (CECS), and the Generalized Self-Efficacy Scale (GSES). Results and conclusions: The most common adverse experiences, according to the teenagers, were family abuse, divorce/separation/death of a parent, overuse of alcohol or drugs by a household member, and emotional neglect. Adolescents who suffered five to twelve adverse experiences had a higher level of control of depression. Adverse childhood experiences matter for the level of control of anger and depression among teenagers taking drugs. For control of anger, the most important factor is emotional neglect: a higher level of emotional neglect is linked to a lower ability to control anger. For control of depression, the most important factors are physical abuse and emotional neglect. 
The greater the physical abuse during childhood and the more frequent the emotional neglect, the greater the control of depression. The sense of efficacy in the group of people who suffered one to four adverse experiences is close to that of people who suffered five to twelve adverse experiences. The most important factor lowering the sense of one's efficacy was the intensity of sexual abuse. It was confirmed that the intensity and frequency of adverse childhood experiences were higher among women than men. Women were also characterized by lower control of anger and greater control of depression. The authors’ own analyses confirmed the relationship between adverse childhood experiences and the sense of efficacy and coping with emotions among teenagers aged 16-18 taking drugs.

Keywords: adolescences, adverse childhood experiences, coping with emotions, drugs

Procedia PDF Downloads 102
1395 Understanding the Role of Nitric Oxide Synthase 1 in Low-Density Lipoprotein Uptake by Macrophages and Implication in Atherosclerosis Progression

Authors: Anjali Roy, Mirza S. Baig

Abstract:

Atherosclerosis is a chronic inflammatory disease characterized by the formation of lipid-rich plaque enriched with a necrotic core, modified lipid accumulation, smooth muscle cells, endothelial cells, leucocytes, and macrophages. Macrophage foam cells play a critical role in the occurrence and development of inflammatory atherosclerotic plaque. Foam cells are the fat-laden macrophages present at the initial stage of atherosclerotic lesion formation. Foam cells are an indication of plaque build-up, or atherosclerosis, which is commonly associated with increased risk of heart attack and stroke as a result of arterial narrowing and hardening. The mechanisms that drive atherosclerotic plaque progression remain largely unknown. Dissecting the molecular mechanism involved in the process of macrophage foam cell formation will help to develop therapeutic interventions for atherosclerosis. To investigate the mechanism, we studied the role of nitric oxide synthase 1 (NOS1)-mediated nitric oxide (NO) in low-density lipoprotein (LDL) uptake by bone marrow derived macrophages (BMDM). Using confocal microscopy, we found that incubation of macrophages with the NOS1 inhibitor TRIM (1-(2-trifluoromethylphenyl) imidazole) or L-NAME (N omega-nitro-L-arginine methyl ester) prior to LDL treatment significantly reduces LDL uptake by BMDM. Further, addition of an NO donor (DEA NONOate) to NOS1 inhibitor-treated macrophages recovers the LDL uptake. Our data strongly suggest that NOS1-derived NO regulates LDL uptake by macrophages and foam cell formation. Moreover, we also checked proinflammatory cytokine mRNA expression by real-time PCR in BMDM treated with LDL and copper-oxidized LDL (OxLDL) in the presence and absence of the inhibitor. Normal LDL does not evoke cytokine expression, whereas OxLDL induces proinflammatory cytokine expression, which is significantly reduced in the presence of the NOS1 inhibitor. 
Rapid formation of NOS1-derived NO and its stable derivatives acts as a signal for inducible NOS2 expression in endothelial cells, leading to disruption and dysfunction of the endothelial lining of the vascular wall. This study highlights the role of NOS1 as a critical player in foam cell formation and would reveal much about the key molecular proteins involved in atherosclerosis. Thus, targeting NOS1 would be a useful strategy for reducing LDL uptake by macrophages at an early stage of the disease and hence dampening atherosclerosis progression.

Keywords: atherosclerosis, NOS1, inflammation, oxidized LDL

Procedia PDF Downloads 127
1394 Antagonistic Potential of Epiphytic Bacteria Isolated in Kazakhstan against Erwinia amylovora, the Causal Agent of Fire Blight

Authors: Assel E. Molzhigitova, Amankeldi K. Sadanov, Elvira T. Ismailova, Kulyash A. Iskandarova, Olga N. Shemshura, Ainur I. Seitbattalova

Abstract:

Fire blight is a quarantine bacterial disease that is very harmful to commercial apple and pear production. To date, several different methods have been proposed for disease control, including the use of copper-based preparations and antibiotics, which are not always reliable or effective. The use of bacteria as biocontrol agents is one of the most promising and eco-friendly alternative methods. Bacteria with protective activity against the causal agent of fire blight are often present among the epiphytic microorganisms of the phyllosphere of host plants. Therefore, the main objective of our study was the screening of local epiphytic bacteria as possible antagonists against Erwinia amylovora, the causal agent of fire blight. Samples of infected organs of apple and pear trees (shoots, leaves, fruits) were collected from industrial horticulture areas in various agro-ecological zones of Kazakhstan. Epiphytic microorganisms were isolated by standard and modified methods on specific nutrient media. The primary screening of the selected microorganisms under laboratory conditions, to determine their ability to suppress the growth of Erwinia amylovora, was performed by an agar diffusion test. Among 142 bacteria isolated from the fire blight host plants, 5 isolates, belonging to the genera Bacillus, Lactobacillus, Pseudomonas, Paenibacillus, and Pantoea, showed higher antagonistic activity against the pathogen. The diameters of the inhibition zones depended on the species and ranged from 10 mm to 48 mm. The maximum inhibition zone diameter (48 mm) was exhibited by B. amyloliquefaciens. A smaller inhibitory effect was shown by Pantoea agglomerans PA1 (19 mm). The study of the inhibitory effect of Lactobacillus species against E. amylovora showed that among the 7 isolates tested, only one (Lactobacillus plantarum 17M) demonstrated an inhibition zone (30 mm). 
In summary, this study was devoted to detecting beneficial epiphytic bacteria from the plant organs of pear and apple trees for fire blight control in Kazakhstan. The results of the in vitro experiments showed that the most efficient bacterial isolates are Lactobacillus plantarum 17M, Bacillus amyloliquefaciens MB40, and Pantoea agglomerans PA1. These antagonists are suitable for development as biocontrol agents for fire blight control. Their efficacies will additionally be evaluated in biological tests under in vitro and field conditions in our further study.

Keywords: antagonists, epiphytic bacteria, Erwinia amylovora, fire blight

Procedia PDF Downloads 167
1393 Dealing with the Spaces: Ultra Conservative Approach from Childhood to Adulthood

Authors: Maryam Firouzmandi, Moosa Miri

Abstract:

Common reasons for early tooth loss are trauma, extraction due to caries or periodontal disease, and congenitally missing teeth. The space remaining after tooth loss may cause functional and esthetic problems; therefore, restorative dentists should attempt to manage these spaces using conservative methods. The goal is to restore the lost esthetics and function and to prevent phonetic, self-esteem, and personality problems as well as tongue habits. Preserving alveolar bone is also of great importance during the growth stage. Purpose: When deciding about the management of a missing tooth space, implants are contraindicated until the completion of dentoalveolar development. Even in adulthood, due to systemic or periodontal problems or biological and economic issues, an implant might not be indicated. In this article, alternative conservative restorative methods of space maintenance are discussed. Essix retainers are made chair-side as easily as forming a custom bleaching tray, with some modifications. They are esthetically acceptable and inexpensive. These temporaries provide support for the lips but cannot be used during function. Mini-screw-supported temporaries are another option for maintaining the space, especially after orthodontic treatment when there is a time lag between the termination of orthodontic treatment and the definitive restoration. Two techniques will be presented for this kind of restoration: a denture tooth pontic or a composite crown. The benefits are alveolar bone preservation, physiologic pressure on the alveolar ridge that increases its density, and retention until the completion of the definitive treatment. Bonded fixed partial dentures include the Maryland bridge, the fiber-reinforced composite bridge, the resin-bonded bridge, and the ceramic bonded bridge. These types of bridges are recommended for use after the pubertal growth spurt, and a recent meta-analysis considered their clinical success similar to that of conventional FDPs and implant-supported crowns. 
However, they have several advantages that will be discussed through clinical examples. Practical instruction on how to construct an FRC bridge and a novel chair-side Maryland bridge will be given by means of clinical cases. Clinical relevance: minimally invasive options should always be considered, and destruction of healthy enamel and dentin during the preparation phase should be avoided as much as possible.

Keywords: tooth missing, fiber-reinforced composite, Maryland, Essix retainers, screw-retained restoration

Procedia PDF Downloads 198
1392 Influence of Controlled Retting on the Quality of the Hemp Fibres Harvested at the Seed Maturity by Using a Designed Lab-Scale Pilot Unit

Authors: Brahim Mazian, Anne Bergeret, Jean-Charles Benezet, Sandrine Bayle, Luc Malhautier

Abstract:

Hemp fibres are increasingly used as reinforcements in polymer matrix composites due to their competitive performance (low density, good mechanical properties, and biodegradability) compared to conventional fibres such as glass fibres. However, the large variation in their biochemical, physical, and mechanical properties limits the use of these natural fibres in structural applications where high consistency and homogeneity are required. In the hemp industry, a traditional process termed field retting is commonly used to facilitate the extraction and separation of stem fibres. This retting treatment consists of spreading the stems out on the ground for a duration ranging from a few days to several weeks. Microorganisms (fungi and bacteria) grow on the stem surface and produce enzymes that degrade the pectinolytic substances in the middle lamellae surrounding the fibres. This operation depends on weather conditions and is currently carried out very empirically in the fields, resulting in large variability in hemp fibre quality (mechanical properties, color, morphology, chemical composition, etc.). Nonetheless, if controlled, retting might promote good properties in the hemp fibres and hence in hemp-fibre-reinforced composites. Therefore, the present study aims to investigate the influence of controlled retting within a designed environmental chamber (lab-scale pilot unit) on the quality of hemp fibres harvested at the seed maturity growth stage. Various assessments were applied directly to the fibres: color observations, morphological (optical microscope), surface (ESEM), and biochemical (gravimetry) analyses, spectrocolorimetric measurements (pectin content), thermogravimetric analysis (TGA), and tensile testing. The results reveal that controlled retting leads to a rapid change of color from yellow to dark grey due to the development of microbial communities (fungi and bacteria) at the stem surface.
An increase in the thermal stability of the fibres, due to the removal of non-cellulosic components along retting, is also observed. A separation of bast fibres into elementary fibres occurred, with an evolution of chemical composition (degradation of pectins) and a rapid decrease in tensile properties (380 MPa to 170 MPa after 3 weeks) due to the accelerated retting process. The influence of controlled retting on the properties of the biocomposite material (PP/hemp fibres) is under investigation.

Keywords: controlled retting, hemp fibre, mechanical properties, thermal stability

Procedia PDF Downloads 155
1391 Fine-Scale Modeling the Influencing Factors of Multi-Time Dimensions of Transit Ridership at Station Level: The Study of Guangzhou City

Authors: Dijiang Lyu, Shaoying Li, Zhangzhi Tan, Zhifeng Wu, Feng Gao

Abstract:

China is currently experiencing one of the most rapid urban rail transit expansions in the world. The purpose of this study is to finely model the factors influencing transit ridership across multiple time dimensions within transit stations' pedestrian catchment areas (PCA) in Guangzhou, China. The study was based on multi-source spatial data, including smart card data, high-spatial-resolution images, points of interest (POIs), online real-estate data, and building height data. Eight multiple linear regression models using the backward stepwise method and a Geographic Information System (GIS) were created at the station level. According to the Chinese code for classification of urban land use and planning standards of development land, residential land use was divided into three categories: first-level (e.g., villas), second-level (e.g., communities), and third-level (e.g., urban villages). The study concluded that: (1) four factors (CBD dummy, number of feeder bus routes, number of entrances or exits, and years of station operation) were positively correlated with transit ridership, whereas the areas of green and water land use were negatively correlated. (2) The areas of education land use and of second- and third-level residential land use were highly connected to the average of morning-peak boarding and evening-peak alighting ridership, while the area of commercial land use and the average building height were significantly positively associated with the average of morning-peak alighting and evening-peak boarding ridership. (3) The area of second-level residential land use was rarely correlated with ridership in the other regression models, because private car ownership in Guangzhou is still high: some residents living in communities around the stations commute by transit at peak times, but others are much more willing to drive their own cars at non-peak times.
The area of third-level residential land use, such as urban villages, was highly positively correlated with ridership in all models, indicating that residents of third-level residential land use are the main passenger source of the Guangzhou Metro. (4) The diversity of land use was found to have a significant impact on weekend passenger flow but was unrelated to weekday flow. The findings can be useful for station planning, management, and policymaking.
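The backward stepwise procedure used to build the regression models can be sketched in a few lines. The example below is a minimal illustration on synthetic station data (the predictor names `feeder_bus` and `entrances` and all coefficients are invented for the example, not taken from the Guangzhou models), using adjusted R² as the elimination criterion:

```python
import numpy as np

def fit_ols(X, y):
    # least-squares fit with intercept; returns coefficients and adjusted R^2
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    n, k = A.shape
    adj_r2 = 1 - (ss_res / (n - k)) / (ss_tot / (n - 1))
    return beta, adj_r2

def backward_stepwise(X, y, names):
    # drop predictors one at a time while adjusted R^2 improves
    keep = list(range(X.shape[1]))
    _, best = fit_ols(X[:, keep], y)
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for j in keep:
            trial = [c for c in keep if c != j]
            _, r2 = fit_ols(X[:, trial], y)
            if r2 > best:
                best, keep, improved = r2, trial, True
                break
    return [names[c] for c in keep], best

# synthetic stations: ridership driven by two predictors plus one noise column
rng = np.random.default_rng(0)
n = 200
feeder_bus = rng.poisson(5, n).astype(float)
entrances = rng.integers(2, 9, n).astype(float)
noise_var = rng.normal(size=n)  # irrelevant predictor, candidate for removal
ridership = 300 + 40 * feeder_bus + 25 * entrances + rng.normal(0, 30, n)
X = np.column_stack([feeder_bus, entrances, noise_var])
kept, r2 = backward_stepwise(X, ridership, ["feeder_bus", "entrances", "noise"])
print(kept, round(r2, 3))
```

In practice the elimination criterion is often a p-value threshold rather than adjusted R², but the loop structure is the same.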

Keywords: fine-scale modeling, Guangzhou city, multi-time dimensions, multi-sources spatial data, transit ridership

Procedia PDF Downloads 142
1390 Superparamagnetic Sensor with Lateral Flow Immunoassays as Platforms for Biomarker Quantification

Authors: M. Salvador, J. C. Martinez-Garcia, A. Moyano, M. C. Blanco-Lopez, M. Rivas

Abstract:

Biosensors play a crucial role in the detection of molecules nowadays due to their user-friendliness, high selectivity, real-time analysis, and in-situ applicability. Among them, lateral flow immunoassays (LFIAs) stand out among point-of-care bioassay technologies for their affordability, portability, and low cost. They have been widely used for the detection of a vast range of biomarkers, including not only proteins but also nucleic acids and even whole cells. Although the LFIA has traditionally been a positive/negative test, tremendous efforts are being made to add quantification capability, based on the combination of suitable labels and a proper sensor. One of the most successful approaches involves the use of magnetic sensors to detect magnetic labels. Bringing together the required characteristics mentioned above, our research group has developed a biosensor to detect biomolecules in which superparamagnetic nanoparticles (SPMNPs) together with LFIAs play the fundamental roles. SPMNPs are detected through their interaction with a high-frequency current flowing in a printed micro track. By means of the instantaneous variation of the track impedance, proportional to the presence of the SPMNPs, a quantitative and rapid measurement of the number of particles can be obtained. This detection scheme requires no externally applied magnetic field, which reduces device complexity. On the other hand, the major limitations of LFIAs are that they are only qualitative or semiquantitative when traditional gold or latex nanoparticles are used as color labels. Moreover, the need for constant ambient conditions to obtain reproducible results, the detection of nanoparticles only at the surface of the membrane, and the short durability of the signal are drawbacks that can be advantageously overcome by the design of magnetically labeled LFIAs.
The approach followed was to coat the SPMNPs with a specific monoclonal antibody that targets the protein under consideration via chemical bonds. A sandwich-type immunoassay was then prepared by printing onto the nitrocellulose membrane strip a second antibody against a different epitope of the protein (test line) and an IgG antibody (control line). When the sample flows along the strip, the SPMNP-labeled proteins are immobilized at the test line, which provides the magnetic signal described above. Preliminary results using this practical combination for the detection and quantification of prostate-specific antigen (PSA) show the validity and consistency of the technique in the clinical range, where a PSA level of 4.0 ng/mL is the established upper normal limit. Moreover, a limit of detection (LOD) of 0.25 ng/mL was calculated with a confidence factor of 3, according to the IUPAC Gold Book definition. The versatility of the technique has also been proved with the detection of other biomolecules such as troponin I (a cardiac injury biomarker) and histamine.
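The IUPAC-style LOD quoted above (blank noise times a factor of 3, divided by the calibration slope) can be reproduced with a short sketch. All numbers below are illustrative placeholders, not the PSA calibration data from this work:

```python
import numpy as np

# Hypothetical calibration of sensor signal vs. PSA concentration;
# the values are invented for illustration only.
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])          # ng/mL
signal = np.array([0.02, 0.41, 0.83, 1.58, 3.21, 6.35])  # arbitrary units
blank_sd = 0.05  # standard deviation of repeated blank measurements

slope, intercept = np.polyfit(conc, signal, 1)  # linear calibration fit
lod = 3 * blank_sd / slope                      # IUPAC k = 3 convention
print(round(lod, 3))
```

The factor of 3 gives roughly 99% confidence that a signal at the LOD is distinguishable from the blank, assuming normally distributed noise.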

Keywords: biosensor, lateral flow immunoassays, point-of-care devices, superparamagnetic nanoparticles

Procedia PDF Downloads 232
1389 Comparison between Bernardi’s Equation and Heat Flux Sensor Measurement as Battery Heat Generation Estimation Method

Authors: Marlon Gallo, Eduardo Miguel, Laura Oca, Eneko Gonzalez, Unai Iraola

Abstract:

The heat generation of an energy storage system is an essential topic when designing a battery pack and its cooling system. Heat generation estimates are used together with thermal models to predict battery temperature in operation and to adapt the design of the battery pack and the cooling system to these thermal needs, guaranteeing safe and correct operation. In the present work, a comparison is presented between the use of a heat flux sensor (HFS) for indirect measurement of the heat losses in a cell and the widely used, simplified version of Bernardi's equation. First, a Li-ion cell is thermally characterized with an HFS to measure the parameters used in a first-order lumped thermal model: the equivalent thermal capacity and the equivalent thermal resistance of a single Li-ion cell. Static tests (no current flowing through the cell) and dynamic tests (current flowing through the cell) are conducted in which the HFS measures the heat exchanged between the cell and the ambient, so that the thermal capacity and resistance, respectively, can be calculated. An experimental platform records current, voltage, ambient temperature, surface temperature, and HFS output voltage. Second, an equivalent circuit model is built in a Matlab-Simulink environment. This allows comparison between the heat generation predicted by Bernardi's equation and the HFS measurements. Post-processing is required to extrapolate the heat generation from the HFS measurements, as the sensor records the heat released to the ambient rather than the heat generated within the cell. Finally, the cell temperature evolution is estimated with the lumped thermal model (using the total heat generation from both the HFS and Bernardi's equation) and compared against experimental temperature data measured with a T-type thermocouple. The work closes with a critical review of the results obtained and the possible reasons for mismatch.
The results show that indirectly measuring the heat generation with the HFS gives a more precise estimation than Bernardi's simplified equation. On the one hand, when using Bernardi's simplified equation, the estimated heat generation differs from cell temperature measurements during charges at high current rates. Additionally, for low-capacity cells, where a small change in capacity has a great influence on the terminal voltage, the estimated heat generation shows a high dependency on the State of Charge (SoC) estimation and therefore on the open-circuit voltage calculation (as it is SoC dependent). On the other hand, when indirectly measuring the heat generation with the HFS, the resulting error in the temperature prediction is at most 0.28 °C, in contrast with 1.38 °C for Bernardi's simplified equation. This illustrates the limitations of Bernardi's simplified equation for applications where precise heat monitoring is required. For higher current rates, Bernardi's equation estimates more heat generation and, consequently, a higher predicted temperature. Bernardi's equation accounts for no losses after the charging or discharging current is cut off; however, the HFS measurement shows that after the current is cut the cell continues generating heat for some time, increasing the error of Bernardi's equation.
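As a rough illustration of the modeling chain described above, the sketch below feeds the simplified Bernardi overpotential heat, Q = I·|V − U_oc| (entropic term neglected), into a first-order lumped thermal model, C·dT/dt = Q − (T − T_amb)/R_th, integrated with a simple Euler scheme. All parameter values are assumptions for the example, not those of the characterized cell:

```python
# Assumed parameters for a small Li-ion cell (illustrative only)
C_th = 40.0   # J/K, equivalent thermal capacity
R_th = 8.0    # K/W, equivalent thermal resistance to ambient
T_amb = 25.0  # degC, ambient temperature
dt = 1.0      # s, Euler time step

def bernardi_simplified(current, v_oc, v_term):
    # irreversible (overpotential) heat; entropic term neglected
    return current * abs(v_term - v_oc)

T = T_amb
for _ in range(3600):  # 1 h of constant-current charging
    q_gen = bernardi_simplified(current=2.0, v_oc=3.7, v_term=3.9)  # 0.4 W
    T += dt / C_th * (q_gen - (T - T_amb) / R_th)  # lumped thermal model
print(round(T, 2))
```

With these values the cell settles near its steady-state temperature T_amb + Q·R_th; in the real workflow, Q would instead come from the post-processed HFS signal for comparison.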

Keywords: lithium-ion battery, heat flux sensor, heat generation, thermal characterization

Procedia PDF Downloads 389
1388 Valuing Cultural Ecosystem Services of Natural Treatment Systems Using Crowdsourced Data

Authors: Andrea Ghermandi

Abstract:

Natural treatment systems such as constructed wetlands and waste stabilization ponds are increasingly used to treat water and wastewater from a variety of sources, including stormwater and polluted surface water. The provision of ancillary benefits in the form of cultural ecosystem services makes these systems unique among water and wastewater treatment technologies and greatly contributes to determining their potential role in promoting sustainable water management practices. A quantitative analysis of these benefits, however, has been lacking in the literature. Here, a critical assessment of the recreational and educational benefits of natural treatment systems is provided, combining observed public use from a survey of managers and operators with estimated public use obtained using geotagged photos from social media as a proxy for visitation rates. Geographic Information Systems (GIS) are used to characterize the spatial boundaries of 273 natural treatment systems worldwide. These boundaries are used as input to the Application Programming Interfaces (APIs) of two popular photo-sharing websites (Flickr and Panoramio) in order to derive the number of photo-user-days, i.e., the number of yearly visits by individual photo users to each site. The adequacy and predictive power of four univariate calibration models using the crowdsourced data as a proxy for visitation are evaluated. A high correlation is found between photo-user-days and observed annual visitors (Pearson's r = 0.811; p-value < 0.001; N = 62). Standardized major axis (SMA) regression is found to outperform ordinary least squares regression and count data models in terms of predictive power as far as standard verification statistics are concerned, such as the root mean square error of prediction (RMSEP), the mean absolute error of prediction (MAEP), the reduction of error (RE), and the coefficient of efficiency (CE).
The SMA regression model is used to estimate the intensity of public use in all 273 natural treatment systems. System type, influent water quality, and area are found to statistically affect public use, consistent with a priori expectations. Publicly available information regarding the home locations of the sampled visitors is derived from their social media profiles and used to infer the distances they are willing to travel to visit the natural treatment systems in the database. This information is analyzed using the travel cost method to derive monetary estimates of the recreational benefits of the investigated natural treatment systems. Overall, the findings confirm the opportunities arising from an integrated design and management of natural treatment systems, which combines the objectives of water quality enhancement and provision of cultural ecosystem services through public use in a multi-functional approach, compatible with the need to protect public health.
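Standardized major axis regression, preferred here over OLS, has a simple closed form: the slope magnitude is sd(y)/sd(x), signed by the correlation. A minimal sketch with invented photo-user-day and visitor counts (not the 62-site calibration data used in the study):

```python
import numpy as np

def sma_fit(x, y):
    # standardized (reduced) major axis regression:
    # slope magnitude is sd(y)/sd(x), with the sign of the correlation
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Hypothetical photo-user-days vs. observed annual visitors (illustrative)
pud = np.array([2.0, 5.0, 9.0, 14.0, 20.0, 31.0])
visitors = np.array([150.0, 400.0, 700.0, 1000.0, 1600.0, 2300.0])
slope, intercept = sma_fit(pud, visitors)
print(round(slope, 1), round(intercept, 1))
```

Unlike OLS, SMA treats both variables as measured with error, which is appropriate when the crowdsourced proxy is itself noisy.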

Keywords: constructed wetlands, cultural ecosystem services, ecological engineering, waste stabilization ponds

Procedia PDF Downloads 180
1387 Seismic Response Control of Multi-Span Bridge Using Magnetorheological Dampers

Authors: B. Neethu, Diptesh Das

Abstract:

The present study investigates the performance of a semi-active controller using magnetorheological (MR) dampers for seismic response reduction of a multi-span bridge. The application of structural control to structures under earthquake excitation involves numerous challenges, such as proper formulation and selection of the control strategy, mathematical modeling of the system, uncertainty in system parameters, and noisy measurements. These problems need to be tackled in order to design and develop controllers that perform efficiently in such complex systems. A sliding mode control algorithm is adopted in the present study because, better than the other algorithms considered, it can accommodate uncertainty and imprecision, owing to its inherent stability and distinguished robustness to system parameter variation and external disturbances. In general, a semi-active control scheme using an MR damper requires two nested controllers: (i) an overall system controller, which derives the control force required to be applied to the structure, and (ii) an MR damper voltage controller, which determines the voltage that must be supplied to the damper to generate the desired control force. In the present study, a sliding mode algorithm is used to determine the desired optimal force, and the voltage controller commands the damper to produce that force. The clipped-optimal algorithm is used to find the command voltage supplied to the MR damper, which is regulated by a semi-active control law based on the sliding mode algorithm. The main objective of the study is to propose a robust semi-active controller that can effectively reduce the responses of the bridge under real earthquake ground motions.
A lumped mass model of the bridge is developed, and time history analysis is carried out by solving the governing equations of motion in state space form. The effectiveness of the MR dampers is studied through analytical simulations in which the bridge is subjected to real earthquake records. In this regard, it may be noted that the performance of controllers depends, to a great extent, on the characteristics of the input ground motions. Therefore, to study the robustness of the controller, its performance has been investigated for fourteen different earthquake ground motion records, chosen so that all possible characteristic variations are accommodated: seven are near-field and seven are far-field, and together they span low-, medium-, and high-frequency content. The responses of the controlled bridge are compared with those of the corresponding uncontrolled bridge (i.e., the bridge without any control devices). The results of the numerical study show that the sliding-mode-based semi-active control strategy can substantially reduce the seismic responses of the bridge, showing stable and robust performance for all the earthquakes.
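The clipped-optimal voltage law referenced above admits a very compact statement: command the maximum voltage only when the measured damper force must grow toward the desired (sliding-mode) force, and zero voltage otherwise. A minimal sketch with assumed force and voltage values (not the bridge model's actual numbers):

```python
def clipped_optimal_voltage(f_desired, f_measured, v_max):
    # Clipped-optimal law: apply maximum voltage only when increasing the
    # damper force moves it toward the desired force, i.e. when
    # (f_desired - f_measured) and f_measured have the same sign.
    return v_max if (f_desired - f_measured) * f_measured > 0.0 else 0.0

# Illustrative checks (assumed values, in consistent force/voltage units)
print(clipped_optimal_voltage(10.0, 4.0, 9.0))  # force should grow: full voltage
print(clipped_optimal_voltage(3.0, 7.0, 9.0))   # force overshoots: voltage off
print(clipped_optimal_voltage(-5.0, 2.0, 9.0))  # wrong direction: voltage off
```

The bang-bang nature of this law is what keeps the voltage controller simple: the MR damper's own dynamics, not the controller, shape the force transition.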

Keywords: bridge, semi active control, sliding mode control, MR damper

Procedia PDF Downloads 124
1386 Influence of Genotypic Variability on Symbiotic and Agrophysiological Performances of Chickpea under Mesorhizobium-PSB Inoculation and RP-Fertilization, Likely Due to Shaping Rhizosphere Diversity

Authors: Rym Saidi, Pape Alioune Ndiaye, Mohamed Idbella, Ammar Ibnyasser, Zineb Rchiad, Issam Kadmiri Meftahi, Khalid Daoui, Adnane Bargaz

Abstract:

Chickpea (Cicer arietinum L.) is an important leguminous crop grown worldwide and the second most important food legume in Morocco. In addition to playing a significant role in the human diet, chickpea has key ecological value in terms of biological N-fixation (BNF), being able to symbiotically secure 20-80% of its nitrogen needs. Alongside nitrogen (N), low soil phosphorus (P) availability is one of the major factors limiting chickpea growth and productivity: after nitrogen, P is the most important macronutrient for plant growth and development as well as for BNF. In the context of improving chickpea symbiotic performance, co-application of beneficial bacterial inoculants (including Mesorhizobium) and rock P-fertilizer could boost chickpea performance and productivity, owing to increased P-utilization efficiency and overall nutrient acquisition under P-deficiency conditions. A greenhouse experiment was conducted to evaluate the response of two chickpea varieties (Arifi 'A' and Bochra 'B') to co-application of RP-fertilizer alongside a Mesorhizobium and phosphate-solubilizing bacteria (PSB) consortium in a P-deficient soil in Morocco. Our findings demonstrate that co-applying RP50 with the bacterial inoculant significantly increased NDW by 85.71% and 109.09% in the A and B chickpea varieties, respectively, compared to uninoculated RP-fertilized plants. Nodule Pi and leghemoglobin (LHb) contents also increased in RP-fertilized, inoculated plants. Likewise, shoot and root dry weights of both chickpea varieties increased with bacterial inoculation and RP-fertilization, owing to enhanced Pi content in shoots (282.54% and 291.42%) and roots (334.30% and 408.32%) in response to RP50-Inc compared to unfertilized, uninoculated plants, for the A and B chickpea varieties respectively.
Rhizosphere available P also increased by 173.86% and 182.25% in response to RP50-Inc compared to RP-fertilized uninoculated plants, with a positive correlation between soil available P and root length in inoculated plants of the A and B chickpea varieties (R = 0.49 and 0.60, respectively). Furthermore, Mesorhizobium was among the dominant genera in the rhizosphere bacterial diversity of both chickpea varieties, which can be attributed to its capacity to enhance plant growth traits, with a more pronounced effect observed in the B variety. Our research demonstrates that integrated fertilization with bacterial inoculation effectively improves biological N-fixation and P nutrition, enhancing the agrophysiological performance of Moroccan chickpea varieties, particularly under restricted P-availability conditions.

Keywords: chickpea varieties, bacterial consortium, inoculants, Mesorhizobium, Rock-P fertilizer, phosphorus deficiency, agrophysiological performance

Procedia PDF Downloads 20
1385 A Review of Brain Implant Device: Current Developments and Applications

Authors: Ardiansyah I. Ryan, Ashsholih K. R., Fathurrohman G. R., Kurniadi M. R., Huda P. A

Abstract:

The burden of brain-related disease is very high, and many brain-related diseases have limited treatment outcomes, which raises the burden further. The treatments for Parkinson's disease (PD), mental health problems, and paralysis of the extremities have raised concern, as patients with these conditions usually have a low quality of life and a low chance of full recovery. Many other brain or neural diseases are in a similar situation; treatments remain limited, mainly because our understanding of brain function is insufficient. Brain implant technology has given hope for treating such conditions. In this paper, we examine the current state of brain implant technology. Neurotechnology is growing very rapidly worldwide. The United States Food and Drug Administration (FDA) has approved the use of deep brain stimulation (DBS) as a brain implant in humans; among neural implants, both the cochlear implant and the retinal implant are FDA-approved as well, and all of them have shown promising results. DBS works by stimulating a specific region of the brain with electricity. The device is implanted surgically into a very specific region of the brain and consists of three main parts: the lead (a thin wire inserted into the brain), the neurostimulator (a pacemaker-like device implanted surgically in the chest), and an external controller (used by the patient or programmer to turn the device on and off). The FDA has approved DBS for the treatment of PD, pain management, epilepsy, and obsessive-compulsive disorder (OCD). The treatment target of DBS in PD is to reduce tremor and dystonia symptoms. DBS has shown promising results in animal trials and limited human trials for other conditions, such as Alzheimer's disease and mental health problems (major depression, Tourette syndrome). Every surgery carries a risk of complications, although in DBS the risk is very low.
DBS itself has given very satisfying results as long as candidates are strictly selected according to the indication criteria for implantation. Beyond DBS, several other brain implant devices are still under development, including (but not limited to) implants to treat paralysis (in spinal cord injury and amyotrophic lateral sclerosis), enhance memory, reduce obesity, treat mental health problems, and treat epilepsy. The potential of neurotechnology is vast: once brain function is fully understood and brain implants are fully developed, they may constitute one of the major breakthroughs in human history, comparable to when humans first mastered fire. Support from every sector for further research is much needed to develop and unveil the true potential of this technology.

Keywords: brain implant, deep brain stimulation (DBS), Parkinson

Procedia PDF Downloads 155
1384 Determinants of Domestic Violence among Married Women Aged 15-49 Years in Sierra Leone by an Intimate Partner: A Cross-Sectional Study

Authors: Tesfaldet Mekonnen Estifanos, Chen Hui, Afewerki Weldezgi

Abstract:

Background: Intimate partner violence (hereafter IPV) is a major global public health challenge that tortures and disables women in the place where they ought to be most secure: within their own families. Because the family unit is commonly viewed as a private sphere, violent acts towards women remain under-recognized. There is limited research and knowledge about the factors linked to IPV in Sierra Leone. This study therefore estimates the prevalence rate of IPV and its associated predictors. Methods: Data were taken from the Sierra Leone Demographic and Health Survey (SDHS, 2013), the first of its kind to incorporate information on domestic violence. A multistage cluster sampling design was used, and information was gathered with a standard questionnaire. A total of 5185 selected respondents were interviewed, 870 of whom had never been in a union and were thus excluded. To analyze the two dependent variables, experience of IPV 'ever' and 'in the 12 months prior to the survey', 4315 women (currently or formerly married) and 4029 women (currently in union) were included, respectively. These dependent variables were constructed from three forms of violence, namely physical, emotional, and sexual. Data analysis was performed with SPSS version 23 in a three-step process. First, descriptive statistics were used to show the frequency distributions of both the outcome and explanatory variables. Second, bivariate analysis using the chi-square test was applied to assess the individual relationships between the outcome and explanatory variables. Third, multivariate logistic regression analysis was undertaken using a hierarchical modeling strategy to identify the influence of the explanatory variables on the outcome variables. Odds ratios (OR) and 95% confidence intervals (CI) were used to examine the associations, with p-values less than 0.05 considered statistically significant.
Results: The prevalence of lifetime IPV among ever-married women was 48.4%, while 39.8% of those currently married had experienced IPV in the year preceding the survey. Women with 1 to 4 and with 5 or more children ever born were more likely to experience lifetime IPV, whereas women who owned property, and those who cited 3-5 reasons for which wife-beating is acceptable, were less likely to experience lifetime IPV. Witnessing parental violence, a partner's dominant marital behavior, and being afraid of one's partner were related both to ever experiencing IPV and to experiencing it in the year prior to the survey. Respondents who agreed that wife-beating is justifiable in certain situations, and those in professional occupations, had lower odds of reporting IPV in the year prior to data collection. Conclusion: This study indicates that the factors significantly correlated with IPV in Sierra Leone are mostly husband-related, specifically marital controlling behaviors. Addressing IPV in Sierra Leone requires joint efforts that target men, raise awareness of controlling behavior, and promote women's security in relationships.
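The odds-ratio reporting used in the third analysis step follows directly from the logistic-regression coefficients: OR = exp(b), with a 95% CI of exp(b ± 1.96·SE), and the association is significant at p < 0.05 when the CI excludes 1. A minimal sketch with an invented coefficient (not an SDHS 2013 estimate):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    # odds ratio and 95% confidence interval from a logistic-regression
    # coefficient and its standard error
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for "partner exhibits controlling behavior"
# (illustrative numbers only)
or_, lo, hi = odds_ratio_ci(beta=0.85, se=0.20)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

Because the lower bound of the CI exceeds 1 here, the hypothetical factor would be reported as significantly associated with IPV.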

Keywords: husband behavior, married women, partner violence, Sierra Leone

Procedia PDF Downloads 134