Search results for: numerical weather prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6260

200 Quantum Chemical Prediction of Standard Formation Enthalpies of Uranyl Nitrates and Its Degradation Products

Authors: Mohamad Saab, Florent Real, Francois Virot, Laurent Cantrel, Valerie Vallet

Abstract:

All spent nuclear fuel reprocessing plants use the PUREX process (Plutonium Uranium Refining by Extraction), which is a liquid-liquid extraction method. The organic extracting solvent is a mixture of tri-n-butyl phosphate (TBP) and a hydrocarbon solvent such as hydrogenated tetra-propylene (TPH). By chemical complexation, uranium and plutonium (from spent fuel dissolved in nitric acid solution) are separated from fission products and minor actinides. During a normal extraction operation, uranium is extracted into the organic phase as the UO₂(NO₃)₂(TBP)₂ complex. The TBP solvent can form an explosive mixture, called red oil, when it comes into contact with nitric acid. The formation of this unstable organic phase originates from the reaction between TBP and its degradation products on the one hand, and nitric acid, its derivatives, and heavy metal nitrate complexes on the other. The decomposition of red oil can lead to a violent explosive thermal runaway. These hazards are at the origin of several accidents, such as the two in the United States in 1953 and 1975 (Savannah River) and, more recently, the one in Russia in 1993 (Tomsk). This raises the question of the exothermicity of reactions that involve TBP and its degradation products, and calls for a better knowledge of the underlying chemical phenomena. A simulation tool (Alambic) is currently being developed at IRSN that integrates thermal and kinetic functions related to the degradation of uranyl nitrates in organic and aqueous phases, but not of the n-butyl phosphates. To include them in the modeling scheme, there is an urgent need to obtain the thermodynamic and kinetic functions governing the degradation processes in the liquid phase. However, little is known about the thermodynamic properties, such as standard enthalpies of formation, of the n-butyl phosphate molecules and of the UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂ complexes.
In this work, we propose to estimate these thermodynamic properties with quantum methods (QM). Thus, in the first part of our project, we focused on the mono-, di-, and tri-butyl phosphates. Quantum chemical calculations have been performed to study several reactions leading to the formation of mono-(H₂MBP), di-(HDBP), and tri-butyl phosphate (TBP) in the gas and liquid phases. In the gas phase, the structures of all species were optimized using the B3LYP density functional. Triple-ζ def2-TZVP basis sets were used for all atoms. All geometries were optimized in the gas phase, and the corresponding harmonic frequencies were used without scaling to compute the vibrational partition functions at 298.15 K and 0.1 MPa. Accurate single-point energies were calculated with the efficient localized LCCSD(T) method, extrapolated to the complete basis set limit. Whenever species in the liquid phase are considered, solvent effects are included with the COSMO-RS continuum model. The standard enthalpies of formation of TBP, HDBP, and H₂MBP are finally predicted with an uncertainty of about 15 kJ mol⁻¹. In the second part of this project, we have investigated the fundamental properties of the three organic species that contribute most to the thermal runaway: UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂, using the same quantum chemical methods that were used for TBP and its derivatives in both the gas and the liquid phase. We will discuss the structures and thermodynamic properties of all these species.
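As a minimal illustration of the thermochemistry bookkeeping behind such predictions, the sketch below assembles a reaction enthalpy from per-species enthalpies (electronic energy plus thermal correction); all numerical values and species choices are placeholders, not results from this work:

```python
# Sketch: reaction enthalpy from quantum-chemical building blocks
# (electronic energy + thermal enthalpy correction at 298.15 K).
HARTREE_TO_KJ_MOL = 2625.4996  # conversion factor, hartree -> kJ/mol

def reaction_enthalpy(products, reactants):
    """Delta H_r = sum(H products) - sum(H reactants), inputs in hartree."""
    return (sum(products) - sum(reactants)) * HARTREE_TO_KJ_MOL

# H(species) = E_elec (e.g. CCSD(T)/CBS) + thermal correction; placeholder values
h_tbp     = -1234.5678 + 0.4321   # hypothetical, hartree
h_water   =   -76.4100 + 0.0400
h_hdbp    = -1100.1234 + 0.3800
h_butanol =  -232.1000 + 0.1500

# Illustrative hydrolysis-type step: TBP + H2O -> HDBP + butanol
dH = reaction_enthalpy(products=[h_hdbp, h_butanol],
                      reactants=[h_tbp, h_water])
```

The same bookkeeping applies to formation enthalpies once reference-state enthalpies of the elements are chosen consistently.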

Keywords: PUREX process, red oils, quantum chemical methods, hydrolysis

199 Correlation of Clinical and Sonographic Findings with Cytohistology for Diagnosis of Ovarian Tumours

Authors: Meenakshi Barsaul Chauhan, Aastha Chauhan, Shilpa Hurmade, Rajeev Sen, Jyotsna Sen, Monika Dalal

Abstract:

Introduction: Ovarian masses are common forms of neoplasm in women and represent two-thirds of gynaecological malignancies. A pre-operative suggestion of malignancy can guide the gynecologist to refer women with a suspected pelvic mass to a gynecological oncologist for appropriate therapy and optimized treatment, which can improve survival. In the younger age group, pre-operative differentiation into benign or malignant pathology can decide between conservative and radical surgery. Imaging modalities have a definite role in establishing the diagnosis. By using the International Ovarian Tumor Analysis (IOTA) classification with sonography, the use of costly radiological methods like magnetic resonance imaging (MRI) and computed tomography (CT) can be reduced, especially in developing countries like India. Thus, this study was undertaken to evaluate the role of clinical methods and sonography in diagnosing the nature of ovarian tumors. Material and Methods: This prospective observational study was conducted on 40 patients presenting with ovarian masses in the Department of Obstetrics and Gynaecology at a tertiary care center in northern India. Functional cysts were excluded. Ultrasonography and color Doppler were performed in all cases. The IOTA simple rules were applied, which take into account locularity, size, presence of solid components, acoustic shadows, Doppler flow, etc. MRI/CT scans of the abdomen and pelvis were done in cases where sonography was inconclusive. In inoperable cases, fine needle aspiration cytology (FNAC) was done. The histopathology report after surgery and the cytology report after FNAC were correlated statistically with the pre-operative diagnosis made clinically and sonographically using the IOTA rules. Statistical Analysis: Descriptive measures were analyzed using means and standard deviations with the Student t-test; proportions were analyzed with the chi-square test.
Inferential measures were analyzed by sensitivity, specificity, negative predictive value, and positive predictive value. Results: A provisional diagnosis of benign tumor was made in 16 (42.5%) and of malignant tumor in 24 (57.5%) patients on the basis of clinical findings. With the IOTA simple rules on sonography, 15 (37.5%) masses were found to be benign and 23 (57.5%) malignant, while findings were inconclusive in 2 patients (5%). FNAC/histopathology, taken as the gold standard, reported 14 (35%) benign and 26 (65%) malignant ovarian tumors. Clinical findings alone had a sensitivity of 66.6% and a specificity of 90.9%. USG alone had a sensitivity of 86% and a specificity of 80%. When clinical findings and the IOTA simple rules on sonography were combined (excluding inconclusive masses), the sensitivity and specificity were 83.3% and 92.3%, respectively; including inconclusive masses, the sensitivity was 91.6% and the specificity 89.2%. Conclusion: The IOTA simple rules on sonography are highly sensitive and specific in the prediction of ovarian malignancy, and are also easy to use and easily reproducible. Thus, combining clinical examination with USG will help in the better management of patients in terms of time, cost, and prognosis. This will also avoid the need for costlier modalities like CT and MRI.
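The diagnostic statistics quoted above follow from a standard 2×2 confusion table; the sketch below shows the bookkeeping, with an illustrative table chosen only to roughly reproduce the reported combined figures (the study's raw counts are not given):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion table."""
    sens = tp / (tp + fn)  # true positives among all diseased
    spec = tn / (tn + fp)  # true negatives among all non-diseased
    ppv = tp / (tp + fp)   # positive predictive value
    npv = tn / (tn + fn)   # negative predictive value
    return sens, spec, ppv, npv

# Illustrative counts only (not the study data): 20 true positives,
# 1 false positive, 4 false negatives, 12 true negatives
sens, spec, ppv, npv = diagnostic_metrics(tp=20, fp=1, fn=4, tn=12)
# sens ~ 0.833, spec ~ 0.923, in line with the combined figures above
```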

Keywords: benign, international ovarian tumor analysis classification, malignant, ovarian tumours, sonography

198 Impacts of Climate Change and Natural Gas Operations on the Hydrology of Northeastern BC, Canada: Quantifying the Water Budget for Coles Lake

Authors: Sina Abadzadesahraei, Stephen Déry, John Rex

Abstract:

Climate research has repeatedly identified strong associations between anthropogenic emissions of ‘greenhouse gases’ and observed increases of global mean surface air temperature over the past century. Studies have also demonstrated that the degree of warming varies regionally. Canada is not exempt from this situation, and evidence is mounting that climate change is beginning to cause diverse impacts in both environmental and socio-economic spheres of interest. For example, northeastern British Columbia (BC), whose climate is controlled by a combination of maritime, continental, and arctic influences, is warming at a greater rate than the remainder of the province. There are indications that these changing conditions are already leading to shifting patterns in the region’s hydrological cycle, and thus its available water resources. Coincident with these changes, northeastern BC is undergoing rapid development for oil and gas extraction: this depends largely on subsurface hydraulic fracturing (‘fracking’), which uses enormous amounts of freshwater. While this industrial activity has made substantial contributions to regional and provincial economies, it is important to ensure that sufficient and sustainable water supplies are available for all those dependent on the resource, including ecological systems. This in turn demands a comprehensive understanding of how water in all its forms interacts with landscapes and the atmosphere, and of the potential impacts of changing climatic conditions on these processes. The aim of this study is therefore to characterize and quantify all components of the water budget in the small watershed of Coles Lake (141.8 km², 100 km north of Fort Nelson, BC), through a combination of field observations and numerical modelling.
Baseline information will aid the assessment of the sustainability of current and future plans for freshwater extraction by the oil and gas industry, and will help to maintain the precarious balance between economic and environmental well-being. This project is inherently interdisciplinary, in that it not only examines the hydrology of the region but also investigates how natural gas operations and growth can affect water resources. A collaboration between academia, government, and industry has therefore been established to fulfill the objectives of this research in a meaningful manner. This project aims to provide numerous benefits to BC communities. Further, the outcomes and detailed information of this research can be a valuable asset to researchers examining the effect of climate change on water resources worldwide.
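A water budget of the kind to be quantified here reduces to a simple mass balance over the watershed; the sketch below is a minimal illustration with placeholder values (the withdrawals term standing in for industrial freshwater extraction), not Coles Lake data:

```python
def water_budget_residual(precip, evapotrans, runoff, withdrawals=0.0):
    """Change in storage = inputs - outputs, all in mm over the period:
    dS = P - ET - R - W."""
    return precip - evapotrans - runoff - withdrawals

# Placeholder annual values (mm), for illustration only
ds = water_budget_residual(precip=450.0, evapotrans=300.0,
                           runoff=120.0, withdrawals=10.0)
# positive ds: the watershed gained storage over the period
```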

Keywords: northeastern British Columbia, water resources, climate change, oil and gas extraction

197 Investigations on Pyrolysis Model for Radiatively Dominant Diesel Pool Fire Using Fire Dynamic Simulator

Authors: Siva K. Bathina, Sudheer Siddapureddy

Abstract:

Pool fires are formed when a flammable liquid accidentally spills on the ground or water and ignites. A pool fire is a kind of buoyancy-driven diffusion flame. There have been many pool fire accidents during the processing, handling, and storage of liquid fuels in the chemical and oil industries. Such accidents cause enormous damage to property as well as loss of lives. Pool fires are complex in nature due to the strong interaction among combustion, heat and mass transfer, and pyrolysis at the fuel surface. Moreover, the experimental study of such large complex fires involves fire safety issues and difficulties in performing experiments. In the present work, large eddy simulations are performed to study such complex fire scenarios using the Fire Dynamics Simulator. A 1 m diesel pool fire is considered for the studied cases; diesel is chosen as it is the fuel most commonly involved in fire accidents. Fire simulations are performed with two different boundary conditions: in one, the fuel is in the liquid state and a pyrolysis model is invoked; in the other, the fuel is assumed to be initially in the vapor state and the mass loss rate is prescribed. A domain of size 11.2 m × 11.2 m × 7.28 m with a uniform structured grid is chosen for the numerical simulations. A grid sensitivity analysis is performed, and a non-dimensional grid size of 12, corresponding to an 8 cm cell size, is adopted. Flame properties like mass burning rate, irradiance, and time-averaged axial flame temperature profile are predicted. The predicted steady-state mass burning rate is 40 g/s and is within the uncertainty limits of the previously reported experimental data (39.4 g/s). The profile of the irradiance along the height at a distance from the fire is somewhat in line with the experimental data, though the location of the maximum irradiance is shifted to a higher location.
This may be due to the lack of sophisticated models for species transport along with combustion and radiation in the continuous zone. Furthermore, the axial temperatures are not predicted well (for either boundary condition) in any of the zones. The present study shows that the existing models are not sufficient for modeling blended fuels like diesel. The predictions depend strongly on the experimental values of the soot yield. Future experiments are necessary to generalize the soot yield for different fires.
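The non-dimensional grid size used in FDS-style grid sensitivity studies is commonly taken as D*/dx, with D* the characteristic fire diameter; the sketch below illustrates the calculation with an assumed heat of combustion for diesel (not a value from this work), so the resulting number differs somewhat from the study's value of 12:

```python
import math

def characteristic_fire_diameter(q_kw, rho=1.204, cp=1.005, t_inf=293.0, g=9.81):
    """D* = (Q / (rho * cp * T_inf * sqrt(g)))**(2/5), with Q in kW,
    rho in kg/m^3, cp in kJ/(kg K), T_inf in K."""
    return (q_kw / (rho * cp * t_inf * math.sqrt(g))) ** 0.4

# Illustrative: 40 g/s of diesel at an assumed ~44.4 MJ/kg gives ~1776 kW
q = 0.040 * 44400.0                  # kW (assumed heat of combustion)
d_star = characteristic_fire_diameter(q)
resolution = d_star / 0.08           # non-dimensional grid size for 8 cm cells
```

With different ambient parameters or an effective heat release (accounting for combustion efficiency), the ratio moves toward the value of 12 quoted above.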

Keywords: burning rate, fire accidents, fire dynamic simulator, pyrolysis

196 Non-Perturbative Vacuum Polarization Effects in One- and Two-Dimensional Supercritical Dirac-Coulomb System

Authors: Andrey Davydov, Konstantin Sveshnikov, Yulia Voronina

Abstract:

There is now considerable interest in the non-perturbative QED effects caused by the diving of discrete levels into the negative continuum in supercritical static or adiabatically slowly varying Coulomb fields, created by localized extended sources with Z > Z_cr. Such effects have attracted a considerable amount of theoretical and experimental activity, since in 3+1 QED for Z > Z_cr,1 ≈ 170 a non-perturbative reconstruction of the vacuum state is predicted, which should be accompanied by a number of nontrivial effects, including vacuum positron emission. Essentially similar effects should also be expected in both 2+1 D (planar graphene-based heterostructures) and 1+1 D (the one-dimensional ‘hydrogen ion’). This report is devoted to the study of such essentially non-perturbative vacuum effects for supercritical Dirac-Coulomb systems in 1+1 D and 2+1 D, with the main attention drawn to the vacuum polarization energy. Although most works consider the vacuum charge density as the main polarization observable, the vacuum energy turns out to be no less informative and in many respects complementary to the vacuum density. Moreover, the main non-perturbative effects, which appear in vacuum polarization for supercritical fields due to levels diving into the lower continuum, show up in the behavior of the vacuum energy even more clearly, demonstrating explicitly their possible role in the supercritical region. Both in 1+1 D and 2+1 D, we first explore the renormalized vacuum density in the supercritical region using the Wichmann-Kroll method. Thereafter, taking into account the results for the vacuum density, we formulate the renormalization procedure for the vacuum energy. To evaluate the latter explicitly, an original technique, based on a special combination of analytical methods, computer algebra tools, and numerical calculations, is applied.
It is shown that, for a wide range of the external source parameters (the charge Z and size R), the renormalized vacuum energy in the supercritical region can deviate significantly from the perturbative quadratic growth, up to a pronouncedly decreasing behavior with jumps of (-2 x mc^2), which occur each time the next discrete level dives into the negative continuum. In the considered range of variation of Z and R, the vacuum energy behaves like ~ -Z^2/R in 1+1 D and ~ -Z^3/R in 2+1 D, reaching deeply negative values. Such behavior confirms the assumption of the transmutation of the neutral vacuum into a charged one, and thereby of the spontaneous positron emission accompanying the emergence of the next vacuum shell, due to total charge conservation. Finally, we also note that the methods developed for the vacuum energy evaluation in 2+1 D could, with minimal additions, be carried over to the three-dimensional case, where the vacuum energy is expected to behave like ~ -Z^4/R and so could be competitive with the classical electrostatic energy of the Coulomb source.
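In compact form, the supercritical behavior described above can be collected into one display (prefactors omitted; the 3+1 D case is the expected extension, as stated in the abstract):

```latex
\Delta E_{\mathrm{vac}} = -2\,mc^{2}\ \text{per level diving into the lower continuum},
\qquad
E_{\mathrm{vac}}(Z,R) \sim
\begin{cases}
-\,Z^{2}/R, & 1{+}1\,\mathrm{D},\\
-\,Z^{3}/R, & 2{+}1\,\mathrm{D},\\
-\,Z^{4}/R, & 3{+}1\,\mathrm{D}\ \text{(expected)}.
\end{cases}
```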

Keywords: non-perturbative QED-effects, one- and two-dimensional Dirac-Coulomb systems, supercritical fields, vacuum polarization

195 Non-Newtonian Fluid Flow Simulation for a Vertical Plate and a Square Cylinder Pair

Authors: Anamika Paul, Sudipto Sarkar

Abstract:

The flow behaviour of non-Newtonian fluids is quite complicated, although both the pseudoplastic (n < 1, n being the power index) and dilatant (n > 1) fluids in this category are used extensively in the chemical and process industries. Only limited research has been carried out on flow over a bluff body in a non-Newtonian flow environment. In the present numerical simulation, we control the vortices of a square cylinder by placing an upstream vertical splitter plate for pseudoplastic (n = 0.8), Newtonian (n = 1), and dilatant (n = 1.2) fluids. The position of the upstream plate is also varied to determine the critical distance between the plate and cylinder, below which the cylinder vortex shedding is suppressed. Here the Reynolds number is taken as Re = 150 (Re = U∞a/ν, where U∞ is the free-stream velocity of the flow, a is the side of the cylinder, and ν is the maximum value of the kinematic viscosity of the fluid), which falls in the laminar periodic vortex shedding regime. The vertical plate has dimensions of 0.5a × 0.05a and is placed at the cylinder centre-line. Gambit 2.2.30 is used to construct the flow domain and to impose the boundary conditions. In detail, we imposed a velocity inlet (u = U∞), a pressure outlet (Neumann condition), and symmetry (free-slip boundary condition) at the upper and lower domain boundaries. A wall boundary condition (u = v = 0) is imposed on both the cylinder and the splitter plate surfaces. The unsteady 2-D Navier-Stokes equations in fully conservative form are then discretized with second-order spatial and first-order temporal accuracy. These discretized equations are solved with Ansys Fluent 14.5, a finite-volume solver, using the SIMPLE algorithm. A fine mesh is used around the plate and cylinder; away from the cylinder, the grids are slowly stretched out in all directions.
Following a grid-independence study, a total of 297 × 208 grid points are used in the streamwise and flow-normal directions, respectively, for G/a = 3 (G being the gap between the plate and the cylinder). The computed mean flow quantities obtained for Newtonian flow agree well with the available literature. The results are depicted with the help of instantaneous and time-averaged flow fields. Noteworthy qualitative and quantitative differences are obtained in the flow field with changes in the rheology of the fluid. Also, the aerodynamic forces and vortex shedding frequencies differ with the gap-ratio and the power index of the fluid. We can conclude from the present simulation that Fluent is capable of capturing the vortex dynamics of the unsteady laminar flow regime even in a non-Newtonian flow environment.
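The power-law rheology and the Reynolds number definition used above can be sketched as follows; the consistency index and dimensional values are illustrative only, not parameters from this study:

```python
def power_law_viscosity(k, n, shear_rate):
    """Apparent viscosity of a power-law (Ostwald-de Waele) fluid:
    mu_app = k * gamma_dot**(n - 1); n < 1 pseudoplastic, n > 1 dilatant."""
    return k * shear_rate ** (n - 1.0)

def reynolds_number(u_inf, a, nu):
    """Re = U_inf * a / nu, as defined in the study."""
    return u_inf * a / nu

# Illustrative values: shear-thinning vs shear-thickening at the same shear rate
mu_thinning = power_law_viscosity(k=1.0, n=0.8, shear_rate=10.0)
mu_thickening = power_law_viscosity(k=1.0, n=1.2, shear_rate=10.0)
re = reynolds_number(u_inf=1.0, a=0.15, nu=0.001)  # placeholder dimensions
```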

Keywords: CFD, critical gap-ratio, splitter plate, wake-wake interactions, dilatant, pseudoplastic

194 Influencing Factors for Job Satisfaction and Turnover Intention of Surgical Team in the Operating Rooms

Authors: Shu Jiuan Chen, Shu Fen Wu, I. Ling Tsai, Chia Yu Chen, Yen Lin Liu, Chen-Fuh Lam

Abstract:

Background: Increased emotional stress in the workplace and depressed job satisfaction may significantly affect the turnover intention and career life of personnel. However, very limited studies have reported the factors influencing the turnover intention of surgical team members in the operating rooms, where extraordinary stress normally exists in this isolated medical care unit. Therefore, this study aimed to determine the environmental and personal characteristic factors that might be associated with job satisfaction and turnover intention in the non-physician staff who work in the operating rooms. Methods: This was a cross-sectional, descriptive study performed in a metropolitan teaching hospital in southern Taiwan between May 2017 and July 2017. A structured self-administered questionnaire, modified from the Practice Environment Scale of the Nursing Work Index (PES-NWI), the Occupational Stress Indicator-2 (OSI-2), and the Maslach Burnout Inventory (MBI) manual, was collected from the operating room nurses, nurse anesthetists, surgeon assistants, orderlies, and other non-physician staff. Numerical and categorical data were analyzed using the unpaired t-test and chi-square test, as appropriate (SPSS, version 20.0). Results: A total of 167 effective questionnaires were collected from 200 eligible non-physician personnel who worked in the operating room (response rate 83.5%). The overall satisfaction of all responders was 45.64 ± 7.17. In comparison to those who had more than 4 years of working experience in the operating rooms, the junior staff (≤ 4 years of experience) reported significantly higher satisfaction with the workplace environment and job contentment, as well as lower intention to quit (t = 6.325, P < 0.001). Among the different specialties of surgical team members, nurse anesthetists were associated with significantly lower levels of job satisfaction (P = 0.043) and intention to stay (χ² = 8.127, P < 0.05).
Multivariate regression analysis demonstrated that job title, seniority, working shifts, and job satisfaction are significant independent predictors of intention to quit. Conclusion: The results of this study highlight that greater work seniority (> 4 years of working experience) is associated with significantly lower job satisfaction, and senior staff are also more likely to leave their current job. Increased workload in supervising juniors without appropriate job compensation (such as promotion in job title and better work shifts) may precipitate their intention to quit. Since senior staff are usually the leaders and core members in the operating rooms, the retention of this fundamental manpower is essential to ensure the safety and efficacy of surgical interventions in the operating rooms.
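The chi-square statistic reported above (χ² = 8.127) comes from a contingency-table test; a minimal sketch for a 2×2 table follows, with purely illustrative counts (the study's raw table is not given) and without the Yates continuity correction:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]:
    chi2 = n * (a*d - b*c)**2 / ((a+b)*(c+d)*(a+c)*(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Illustrative counts only (not the study's data): stay/quit by specialty
stat = chi_square_2x2(30, 10, 15, 25)
```

With 1 degree of freedom, a statistic above 3.84 corresponds to P < 0.05, which is how a result such as χ² = 8.127 is read as significant.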

Keywords: surgical team, job satisfaction, resignation intention, operating room

193 Integrating Computer-Aided Manufacturing and Computer-Aided Design for Streamlined Carpentry Production in Ghana

Authors: Benson Tette, Thomas Mensah

Abstract:

As a developing country, Ghana has a high potential to harness the economic value of every industry. Two of the industries that produce below capacity are handicrafts (for instance, carpentry) and information technology (i.e., computer science). To boost production and maintain competitiveness, the carpentry sector in Ghana needs manufacturing procedures that are more effective and more affordable. This issue can be addressed using computer-aided manufacturing (CAM) technology, which automates the fabrication process and decreases the amount of time and labor needed to make wood goods. Yet, the integration of CAM in carpentry-related production is rarely explored. To streamline the manufacturing process, this research investigates the equipment and technology that are currently used in the Ghanaian carpentry sector for automated fabrication. The research looks at the various CAM technologies, such as Computer Numerical Control (CNC) routers, laser cutters, and plasma cutters, that are accessible to Ghanaian carpenters yet unexplored. We also investigate their potential to enhance the production process. To achieve the objective, 150 carpenters, 15 software engineers, and 10 policymakers were interviewed using structured questionnaires. The responses provided by the 175 respondents were processed to eliminate outliers, and omissions were corrected using multiple imputation techniques. The processed responses were analyzed through thematic analysis. The findings showed that the adaptation and integration of CAD software with CAM technologies would speed up the design-to-manufacturing process for carpenters. It must be noted that achieving such results entails, first, examining the capabilities of current CAD software and, then, determining what new functions and resources are required to improve the software's suitability for carpentry tasks.
Responses from both carpenters and computer scientists showed that it is highly practical and achievable to streamline the design-to-manufacturing process by modifying and combining CAD software with CAM technology. Making the carpentry-software integration program more useful for carpentry projects would necessitate investigating the capabilities of the current CAD software and identifying the additional features and tools required in the Ghanaian ecosystem. In conclusion, the Ghanaian carpentry sector has a chance to increase productivity and competitiveness through the integration of CAM technology with CAD software. Carpentry companies may lower labor costs and boost production capacity by automating the fabrication process, giving them a competitive advantage. This study offers representative, implementation-ready recommendations as well as important insights into the equipment and technologies available for automated fabrication in the Ghanaian carpentry sector.
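The survey-cleaning step mentioned above relied on multiple imputation; the sketch below is a deliberately simplified stand-in (random draws from observed responses, rather than a model-based procedure such as MICE), shown on made-up data:

```python
import random

def multiply_impute(values, m=5, seed=42):
    """Very simplified stand-in for multiple imputation: fill each None
    with a random draw from the observed values, producing m completed
    datasets whose analyses can later be pooled."""
    observed = [v for v in values if v is not None]
    rng = random.Random(seed)  # fixed seed for reproducibility
    completed = []
    for _ in range(m):
        completed.append([v if v is not None else rng.choice(observed)
                          for v in values])
    return completed

# Made-up Likert-style responses with two omissions
datasets = multiply_impute([4, None, 5, 3, None, 4], m=3)
```

In a real analysis, each completed dataset is analyzed separately and the estimates are pooled, so that the uncertainty introduced by the missing responses is reflected in the results.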

Keywords: carpentry, computer-aided manufacturing (CAM), Ghana, information technology (IT)

192 Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enables individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and the production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task, since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expressions in genetic circuit designs. The main reason for using transformers is their innate ability (the attention mechanism) to take into account the semantic context present in long DNA chains, which is heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, greatly suffer from vanishing gradients and low efficiency, as they sequentially process past states and compress contextual information into a bottleneck when input sequences are long.
In other words, these architectures are not equipped with the attention mechanisms necessary to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations of previous approaches, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with its attention mechanism. In a previous work on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machines, and Artificial Neural Networks, were able to achieve reasonably high R² levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier does not depend on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
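The attention mechanism credited above with capturing long-range context is, at its core, scaled dot-product attention; a dependency-free sketch follows (toy vectors, not DNA token embeddings):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over lists of vectors:
    each query attends to all keys, and the output is the
    attention-weighted average of the values."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out
```

Unlike an RNN's sequential bottleneck, every position here reads from every other position in one step, which is what makes long-range dependencies between distant genes tractable.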

Keywords: transformers, generative AI, gene expression design, classification

191 Destination Management Organization in the Digital Era: A Data Framework to Leverage Collective Intelligence

Authors: Alfredo Fortunato, Carmelofrancesco Origlia, Sara Laurita, Rossella Nicoletti

Abstract:

In the post-pandemic recovery phase of tourism, the role of a Destination Management Organization (DMO) as a coordinated management system of all the elements that make up a destination (attractions, access, marketing, human resources, brand, pricing, etc.) is also becoming relevant for local territories. The objective of a DMO is to maximize the visitor's perception of value and quality while ensuring the competitiveness and sustainability of the destination, as well as the long-term preservation of its natural and cultural assets, and to catalyze benefits for the local economy and residents. In carrying out the multiple functions to which it is called, the DMO can leverage a collective intelligence that comes from the ability to pool the information, explicit and tacit knowledge, and relationships of the various stakeholders: policymakers, public managers and officials, entrepreneurs in the tourism supply chain, researchers, data journalists, schools, associations and committees, citizens, etc. The DMO potentially has at its disposal large volumes of data, much of it available at low cost, which need to be properly processed to produce value. Based on these assumptions, the paper presents a conceptual framework, tested in an area of southern Italy, for building an information system to support the DMO in the intelligent management of a tourist destination.
The approach adopted is data-informed and consists of four phases: (1) formulation of the knowledge problem (analysis of policy documents and industry reports; focus groups and co-design with stakeholders; definition of information needs and key questions); (2) research and metadata annotation of relevant sources (reconnaissance of official sources, administrative archives, and internal DMO sources); (3) gap analysis and identification of unconventional information sources (evaluation of traditional sources with respect to their consistency with information needs, the freshness of information, and the granularity of data; enrichment of the information base by identifying and studying web sources such as Wikipedia, Google Trends, Booking.com, Tripadvisor, websites of accommodation facilities, and online newspapers); (4) definition of the set of indicators and construction of the information base (specific definition of indicators and procedures for data acquisition, transformation, and analysis). The derived framework consists of 6 thematic areas (accommodation supply, cultural heritage, flows, value, sustainability, and enabling factors), each of which is divided into three domains that gather a specific information need, represented by a scheme of questions to be answered through the analysis of available indicators. The framework is characterized by a high degree of flexibility in the European context, given that it can be customized for each destination by adapting the part related to internal sources.
Application to the case study led to the creation of a decision support system that allows: (1) integration of data from heterogeneous sources, including through automated web crawling procedures for the ingestion of social and web information; (2) reading and interpretation of data and metadata through guided navigation paths in the style of digital storytelling; and (3) complex analyses using data mining algorithms, such as the prediction of tourist flows.
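The tourist-flow prediction mentioned above can be illustrated with a minimal baseline sketch; the monthly arrivals series, its seasonal pattern, and the seasonal-naive method used here are all illustrative assumptions, not the DMO's actual data or algorithm.

```python
import numpy as np

# Hypothetical monthly tourist arrivals (thousands) for a destination,
# repeating one seasonal pattern over three years -- illustrative values only.
arrivals = np.array([120, 135, 160, 210, 280, 350,
                     420, 430, 310, 220, 150, 130] * 3, dtype=float)

def seasonal_naive_forecast(series, season=12, horizon=12):
    """Forecast each future month with the value observed one season earlier."""
    return series[-season:][:horizon]

# A simple baseline against which more elaborate data mining models
# (e.g. those ingesting web-source indicators) can be compared.
forecast = seasonal_naive_forecast(arrivals)
print(forecast)
```

Any production model would replace this baseline with features drawn from the framework's indicators (web searches, booking data, etc.), but the evaluation logic stays the same.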

Keywords: collective intelligence, data framework, destination management, smart tourism

Procedia PDF Downloads 122
190 Principles for the Realistic Determination of the in-situ Concrete Compressive Strength under Consideration of Rearrangement Effects

Authors: Rabea Sefrin, Christian Glock, Juergen Schnell

Abstract:

The preservation of existing structures is of great economic interest because it contributes to higher sustainability and resource conservation. In existing buildings, in addition to repair and maintenance, modernization or reconstruction works often take place in the course of adjustments or changes in use. Since the structural framework and the associated load level usually change in the course of such structural measures, the stability of the structure must be verified in accordance with the currently valid regulations. The compressive strength of the existing structure's concrete, and the mechanical parameters derived from it, are of central importance for recalculation and verification. However, the compressive strength of the existing concrete is usually set comparatively low and is thus underestimated. The reasons for this are the small sample sizes and the large scatter of the material properties of the drill cores used for the experimental determination of the design value of the compressive strength. Within a structural component, the load is usually transferred through the areas of higher stiffness and, consequently, higher compressive strength. Existing strength variations within a component therefore play only a subordinate role, due to these rearrangement effects. This paper deals with the experimental and numerical determination of such rearrangement effects in order to calculate the concrete compressive strength of existing structures more realistically and economically. The influence of individual parameters, such as the specimen geometry (prism or cylinder) or the coefficient of variation of the concrete compressive strength, is analyzed in small-scale experimental tests. The coefficients of variation commonly encountered in practice are reproduced by dividing the test specimens into several layers consisting of different concretes, which are monolithically connected to each other. 
From each combination, a sufficient number of test specimens is produced and tested to enable evaluation on a statistical basis. Based on the experimental tests, FE simulations are carried out to validate the test results. In a subsequent parameter study, a large number of combinations not yet investigated in the experimental tests is considered. In this way, the influence of individual parameters on the size and character of the rearrangement effect is determined and described in more detail. Based on the parameter study and the experimental results, a calculation model for a more realistic determination of the in-situ concrete compressive strength is developed and presented. By considering rearrangement effects in concrete during recalculation, a higher number of existing structures can be maintained without structural measures. The preservation of existing structures is not only decisive from an economic, sustainability, and resource-conservation point of view but also represents an added value for cultural and social aspects.
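The rearrangement effect described above can be illustrated with a simple Monte Carlo sketch: if the load redistributes over several parallel zones of differing strength, the component capacity scatters far less than a single drill core does. All numbers below (mean strength, coefficient of variation, number of zones) are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

mean_fc = 30.0   # assumed mean compressive strength, MPa (illustrative)
cov = 0.25       # assumed coefficient of variation of the in-situ concrete
n_zones = 10     # parallel load-bearing zones within one component
n_sim = 20_000   # simulated components

# Strengths of the individual zones within each simulated component.
strengths = rng.normal(mean_fc, cov * mean_fc, size=(n_sim, n_zones))

# A single drill core probes one zone: its 5% fractile gives the usual
# (conservative) characteristic value.
core_characteristic = np.quantile(strengths[:, 0], 0.05)

# With full load rearrangement the component capacity approaches the mean
# over its zones, whose 5% fractile is markedly higher.
component_capacity = strengths.mean(axis=1)
rearranged_characteristic = np.quantile(component_capacity, 0.05)

print(core_characteristic, rearranged_characteristic)
```

The gap between the two fractiles is the kind of reserve the paper's calculation model aims to quantify rigorously; the real model is of course calibrated on layered specimens and FE simulations rather than on an idealized averaging assumption.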

Keywords: existing structures, in-situ concrete compressive strength, rearrangement effects, recalculation

Procedia PDF Downloads 120
189 Developing Early Intervention Tools: Predicting Academic Dishonesty in University Students Using Psychological Traits and Machine Learning

Authors: Pinzhe Zhao

Abstract:

This study focuses on predicting university students' cheating tendencies using psychological traits and machine learning techniques. Academic dishonesty is a significant issue that compromises the integrity and fairness of educational institutions. While much research has been dedicated to detecting cheating behaviors after they have occurred, there is limited work on predicting such tendencies before they manifest. The aim of this research is to develop a model that can identify students who are at higher risk of engaging in academic misconduct, allowing for earlier interventions to prevent such behavior. Psychological factors are known to influence students' likelihood of cheating. Research shows that traits such as test anxiety, moral reasoning, self-efficacy, and achievement motivation are strongly linked to academic dishonesty. High levels of anxiety may lead students to cheat as a way to cope with pressure. Those with lower self-efficacy are less confident in their academic abilities, which can push them toward dishonest behaviors to secure better outcomes. Students with weaker moral judgment may also justify cheating more easily, believing it to be less wrong under certain conditions. Achievement motivation also plays a role, as students driven primarily by external rewards, such as grades, are more likely to cheat compared to those motivated by intrinsic learning goals. In this study, data on students’ psychological traits is collected through validated assessments, including scales for anxiety, moral reasoning, self-efficacy, and motivation. Additional data on academic performance, attendance, and engagement in class are also gathered to create a more comprehensive profile. Using machine learning algorithms such as Random Forest, Support Vector Machines (SVM), and Long Short-Term Memory (LSTM) networks, the research builds models that can predict students’ cheating tendencies. 
These models are trained and evaluated using metrics such as accuracy, precision, recall, and F1 score to ensure they provide reliable predictions. The findings demonstrate that combining psychological traits with machine learning provides a powerful method for identifying students at risk of cheating. This approach allows for early detection and intervention, enabling educational institutions to take proactive steps in promoting academic integrity. The predictive model can be used to inform targeted interventions, such as counseling for students with high test anxiety or workshops aimed at strengthening moral reasoning. By addressing the underlying factors that contribute to cheating behavior, educational institutions can reduce the occurrence of academic dishonesty and foster a culture of integrity. In conclusion, this research contributes to the growing body of literature on predictive analytics in education. It offers a novel approach by integrating psychological assessments with machine learning to predict cheating tendencies. This method has the potential to significantly improve how academic institutions address academic dishonesty, shifting the focus from punishment after the fact to prevention before it occurs. By identifying high-risk students and providing them with the necessary support, educators can help maintain the fairness and integrity of the academic environment.
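A minimal sketch of the modeling step described above, using scikit-learn's Random Forest on synthetic data in place of the study's validated psychological assessments; the feature count and class balance are assumptions made for illustration only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the psychological-trait features (e.g. test anxiety,
# moral reasoning, self-efficacy, motivation scores) plus engagement data;
# the imbalance reflects that most students do not cheat.
X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           weights=[0.8, 0.2], random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

# The four evaluation metrics named in the abstract.
metrics = {
    "accuracy": accuracy_score(y_test, pred),
    "precision": precision_score(y_test, pred),
    "recall": recall_score(y_test, pred),
    "f1": f1_score(y_test, pred),
}
print(metrics)
```

For imbalanced outcomes like cheating, precision and recall matter more than raw accuracy, which is why the abstract lists all four; an SVM or LSTM would slot into the same train/evaluate scaffold.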

Keywords: academic dishonesty, cheating prediction, intervention strategies, machine learning, psychological traits, academic integrity

Procedia PDF Downloads 23
188 Growth Performance and Intestinal Morphology of Isa Brown Pullet Chicks Fed Diets Containing Turmeric and Clove

Authors: Ayoola Doris Ayodele, Grace Oluwatoyin Tayo, Martha Dupe Olumide, Opeyemi Arinola Ajayi, Ayodeji Taofeek Ayo-Bello

Abstract:

Antibiotics have been widely used in animal nutrition worldwide for many decades to improve growth performance and health. However, despite their widespread use, there are rising concerns about the negative impact of dependence on antibiotic growth promoters (AGP) to improve animal performance. The need to improve performance in poultry production creates demand for natural alternatives. Phytogenic feed additives (PFA) are plant-derived natural bioactive compounds that can be incorporated into animal feed to enhance livestock productivity. The effects of turmeric, clove, and a turmeric + clove combination as feed additives were evaluated on the performance and intestinal morphology of egg-type chickens. A total of 504 fifteen-day-old Isa Brown chicks were weighed and randomly allocated to nine dietary treatments in a 3 x 3 factorial arrangement (test ingredient x inclusion level) in a completely randomized design, with four replicates of 14 birds each. The birds were fed a chick starter diet (2800 kcal/kg ME; 20.8% CP). The dietary treatments were: Group 1 (T1, basal diet with 0% turmeric inclusion; T2, 1% turmeric; T3, 2% turmeric); Group 2 (T4, basal diet with 0% clove inclusion; T5, 1% clove; T6, 2% clove); and Group 3, a turmeric + clove combination in a 1:1 ratio weight for weight (T7, basal diet with 0% turmeric + 0% clove; T8, 0.5% turmeric + 0.5% clove; T9, 1% turmeric + 1% clove). Performance parameters were evaluated throughout the experiment, which spanned day 15 to day 56. Data were analyzed using analysis of variance (ANOVA) followed by Duncan's Multiple Range Test, with significance set at P ≤ 0.05. No significant differences (P > 0.05) were observed in final body weight, weight gain, feed intake, or FCR across the treatments. 
However, birds fed the test ingredients showed numerically higher final body weights and weight gains than the birds without additives. Birds on T8 had the highest final body weight (617.33 g), while the control treatments had the lowest values (T1, 588 g; T4, 572 g; T7, 584 g). At day 56, intestinal samples were taken from the jejunum and ileum to evaluate villus height, crypt depth, and the villus height : crypt depth ratio. The addition of turmeric, clove, and turmeric + clove to the diet produced a significant (P < 0.05) effect on the jejunum and ileum of the birds. Turmeric and clove can therefore be used as feed additives for pullets, as they have a positive effect on the growth performance and intestinal morphology of pullet chicks.
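The ANOVA step described above can be sketched as follows; the per-bird values are simulated around the reported group means, so this illustrates the test itself, not a re-analysis of the study's data (Duncan's Multiple Range Test is not available in SciPy, so only the one-way ANOVA stage is shown).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical final body weights (g) for three treatments: the means echo
# the reported control (T1 ~ 588 g) and best additive group (T8 ~ 617 g),
# but the individual bird values and the 40 g spread are simulated.
t1 = rng.normal(588, 40, 14)  # 14 birds per replicate
t8 = rng.normal(617, 40, 14)
t9 = rng.normal(600, 40, 14)

# One-way ANOVA: are the group means significantly different?
f_stat, p_value = stats.f_oneway(t1, t8, t9)
print(f_stat, p_value)
```

With a P value above 0.05 the means would be declared not significantly different, matching the "numerically higher but non-significant" pattern the abstract reports for body weight; a post-hoc multiple-range test would then only be run where the ANOVA is significant, as for the gut morphology traits.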

Keywords: clove, intestinal morphology, isa brown chicks, performance, turmeric

Procedia PDF Downloads 161
187 Computational Analysis of Thermal Degradation in Wind Turbine Spars' Equipotential Bonding Subjected to Lightning Strikes

Authors: Antonio A. M. Laudani, Igor O. Golosnoy, Ole T. Thomsen

Abstract:

Rotor blades of large, modern wind turbines are highly susceptible to downward lightning strikes, as well as to triggering upward lightning; consequently, it is necessary to equip them with an effective lightning protection system (LPS) in order to avoid any damage. The performance of existing LPSs is affected by carbon fibre reinforced polymer (CFRP) structures, which lead to lightning-induced damage in the blades, e.g. via electrical sparks. A solution to prevent internal arcing would be to electrically bond the LPS and the composite structures so that they are at the same electric potential. Nevertheless, elevated temperatures are reached at the joint interfaces because of the high contact resistance, which melts and vaporises some of the epoxy resin matrix around the bond. The resulting high-pressure gases can open up the bond and ignite thermal sparks. The objective of this paper is to predict the current density distribution and the temperature field in the adhesive joint cross-section, in order to check whether the resin pyrolysis temperature is reached and any damage is to be expected. The finite element method has been employed to solve both the current and heat transfer problems, which are considered weakly coupled. The mathematical model for the electric current comprises the Maxwell-Ampère equation for the induced electric field, solved together with current conservation, while the thermal field is found from the heat diffusion equation. In this way, the current sub-model calculates the Joule heat release for a chosen bonding configuration, whereas the thermal analysis makes it possible to determine threshold values of voltage and current density that must not be exceeded in order to keep the temperature across the joint below the pyrolysis temperature, thereby preventing the occurrence of outgassing. In addition, it provides an indication of the minimal number of bonding points. 
It is worth mentioning that the numerical procedures presented in this study can be tailored and applied to joints other than the adhesive ones of wind turbine blades. For instance, they can be applied to the lightning protection of aerospace bolted joints. They can even be customized to predict the electromagnetic response under lightning strikes of other wind turbine systems, such as nacelle and hub components.
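As a rough illustration of why the number of bonding points matters, the Joule heating at a contact resistance can be estimated with a zero-dimensional adiabatic sketch. Every parameter value below is an assumed order of magnitude, not data from the study, and the full FE model resolves the diffusion and geometry that this sketch deliberately ignores.

```python
# Adiabatic estimate of the temperature at a bonding joint during one
# lightning stroke; all values are illustrative assumptions.
I_peak = 100e3        # peak stroke current, A (severe-stroke order of magnitude)
duration = 350e-6     # effective stroke duration, s
R_contact = 1e-3      # contact resistance of one bonding point, ohm
joint_volume = 1e-6   # heated adhesive volume per joint, m^3
rho = 1200.0          # epoxy density, kg/m^3
cp = 1100.0           # epoxy specific heat, J/(kg K)
T_ambient = 20.0      # deg C
T_pyrolysis = 300.0   # assumed resin pyrolysis threshold, deg C

def joint_temperature(n_bonds):
    """Share the current over n bonds; Joule heat -> adiabatic temperature."""
    I_joint = I_peak / n_bonds
    energy = I_joint**2 * R_contact * duration   # J deposited per joint
    dT = energy / (rho * cp * joint_volume)      # temperature rise, K
    return T_ambient + dT

for n in (1, 2, 4, 8):
    T = joint_temperature(n)
    print(n, round(T, 1), "pyrolysis!" if T > T_pyrolysis else "ok")
```

Because the deposited energy scales with the square of the per-joint current, doubling the number of bonding points quarters the heat per joint, which is the mechanism behind the "minimal number of bonding points" result the thermal analysis provides.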

Keywords: carbon fibre reinforced polymer, equipotential bonding, finite element method, FEM, lightning protection system, LPS, wind turbine blades

Procedia PDF Downloads 164
186 Enhancing Seismic Resilience in Urban Environments

Authors: Beatriz González-rodrigo, Diego Hidalgo-leiva, Omar Flores, Claudia Germoso, Maribel Jiménez-martínez, Laura Navas-sánchez, Belén Orta, Nicola Tarque, Orlando Hernández- Rubio, Miguel Marchamalo, Juan Gregorio Rejas, Belén Benito-oterino

Abstract:

Cities facing seismic hazard require detailed risk assessments for effective urban planning and vulnerability identification, ensuring the safety and sustainability of urban infrastructure. Comprehensive studies involving seismic hazard, vulnerability, and exposure evaluations are pivotal for estimating potential losses and guiding proactive measures against seismic events. However, broad-scale traditional risk studies limit the consideration of specific local threats and the identification of vulnerable housing within a structural typology. Achieving precise results at the neighbourhood level demands higher-resolution seismic hazard, exposure, and vulnerability studies. This research aims to bolster sustainability and safety against seismic disasters in three Central American and Caribbean capitals. It integrates geospatial techniques and artificial intelligence into seismic risk studies, proposing cost-effective methods for exposure data collection and damage prediction. The methodology relies on prior seismic hazard studies in pilot zones, utilizing existing exposure and vulnerability data in the region. An emphasis on detailed building attributes enables the consideration of behaviour modifiers affecting seismic response. The approach aims to generate detailed risk scenarios, facilitating the prioritization of preventive actions before, during, and after seismic events and enhancing decision-making certainty. Detailed risk scenarios necessitate substantial investment in fieldwork, training, research, and methodology development. Regional cooperation becomes crucial given the similar seismic threats, urban planning, and construction systems of the countries involved. The outcomes hold significance for emergency planning and for national and regional construction regulations. The success of this methodology depends on cooperation, investment, and innovative approaches, offering insights and lessons applicable to regions facing moderate seismic threats with vulnerable constructions. 
Thus, this framework aims to fortify resilience in seismic-prone areas and serves as a reference for global urban planning and disaster management strategies. In conclusion, this research proposes a comprehensive framework for seismic risk assessment in high-risk urban areas, emphasizing detailed studies at finer resolutions for precise vulnerability evaluations. The approach integrates regional cooperation, geospatial technologies, and adaptive fragility curve adjustments to enhance risk assessment accuracy, guiding effective mitigation strategies and emergency management plans.

Keywords: assessment, behaviour modifiers, emergency management, mitigation strategies, resilience, vulnerability

Procedia PDF Downloads 69
185 Finite Element Analysis of the Drive Shaft and Jacking Frame Interaction in Micro-Tunneling Method: Case Study of Tehran Sewerage

Authors: B. Mohammadi, A. Riazati, P. Soltan Sanjari, S. Azimbeik

Abstract:

The ever-increasing growth of civic demands on the one hand, and the urban constraints on establishing new infrastructure on the other, force engineering committees to apply non-conflicting methods in order to optimize the results. One of these optimized procedures for establishing main sewerage networks is the pipe jacking and micro-tunneling method. The raw information and research are based on the slurry micro-tunneling project of the Tehran main sewerage network, executed by the KAYSON Co. The 4985-meter route of the project, located near Azadi Square and some of the most vital arteries of Tehran, has currently reached 45% physical progress. The boring machine is made by Herrenknecht, and the concrete-polymer pipes used have diameters of 1600 and 1800 millimeters. Placing and excavating several shafts in the ground, and boring the tunnel directly between the axes of these shafts, is one of the requirements of micro-tunneling. The arrangement of the shafts must take into account the hydraulic circumstances, civic conditions, site geography, traffic constraints, etc. The alignment has to be converted into many shortened segment lines, so that the angles generated between segments are based at the manhole centers. Each segment line between two consecutive drive and reception shafts determines the jack location, driving angle, and path straightness; the diversity of the resulting angles thus causes a variety of jack positions in the shaft. The fixing conditions of the jacking frame and the direction of its associated dynamic load produce various patterns of stress and strain distribution, creating fatigue in the shaft wall and in the soil surrounding the shaft. This diversity of patterns leads to deformation of the shaft wall, unbalanced subsidence, and alteration of the pipe jacking stress contour. 
This research is based on the experience of Tehran's west sewerage plan and on a numerical analysis of the interaction between the soil around the shaft, the shaft walls, and the jacking frame direction; finally, the suitability or unsuitability of the pipe jacking shaft location is determined.

Keywords: underground structure, micro-tunneling, fatigue analysis, dynamic-soil–structure interaction, underground water, finite element analysis

Procedia PDF Downloads 320
184 A Computational Framework for Load Mediated Patellar Ligaments Damage at the Tropocollagen Level

Authors: Fadi Al Khatib, Raouf Mbarki, Malek Adouni

Abstract:

In various sport and recreational activities, the patellofemoral joint undergoes large forces and moments while accommodating the significant movement of the knee joint. In doing so, this joint is commonly the source of anterior knee pain related to instability in normal patellar tracking and excessive pressure syndrome. One well-observed explanation of the instability of normal patellar tracking is damage to the patellofemoral ligaments and patellar tendon. Improved knowledge of the damage mechanism mediating ligament and tendon injuries can greatly help not only in rehabilitation and prevention procedures but also in the design of better reconstruction systems in the management of knee joint disorders. This damage mechanism, specifically under excessive mechanical loading, has been linked to the micro level of the fibred structure, precisely to the tropocollagen molecules and their connection density. We argue that defining a clear framework from the bottom (micro level) up (macro level) in the hierarchy of the soft tissue may elucidate the essential factors underpinning the state of ligament damage. To do so, in this study a multiscale fibril-reinforced hyper-elastoplastic finite element model that accounts for the synergy between molecular and continuum syntheses was developed to determine the short-term stress/strain response of the patellofemoral ligaments and tendon. The plasticity of the proposed model is associated only with the uniaxial deformation of the collagen fibril. The yield strength of the fibril is a function of the cross-link density between tropocollagen molecules, defined here by a density function. This function is obtained through a coarse-graining procedure linking nanoscale collagen features to tissue-level material properties using molecular dynamics simulations. The hierarchies of the soft tissues were implemented using the rule of mixtures. Thereafter, the model was calibrated using a statistical calibration procedure. 
The model was then applied to a real structure of the patellofemoral ligaments and patellar tendon (OpenKnee) and simulated under realistic loading conditions. With the calibrated material parameters, the calculated axial stress agrees well with the experimental measurements, with a coefficient of determination (R²) equal to 0.91 and 0.92 for the patellofemoral ligaments and the patellar tendon, respectively. The best prediction of the yield strength and strain, as compared with the reported experimental data, was obtained when the cross-link density between the tropocollagen molecules of the fibril was equal to 5.5 ± 0.5 (patellofemoral ligaments) and 12 (patellar tendon). Damage initiation in the patellofemoral ligaments was located at the femoral insertions, while damage in the patellar tendon occurred in the middle of the structure. These predicted findings show a meaningful correlation between the cross-link density of the tropocollagen molecules and the stiffness of the connective tissues of the extensor mechanism. Damage initiation and propagation were also documented with this model, in satisfactory agreement with earlier observations. To the best of our knowledge, this is the first attempt to model ligaments from the bottom up, with damage predicted as a function of the tropocollagen cross-link density. This approach appears more meaningful for the realistic simulation of a damage process or repair attempt than certain published studies.

Keywords: tropocollagen, multiscale model, fibrils, knee ligaments

Procedia PDF Downloads 129
183 Wind Direction and Its Linkage with Vibrio cholerae Dissemination

Authors: Shlomit Paz, Meir Broza

Abstract:

Cholera is an acute intestinal infection caused by ingestion of food or water contaminated with the bacterium Vibrio cholerae. It has a short incubation period and produces an enterotoxin that causes copious, painless, watery diarrhoea that can quickly lead to severe dehydration and death if treatment is not promptly given. In an epidemic, the source of the contamination is usually the feces of an infected person. The disease can spread rapidly in areas with poor treatment of sewage and drinking water. Cholera remains a global threat and is one of the key indicators of social development. An estimated 3-5 million cases and over 100,000 deaths occur each year around the world. The relevance of climatic events as causative factors for cholera epidemics is well known. However, the examination of the involvement of winds in intra-continental disease distribution is new. The study explores the hypothesis that the spreading of cholera epidemics may be related to the dominant wind direction over land by presenting the influence of the wind direction on windborne dissemination by flying insects, which may serve as vectors. Chironomids ("non-biting midges") exist in the majority of freshwater aquatic habitats, especially in the estuarine and organic-rich water bodies typical of Vibrio cholerae. Chironomid adults emerge into the air for mating and dispersion. They are highly mobile, huge in number, and found frequently in the air at various elevations. The huge number of chironomid egg masses attached to hard substrate on the water surface serves as a reservoir for the free-living Vibrio bacteria. Both males and females, while emerging from the water, may carry the cholera bacteria. In an experimental simulation, it was demonstrated that the cholera-bearing adult midges are carried by the wind and transmit the bacteria from one body of water to another. 
In our previous study, the geographic diffusion of three cholera outbreaks was examined through its linkage with the wind direction: a) the progress of Vibrio cholerae O1 biotype El Tor in Africa during 1970–1971 and b) again in 2005–2006; and c) the rapid spread of Vibrio cholerae O139 over India during 1992–1993. Using data and maps of cholera dissemination (WHO database) and mean monthly SLP and geopotential data (NOAA NCEP-NCAR database), analyses of air pressure data at sea level and at several altitudes over Africa, India, and Bangladesh show a correspondence between the dominant wind direction and the intra-continental spread of cholera. The results support the hypothesis that aeroplankton (the tiny life forms that float in the air and that may be caught and carried upward by the wind, landing far from their origin) carry the cholera bacteria from one body of water to an adjacent one. In addition to these findings, the current follow-up study will present new results regarding the possible involvement of winds in the spreading of cholera in recent outbreaks (2010-2013). The findings may improve the understanding of how climatic factors are involved in the rapid distribution of new strains throughout a vast continental area. Awareness of the aerial transfer of Vibrio cholerae may assist health authorities by improving the prediction of the disease's geographic dissemination.

Keywords: cholera, Vibrio cholerae, wind direction, Vibrio cholerae dissemination

Procedia PDF Downloads 367
182 Subway Ridership Estimation at a Station-Level: Focus on the Impact of Bus Demand, Commercial Business Characteristics and Network Topology

Authors: Jungyeol Hong, Dongjoo Park

Abstract:

The primary purpose of this study is to develop a methodological framework to predict daily subway ridership at a station level and to examine the association between subway ridership and bus demand, incorporating the commercial business facilities in the vicinity of each subway station. Socio-economic characteristics, land use, and the built environment may all have an impact on subway ridership. However, not only the endogenous relationship between bus and subway demand should be considered, but also the characteristics of commercial businesses within a subway station's sphere of influence and the integrated transit network topology. A statistical approach to estimating subway ridership at a station level should therefore account for the endogeneity and heteroscedasticity issues that may arise in the prediction model. This study focused both on discovering the impacts of bus demand, commercial business characteristics, and network topology on subway ridership and on developing a more precise subway ridership estimation that accounts for this statistical bias. The spatial scope of the study covers the entire city of Seoul in South Korea and includes 243 stations, with the temporal scope set at twenty-four hours in one-hour panels. The data for subway and bus ridership were collected from Seoul Smart Card data for 2015 and 2016. A Three-Stage Least Squares (3SLS) approach was applied to develop the daily subway ridership model, capturing the endogeneity and heteroscedasticity between bus and subway demand. The independent variables incorporated in the modeling process were commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. 
As a result, it was found that bus ridership and subway ridership are endogenous to each other and have significantly positive coefficients, meaning that one transit mode can increase the other mode's ridership. In other words, subway and bus have a mutual rather than a competitive relationship. The commercial business characteristics are the most critical dimension among the independent variables. The commercial business facility rate variables in the paper comprise six types: medical, educational, recreational, financial, food service, and shopping. The model results show that higher rates of medical, financial, shopping, and food service facilities lead to an increase in subway ridership at a station, while recreational and educational facilities are associated with lower subway ridership. Complex network theory was applied to estimate integrated network topology measures covering the entire Seoul transit network system and to provide a framework for assessing their impact on subway ridership. The centrality measures were found to be significant and showed a positive sign, indicating that higher centrality leads to more subway ridership at a station. Out-of-sample accuracy tests showed that the 3SLS model has a lower mean squared error than OLS, supporting the 3SLS approach as a plausible way to estimate subway ridership more accurately. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (2017R1C1B2010175).
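The endogeneity problem that the 3SLS estimator addresses can be illustrated with a simplified two-stage least squares sketch on simulated data; the variables, the instrument, and all coefficients below are invented for illustration, and the study's actual model is a full 3SLS system rather than this single-equation 2SLS.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000

# Simulated station-level data: bus and subway ridership share a common
# unobserved shock u, so OLS on the subway equation is biased. An
# instrument z (something shifting bus demand only) identifies the effect.
z = rng.normal(size=n)                       # instrument
u = rng.normal(size=n)                       # common shock -> endogeneity
bus = 1.0 + 2.0 * z + u + rng.normal(size=n)
subway = 0.5 + 0.8 * bus - 1.5 * u + rng.normal(size=n)  # true effect: 0.8

def ols(y, X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS: the common shock biases the bus coefficient downward.
beta_ols = ols(subway, np.column_stack([np.ones(n), bus]))[1]

# Stage 1: project the endogenous regressor on the instrument.
Z = np.column_stack([np.ones(n), z])
bus_hat = Z @ ols(bus, Z)
# Stage 2: regress the outcome on the fitted values.
beta_2sls = ols(subway, np.column_stack([np.ones(n), bus_hat]))[1]

print(round(beta_ols, 2), round(beta_2sls, 2))
```

3SLS adds a third stage that exploits the cross-equation error covariance of the bus and subway equations, which is also where the heteroscedasticity correction mentioned in the abstract enters.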

Keywords: subway ridership, bus ridership, commercial business characteristic, endogeneity, network topology

Procedia PDF Downloads 145
181 Prismatic Bifurcation Study of a Functionally Graded Dielectric Elastomeric Tube Using Linearized Incremental Theory of Deformations

Authors: Sanjeet Patra, Soham Roychowdhury

Abstract:

In recent times, functionally graded dielectric elastomers (FGDE) have gained significant attention within the realm of soft actuation due to their dual capacity to exert highly localized stresses while maintaining compliant characteristics under electro-mechanical loading. Nevertheless, the full potential of dielectric elastomers (DE) has not been fully explored due to their susceptibility to instabilities when subjected to electro-mechanical loads. As a result, the study and analysis of such instabilities become crucial for the design and realization of dielectric actuators. Prismatic bifurcation is a type of instability that has been recognized in DE tubes. Though several studies have reported analyses of prismatic bifurcation in isotropic DE tubes, studies on the prismatic bifurcation of FGDE tubes remain insufficient. This paper therefore aims to determine the onset of prismatic bifurcations in an incompressible FGDE tube subjected to electrical loading across the thickness of the tube and to internal pressurization. The analysis has been conducted by imposing two axial boundary conditions on the tube: axially free ends and axially clamped ends. Additionally, the rigidity modulus of the tube has been linearly graded through the thickness, with the inner surface of the tube having a lower stiffness than the outer surface. The static equilibrium equations for the deformation of the axisymmetric tube are derived and solved using numerical techniques. The condition for prismatic bifurcation of the axisymmetric static equilibrium solutions has been obtained by using the linearized incremental constitutive equations. Two modes of bifurcation, corresponding to two different non-circular cross-sectional geometries, have been explored in this study. The outcomes reveal that the FGDE tube experiences prismatic bifurcation before the Hessian criterion of failure is satisfied. 
It is observed that the lower mode of bifurcation can be triggered at a lower critical voltage as compared to the higher mode of bifurcation. Furthermore, the tubes with larger stiffness gradient require higher critical voltages for triggering the bifurcation. Moreover, with the increase in stiffness gradient, a linear variation of the critical voltage is observed with the thickness of the tube. It has been found that on applying internal pressure to a tube with low thickness, the tube becomes less susceptible to bifurcations. A thicker tube with axially free end is found to be more stable than the axially clamped end tube at higher mode of bifurcation.
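For illustration, the linear through-thickness grading of the rigidity modulus described above can be sketched as follows; the functional form and all parameter names and values are hypothetical assumptions for a tube whose inner surface is softer than its outer surface, not values taken from the paper.

```python
def graded_modulus(r, r_in, r_out, mu_in, mu_out):
    """Rigidity modulus linearly graded through the tube thickness,
    softer at the inner surface (r_in) than at the outer surface (r_out).
    Functional form and parameters are illustrative assumptions."""
    xi = (r - r_in) / (r_out - r_in)  # normalized radial position in [0, 1]
    return mu_in + (mu_out - mu_in) * xi

# Hypothetical tube: inner radius 1.0, outer radius 2.0, moduli 0.1 and 0.5 MPa
mu_mid = graded_modulus(1.5, 1.0, 2.0, 0.1, 0.5)  # modulus at mid-thickness
```

A larger difference between `mu_out` and `mu_in` corresponds to the "larger stiffness gradient" cases discussed above.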

Keywords: critical voltage, functionally graded dielectric elastomer, linearized incremental approach, modulus of rigidity, prismatic bifurcation

Procedia PDF Downloads 80
180 A Data-Driven Optimal Control Model for the Dynamics of Monkeypox in a Variable Population with a Comprehensive Cost-Effectiveness Analysis

Authors: Martins Onyekwelu Onuorah, Jnr Dahiru Usman

Abstract:

Introduction: In the realm of public health, the threat posed by monkeypox continues to elicit concern, prompting rigorous studies to understand its dynamics and devise effective containment strategies. Particularly significant is its recurrence in variable populations, such as the observed outbreak in Nigeria in 2022. In light of this, our study undertakes a meticulous analysis, employing a data-driven approach to explore, validate, and propose optimized intervention strategies tailored to the distinct dynamics of monkeypox within varying demographic structures. Utilizing a deterministic mathematical model, we delved into the intricate dynamics of monkeypox, with a particular focus on a variable population context. Our qualitative analysis provided insights into the disease-free equilibrium, revealing its stability when R0 is less than one and discounting the possibility of backward bifurcation, as substantiated by the presence of a single stable endemic equilibrium. The model was rigorously validated using real-time data from Nigeria's recorded cases for epidemiological weeks 1-52 of 2022. Transitioning from qualitative to quantitative analysis, we augmented our deterministic model with optimal control, introducing three time-dependent interventions to scrutinize their efficacy and influence on the epidemic's trajectory. Numerical simulations unveiled a pronounced impact of the interventions, offering a data-supported blueprint for informed decision-making in containing the disease. A comprehensive cost-effectiveness analysis employing the Infection Averted Ratio (IAR), Average Cost-Effectiveness Ratio (ACER), and Incremental Cost-Effectiveness Ratio (ICER) facilitated a balanced evaluation of the interventions' economic and health impacts. In essence, our study epitomizes a holistic approach to understanding and mitigating monkeypox, intertwining rigorous mathematical modeling, empirical validation, and economic evaluation.
The insights derived not only bolster our comprehension of Monkeypox's intricate dynamics but also unveil optimized, cost-effective interventions. This integration of methodologies and findings underscores a pivotal stride towards aligning public health imperatives with economic sustainability, marking a significant contribution to global efforts in combating infectious diseases.
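For readers unfamiliar with the three cost-effectiveness measures named above, a minimal sketch of their standard textbook definitions follows; the strategy names, costs, and effect values are hypothetical, and the paper's own calculations are not reproduced here.

```python
def iar(infections_averted, recovered):
    """Infection Averted Ratio: infections averted per recovered individual."""
    return infections_averted / recovered

def acer(total_cost, infections_averted):
    """Average Cost-Effectiveness Ratio: total intervention cost per infection averted."""
    return total_cost / infections_averted

def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental Cost-Effectiveness Ratio of strategy B over A:
    additional cost per additional infection averted."""
    return (cost_b - cost_a) / (effect_b - effect_a)

# Two hypothetical strategies, ordered by effectiveness (infections averted)
c_a, e_a = 1000.0, 400.0   # strategy A: cost, infections averted
c_b, e_b = 1800.0, 650.0   # strategy B: cost, infections averted
acer_a = acer(c_a, e_a)              # cost units per infection averted under A
icer_ab = icer(c_a, e_a, c_b, e_b)   # extra cost per extra infection averted by B
```

In practice, strategies are ranked by effectiveness and ICERs are computed between consecutive non-dominated strategies.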

Keywords: monkeypox, equilibrium states, stability, bifurcation, optimal control, cost-effectiveness

Procedia PDF Downloads 88
179 Noninvasive Technique for Measurement of Heartbeat in Zebrafish Embryos Exposed to Electromagnetic Fields at 27 GHz

Authors: Sara Ignoto, Elena M. Scalisi, Carmen Sica, Martina Contino, Greta Ferruggia, Antonio Salvaggio, Santi C. Pavone, Gino Sorbello, Loreto Di Donato, Roberta Pecoraro, Maria V. Brundo

Abstract:

The new fifth generation technology (5G), which should favor high data-rate connections (1 Gbps) and latency lower than at present (<1 ms), has the characteristic of working on different frequency bands of the radio wave spectrum (700 MHz, 3.6-3.8 GHz and 26.5-27.5 GHz), thus also exploiting higher frequencies than previous mobile radio generations (1G-4G). The higher frequency waves, however, have a lower capacity to propagate in free space; therefore, in order to guarantee capillary coverage of the territory for high-reliability applications, it will be necessary to install a large number of repeaters. Following the introduction of this new technology, there has been growing concern in recent years about possible harmful effects on human health, and several studies using various animal models have been published. This study aimed to observe possible short-term effects induced by 5G millimeter waves on the heartbeat of early life stages of Danio rerio using DanioScope software (Noldus). DanioScope is a complete toolbox for measurements on zebrafish embryos and larvae. The effect of substances on the developing zebrafish embryo can be measured through a range of parameters: the earliest activity of the embryo's tail, activity of the developing heart, speed of blood flowing through the vein, and lengths and diameters of body parts. Activity measurements, cardiovascular data, blood flow data and morphometric parameters can be combined in one single tool. The obtained data are processed by the software and provided in both numerical and graphical form. The experiments were performed at 27 GHz with a non-commercial high-gain pyramidal horn antenna. According to OECD guidelines, exposure to 5G millimeter waves was tested by the fish embryo toxicity test within 96 hours post-fertilization. Observations were recorded every 24 h until the end of the short-term test (96 h).
The results showed an increase in heartbeat rate in exposed embryos at 48 hpf compared to the control group, but this increase was no longer observed at 72-96 hpf. Nowadays there is a scarcity of literature data on this topic, so these results could be useful for approaching new studies and for evaluating the potential cardiotoxic effects of mobile radiofrequencies.

Keywords: Danio rerio, DanioScope, cardiotoxicity, millimeter waves

Procedia PDF Downloads 168
177 Haematology and Reproductive Performance of Pubertal Rabbit Does Administered Crude Moringa oleifera (LAM.) Leaf Extract

Authors: Ewuola E. O., Sokunbi O. A., Oyedemi O. M., Sanni K. M.

Abstract:

Moringa oleifera leaf has traditionally been used in local medicine as an ingredient in herbal formulations serving as blood purifiers, cholesterol-reducing agents, and immune and reproductive enhancers. Twenty-four pubertal rabbit does were divided equally into four treatments, each replicated six times, to investigate the effect of crude Moringa oleifera leaf extract (CMOLE) on the haematology and reproductive performance of pubertal rabbit does. The extract was administered by gavage at a dose of 2.5 ml/kg body weight (BW) every 48 hours for 63 days. The animals on the control (T1) were administered water only. Rabbits on treatments 2, 3, and 4 were administered 100 ml CMOLE/L, 200 ml CMOLE/L, and 300 ml CMOLE/L, respectively. The does were placed on the extract two weeks before mating, five weeks after mating, and for another two weeks after kindling. Six proven untreated bucks were used for mating the twenty-four treated does; the bucks were randomly allotted such that each buck mated at least one treated doe in each treatment. The same management practices and experimental diets were given ad libitum to all animals. Blood was sampled from the gestating does at the third trimester for haematological analysis. The haematology results showed that rabbits treated with 100 ml CMOLE/L had a mean corpuscular volume of 93.38 fl, significantly (p < 0.05) higher than the control (82.24 fl) but not significantly different from T3 (200 ml CMOLE/L) and T4 (300 ml CMOLE/L), which had mean values of 91.69 fl and 91.49 fl, respectively.
The erythrocyte counts, leukocyte counts, haematocrit, haemoglobin concentration, mean corpuscular haemoglobin, mean corpuscular haemoglobin concentration, lymphocyte, neutrophil, monocyte, and eosinophil counts were not significantly different across the treatments. For platelets, animals on T2 (100 ml CMOLE/L) had the highest value of 148.80 × 10⁹/L, which was comparable to T3 (200 ml CMOLE/L) at 141.50 × 10⁹/L but significantly (p < 0.05) higher than T4 (300 ml CMOLE/L) at 135.00 × 10⁹/L and the control, which had the lowest mean value of 126.60 × 10⁹/L. The percentage conception rate of the treated animals was higher than that of the control group. The animals administered 300 ml CMOLE/L had the apparently highest litter size of 5.75, while gestation length and litter weight tended to decline with increasing CMOLE concentration. The investigation demonstrated the potential effect of crude Moringa oleifera leaf extract on pubertal rabbit does. Administration of up to 300 ml crude Moringa oleifera leaf extract per litre did not adversely affect, but rather improved, the haematological response and reproductive potential of gestating rabbit does.

Keywords: conception, haematology, moringa leaf extract, rabbit does

Procedia PDF Downloads 512
177 Numerical Investigation of the Effects of Surfactant Concentrations on the Dynamics of Liquid-Liquid Interfaces

Authors: Bamikole J. Adeyemi, Prashant Jadhawar, Lateef Akanji

Abstract:

Theoretically, there exist two mathematical interfaces (fluid-solid and fluid-fluid) when a liquid film is present on solid surfaces. These interfaces overlap if the mineral surface is oil-wet or mixed wet, and therefore, the effects of disjoining pressure are significant on both boundaries. Hence, dewetting is a necessary process that could detach oil from the mineral surface. However, if the thickness of the thin water film directly in contact with the surface is large enough, disjoining pressure can be thought to be zero at the liquid-liquid interface. Recent studies show that the integration of fluid-fluid interactions with fluid-rock interactions is an important step towards a holistic approach to understanding smart water effects. Experiments have shown that the brine solution can alter the micro forces at oil-water interfaces, and these ion-specific interactions lead to oil emulsion formation. The natural emulsifiers present in crude oil behave as polyelectrolytes when the oil interfaces with low salinity water. Wettability alteration caused by low salinity waterflooding during Enhanced Oil Recovery (EOR) process results from the activities of divalent ions. However, polyelectrolytes are said to lose their viscoelastic property with increasing cation concentrations. In this work, the influence of cation concentrations on the dynamics of viscoelastic liquid-liquid interfaces is numerically investigated. The resultant ion concentrations at the crude oil/brine interfaces were estimated using a surface complexation model. Subsequently, the ion concentration parameter is integrated into a mathematical model to describe its effects on the dynamics of a viscoelastic interfacial thin film. The film growth, stability, and rupture were measured after different time steps for three types of fluids (Newtonian, purely elastic and viscoelastic fluids). 
The interfacial films respond to exposure time in a similar manner, with an increasing growth rate that results in the formation of more droplets over time. Increased surfactant accumulation at the interface results in a higher film growth rate, which leads to instability and the subsequent formation of more satellite droplets. Purely elastic and viscoelastic properties limit the film growth rate and, consequently, the film instability compared to the Newtonian fluid. Therefore, low salinity and a reduced concentration of the potential-determining ions in injection water will lead to improved interfacial viscoelasticity.

Keywords: liquid-liquid interfaces, surfactant concentrations, potential determining ions, residual oil mobilization

Procedia PDF Downloads 144
176 Solutions for Thickening the Sludge from Wastewater Treatment by a Rotor with Bars

Authors: Victorita Radulescu

Abstract:

Introduction: In sewage treatment plants, the second stage is formed by tanks whose main purpose is the formation of suspensions with the highest possible solid concentration values. The paper presents a solution for producing a rapid concentration of the slurry and sludge, with the main purpose of minimizing the size of the tanks as much as possible. The solution is based on a rotor with bars, tested in two different areas of industrial activity: the remediation of wastewater from the oil industry and, in the last year, the mining industry. Basic Methods: A thickening system with vertical bars was designed, built, and tested; it manages to reduce the sludge moisture content from 94% to 87%. The design was based on the hypothesis that the streamlines of the vortices detached from the rotor with vertical bars accelerate, under certain conditions, the sludge thickening. The sludge is displaced to the lateral sides and, in time, becomes sediment. The vortices with vertical axes formed in the viscous fluid participate, under the action of the lift, drag, weight, and inertia forces, in a rapid aggregation of the particles, thus accelerating the sludge concentration. An interdependence appears between the Reynolds number of the vortex flow induced by the vertical bars and the extent of the hydraulic compaction phenomenon, which results from an accelerated sedimentation process; the rotor's dimensions are therefore designed according to the physico-chemical characteristics of the resulting sludge. Major findings/Results: Based on the experimental measurements, a numerical simulation of the hydraulic rotor was performed so as to ensure the necessary vortices. The experimental measurements were performed to determine the optimal height and density of the bars for the sludge thickening system, so as to keep the tank dimensions as small as possible. The thickening/settling time was reduced by 24% compared to conventional systems.
At present, thickeners are intended to shorten the intermediate stage of water treatment, which uses primary and secondary settling but requires quite a long time, on the order of 10-15 hours. With this system there are no intermediate steps; thickening occurs automatically once the vortices are created. Conclusions: The experimental tests were carried out in the wastewater treatment plant of the oil refinery at Brazi, near the city of Ploiesti. The results prove the system's efficiency in reducing the time for compacting the sludge and the lower humidity of the evacuated sediments. The use of this equipment is now being extended, and it is being tested in the mining industry, with significant results, at the Lupeni mine in the Jiu Valley.

Keywords: experimental tests, hydrodynamic modeling, rotor efficiency, wastewater treatment

Procedia PDF Downloads 118
175 On the Utility of Bidirectional Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task, since it requires discovering the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expressions in genetic circuit designs. The main reason for using transformers is their innate ability (the attention mechanism) to take into account the semantic context present in long DNA chains, which depends heavily on the spatial arrangement of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts, as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, suffer greatly from vanishing gradients and low efficiency when they sequentially process past states and compress contextual information into a bottleneck for long input sequences.
In other words, these architectures are not equipped with the attention mechanisms necessary to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with an attention mechanism. In previous works on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machines, and Artificial Neural Networks, were able to achieve reasonably high R2 accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier does not depend on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
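The attention mechanism referred to above can be illustrated with a minimal scaled dot-product self-attention sketch; this is a generic illustration, not the DNABERT implementation itself, and the toy embeddings are random placeholders rather than real DNA token vectors.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention: every position attends to
    every other position of the sequence at once, rather than compressing
    history into a single recurrent state as an LSTM/GRU does."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
    return weights @ V, weights

# Toy example: 4 token embeddings of dimension 8 (random placeholders)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(x, x, x)           # self-attention
```

Each row of `w` is a probability distribution over all positions, which is what lets the model follow dependencies across arbitrarily distant tokens.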

Keywords: machine learning, classification and regression, gene circuit design, bidirectional transformers

Procedia PDF Downloads 63
174 Construction and Validation of Allied Bank-Teller Aptitude Test

Authors: Muhammad Kashif Fida

Abstract:

In a bank, the teller's job (cash officer) is highly important and critical: at one end it requires soft and brisk customer service, and at the other, handling cash with integrity. It is always challenging for recruiters to hire competent and trustworthy tellers. To the author's knowledge, there is no comprehensive test available in Pakistan that may provide assistance in recruitment, so there is a dire need for a psychometric battery that could support the recruitment of potential candidates for the teller's position. The aim of the present study was therefore to construct the ABL-Teller Aptitude Test (ABL-TApT). Three major phases were designed following the American Psychological Association's guidelines. The first phase was qualitative: indicators of the test were explored by content analysis of a) tellers' job descriptions (n=3), b) interviews with senior tellers (n=6), and c) interviews with HR personnel (n=4). The content analysis yielded three broad constructs: i) Personality, ii) Integrity/Honesty, iii) Professional Work Aptitude. The identified indicators were operationalized, and statements (k=170) were generated from the verbatims. The statements were then forwarded to five experts for review of content validity, who finalized 156 items. In the second phase, the ABL-TApT (k=156) was administered to 323 participants through a computer application. The overall reliability of the test shows a significant alpha coefficient (α=.81). The subscales also have significant alpha coefficients.
Confirmatory Factor Analysis (CFA) was performed to estimate the construct validity; it confirms four main factors comprising eight personality traits (Confidence, Organized, Compliance, Goal-oriented, Persistent, Forecasting, Patience, Caution), one Integrity/Honesty factor, four factors of professional work aptitude (basic numerical ability and perceptual accuracy of letters, numbers and signatures), and two factors for customer services (customer services, emotional maturity). Values of GFI, AGFI, NNFI, CFI, RFI and RMSEA are in the recommended range, depicting significant model fit. In the third phase, concurrent validity evidence was pursued. The personality and integrity parts of this scale have significant correlations with the 'conscientiousness' factor of the NEO-PI-R, reflecting strong concurrent validity. Customer services and emotional maturity have significant correlations with the Bar-On EQ-i, showing further evidence of strong concurrent validity. It is concluded that the ABL-TApT is a significantly reliable and valid battery of tests that will assist in the objective recruitment of tellers and help recruiters find a more suitable human resource.
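Since the reliability figures above are Cronbach's alpha coefficients, a minimal sketch of how such a coefficient is computed may be helpful; the item scores below are hypothetical and unrelated to the ABL-TApT data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_var_sum = x.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical scores of 4 respondents on 3 items (not ABL-TApT data)
alpha = cronbach_alpha([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 4, 3]])  # ≈ 0.975
```

Values above roughly .70 are conventionally taken as acceptable internal consistency, which is why the reported α=.81 is described as significant.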

Keywords: concurrent validity, construct validity, content validity, reliability, teller aptitude test, objective recruitment

Procedia PDF Downloads 226
173 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach

Authors: Kristina Pflug, Markus Busch

Abstract:

Being able to predict polymer properties and processing behavior based on the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE - its melt flow behavior - is determined as a function of the previously calculated polymeric microstructure, using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscosimetry and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be extraordinary, especially considering that the applied multi-scale modelling approach does not involve fitting parameters to the data. This validates the suggested approach and at the same time proves its universality.
In a next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analyses for systematically varied process conditions are easily feasible. The developed multi-scale modelling approach thus offers the opportunity to predict and design LDPE processing behavior simply on the basis of process conditions such as feed streams and inlet temperatures and pressures.
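As a toy illustration of the stochastic step, a chain-growth sampler under the most-probable (Flory) distribution is sketched below; this is a heavily simplified stand-in under assumed parameters, showing only how a kinetic average (the propagation probability) can seed a stochastic ensemble of chains, and is not the hybrid Monte Carlo scheme with short- and long-chain branching used in the work.

```python
import random

def sample_chain_lengths(p_propagation, n_chains, seed=42):
    """Sample chain lengths from the most-probable (Flory) distribution:
    at each step the growing chain propagates with probability p and
    terminates with probability 1 - p. Expected number-average degree of
    polymerization is 1 / (1 - p). Parameters here are illustrative."""
    rng = random.Random(seed)
    lengths = []
    for _ in range(n_chains):
        n = 1  # chain starts with one monomer unit
        while rng.random() < p_propagation:
            n += 1
        lengths.append(n)
    return lengths

lengths = sample_chain_lengths(0.999, 10_000)
mean_dp = sum(lengths) / len(lengths)  # expected to be near 1 / (1 - 0.999) = 1000
```

A full microstructure model would additionally insert branch points along each sampled chain, which is what the branch-on-branch rheology step then consumes.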

Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology

Procedia PDF Downloads 125
172 A Systematic Review of Efficacy and Safety of Radiofrequency Ablation in Patients with Spinal Metastases

Authors: Pascale Brasseur, Binu Gurung, Nicholas Halfpenny, James Eaton

Abstract:

Development of minimally invasive treatments in recent years provides a potential alternative to invasive surgical interventions, which are of limited value to patients with spinal metastases due to short life expectancy. A systematic review was conducted to explore the efficacy and safety of radiofrequency ablation (RFA), a minimally invasive treatment, in patients with spinal metastases. EMBASE, Medline and CENTRAL were searched from database inception to March 2017 for randomised controlled trials (RCTs) and non-randomised studies. Conference proceedings for ASCO and ESMO published in 2015 and 2016 were also searched. Fourteen studies were included: three prospective interventional studies, four prospective case series and seven retrospective case series. No RCTs or studies comparing RFA with another treatment were identified. RFA was followed by cement augmentation in all patients in seven studies and in some patients (40-96%) in the remaining seven studies. Efficacy was assessed as pain relief in 13 of the 14 studies with the use of a numerical rating scale (NRS) or a visual analogue scale (VAS) at various time points. Ten of the 13 studies reported a significant decrease in pain post-RFA compared to baseline. NRS scores improved significantly at 1 week (5.9 to 3.5, p < 0.0001; 8 to 4.3, p < 0.02; and 8 to 3.9, p < 0.0001), and this improvement was maintained at 1 month post-RFA compared to baseline (5.9 to 2.6, p < 0.0001; 8 to 2.9, p < 0.0003; 8 to 2.9, p < 0.0001). Similarly, VAS scores decreased significantly at 1 week (7.5 to 2.7, p=0.00005; 7.51 to 1.73, p < 0.0001; 7.82 to 2.82, p < 0.001), and this pattern was maintained at 1 month post-RFA compared to baseline (7.51 to 2.25, p < 0.0001; 7.82 to 3.3, p < 0.001). In two studies assessing the impact of RFA with or without cement augmentation on VAS pain scores, significant pain relief was achieved regardless of whether patients had cement augmentation.
In these two studies, a significant decrease in pain scores was reported for patients receiving RFA alone and RFA plus cement at 1 week (4.3 to 1.7, p=0.0004 and 6.6 to 1.7, p=0.003, respectively) and at 15-36 months (7.9 to 4, p=0.008 and 7.6 to 3.5, p=0.005, respectively) after therapy. Few minor complications were reported; these included neural damage, radicular pain, vertebroplasty leakage and lower limb pain/numbness. In conclusion, the findings on efficacy and safety of RFA were consistently positive across prospective and retrospective studies, with reductions in pain and few procedural complications. However, the lack of control groups in the identified studies indicates the possibility of selection bias inherent in single-arm studies. Controlled trials exploring the efficacy and safety of RFA in patients with spinal metastases are warranted to provide robust evidence. The identified studies provide an initial foundation for such future trials.

Keywords: pain relief, radiofrequency ablation, spinal metastases, systematic review

Procedia PDF Downloads 173
171 Criticality of Adiabatic Length for a Single Branch Pulsating Heat Pipe

Authors: Utsav Bhardwaj, Shyama Prasad Das

Abstract:

To meet the extensive thermal-management requirements of circuit card assemblies (CCAs), satellites, PCBs, microprocessors, and other electronic circuitry, pulsating heat pipes (PHPs) have emerged in the recent past as one of the best technical solutions. But industrial application of PHPs remains largely unexplored due to their poor reliability. There are several system as well as operational parameters which not only affect the performance of an operating PHP but also decide whether the PHP can operate sustainably at all. Functioning may be completely halted for particular combinations of the values of the system and operational parameters. Among the system parameters, the adiabatic length is an important one. In the present work, the simplest single-branch PHP system with an adiabatic section has been considered. It is assumed to have only one vapour bubble and one liquid plug. First, the system has been mathematically modeled using a film evaporation/condensation model, followed by the steps of recognition of the equilibrium zone, non-dimensionalization, and linearization. Then, proceeding with a periodic solution of the linearized and reduced differential equations, a stability analysis has been performed. Slow and fast variables have been identified, and an averaging approach has been used for the slow ones. Ultimately, the temporal evolution of the PHP is predicted by numerically solving the averaged equations, to know whether the oscillations are likely to sustain or decay in time. A stability threshold has also been determined in terms of non-dimensional numbers formed by different groupings of the system and operational parameters. A combined analytical and numerical approach has been used, and it has been found that for each combination of all other parameters, there exists a maximum length of the adiabatic section beyond which the PHP cannot function at all. This length has been called the "Critical Adiabatic Length (L_ac)".
For adiabatic lengths greater than L_ac, oscillations are found always to decay sooner or later. The dependence of L_ac on other parameters has also been checked and correlated at certain evaporator and condenser section temperatures. L_ac has been found to increase linearly with the evaporator section length (L_e), whereas the condenser section length (L_c) has almost no effect on it up to a certain limit. But at considerably large condenser section lengths, L_ac is expected to decrease with increasing L_c due to increased wall friction. A rise in the static pressure (p_r) exerted by the working fluid reservoir makes L_ac rise exponentially, whereas L_ac increases cubically with the inner diameter (d) of the PHP. Good insight into the physics of all these variations has also been provided. Thus, a methodology for quantifying the critical adiabatic length for any possible set of the other PHP parameters has been established.

Keywords: critical adiabatic length, evaporation/condensation, pulsating heat pipe (PHP), thermal management

Procedia PDF Downloads 227