Search results for: pressure reduce valve
878 Reduced Tillage and Bio-stimulant Application Can Improve Soil Microbial Enzyme Activity in a Dryland Cropping System
Authors: Flackson Tshuma, James Bennett, Pieter Andreas Swanepoel, Johan Labuschagne, Stephan van der Westhuizen, Francis Rayns
Abstract:
Tillage and synthetic agrochemicals can, among other things, be effective methods of seedbed preparation and pest control. Nonetheless, frequent and intensive tillage and excessive application of synthetic agrochemicals, such as herbicides and insecticides, can reduce soil microbial enzyme activity. A decline in soil microbial enzyme activity can negatively affect nutrient cycling and crop productivity. In this study, the effects on soil microbial enzyme activity of four tillage treatments (continuous mouldboard ploughing; shallow tine-tillage to a depth of about 75 mm; no-tillage; and tillage rotation, involving shallow tine-tillage once every four years in rotation with three years of no-tillage) and two rates of synthetic agrochemicals (standard, with regular application of synthetic agrochemicals; and reduced, with fewer synthetic agrochemicals in combination with bio-stimulants) were investigated between 2018 and 2020 in a typical Mediterranean climate zone in South Africa. The four bio-stimulants applied contained Trichoderma asperellum, fulvic acid, silicic acid, and Nereocystis luetkeana extracts, respectively. The study was laid out as a complete randomised block design with four replicated blocks; each block had 14 plots, and each plot measured 50 m x 6 m. The study aimed to assess the combined impact of tillage practices and reduced rates of synthetic agrochemical application on soil microbial enzyme activity in a dryland cropping system. It was hypothesised that the application of bio-stimulants in combination with minimum soil disturbance would lead to a greater increase in microbial enzyme activity than applying either in isolation. Six soil cores were randomly and aseptically collected from the 0-150 mm layer of each plot for microbial enzyme activity analysis in a field trial under a dryland crop rotation system in the Swartland region. The activities of four microbial enzymes, β-glucosidase, acid phosphatase, alkaline phosphatase and urease, were assessed; these enzymes are essential for the cycling of carbon (glucose), phosphorus, and nitrogen, respectively. Microbial enzyme activity generally increased with a reduction of both tillage intensity and synthetic agrochemical application. The use of the mouldboard plough led to the lowest (P<0.05) microbial enzyme activity relative to the reduced tillage treatments, whereas the system with bio-stimulants (reduced synthetic agrochemicals) led to the highest (P<0.05) microbial enzyme activity relative to the standard systems. The application of bio-stimulants in combination with reduced tillage, particularly no-tillage, could be beneficial for enzyme activity in a dryland farming system.
Keywords: bio-stimulants, soil microbial enzymes, synthetic agrochemicals, tillage
Procedia PDF Downloads 80
877 Limiting Freedom of Expression to Fight Radicalization: The 'Silencing' of Terrorists Does Not Always Allow Rights to 'Speak Loudly'
Authors: Arianna Vedaschi
Abstract:
This paper addresses the relationship between freedom of expression, national security and radicalization. Is it still possible to talk about a balance between the first two elements? Or, due to the intrusion of the third, is it more appropriate to consider freedom of expression as “permanently disfigured” by securitarian concerns? In this study, both the legislative and the judicial level are taken into account and the comparative method is employed in order to provide the reader with a complete framework of relevant issues and a workable set of solutions. The analysis moves from the finding according to which the tension between free speech and national security has become a major issue in democratic countries, whose very essence is continuously endangered by the ever-changing and multi-faceted threat of international terrorism. In particular, a change in terrorist groups’ recruiting pattern, attracting more and more people by way of a cutting-edge communicative strategy, often employing sophisticated technology as a radicalization tool, has called on law-makers to modify their approach to dangerous speech. While traditional constitutional and criminal law used to punish speech only if it explicitly and directly incited the commission of a criminal action (“cause-effect” model), so-called glorification offences – punishing mere ideological support for terrorism, often on the web – are becoming commonplace in the comparative scenario. Although this is direct, and even somehow understandable, consequence of the impending terrorist menace, this research shows many problematic issues connected to such a preventive approach. First, from a predominantly theoretical point of view, this trend negatively impacts on the already blurred line between permissible and prohibited speech. Second, from a pragmatic point of view, such legislative tools are not always suitable to keep up with ongoing developments of both terrorist groups and their use of technology. In other words, there is a risk that such measures become outdated even before their application. Indeed, it seems hard to still talk about a proper balance: what was previously clearly perceived as a balancing of values (freedom of speech v. public security) has turned, in many cases, into a hierarchy with security at its apex. In light of these findings, this paper concludes that such a complex issue would perhaps be better dealt with through a combination of policies: not only criminalizing ‘terrorist speech,’ which should be relegated to a last resort tool, but acting at an even earlier stage, i.e., trying to prevent dangerous speech itself. This might be done by promoting social cohesion and the inclusion of minorities, so as to reduce the probability of people considering terrorist groups as a “viable option” to deal with the lack of identification within their social contexts.Keywords: radicalization, free speech, international terrorism, national security
Procedia PDF Downloads 197
876 Numerical Optimization of Cooling System Parameters for Multilayer Lithium Ion Cell and Battery Packs
Authors: Mohammad Alipour, Ekin Esen, Riza Kizilel
Abstract:
Lithium-ion batteries are a commonly used type of rechargeable battery because of their high specific energy and specific power. With the growing popularity of electric vehicles and hybrid electric vehicles, increasing attention has been paid to rechargeable lithium-ion batteries. However, safety problems, high cost, and poor performance at low ambient temperatures and high current rates are big obstacles to the commercial utilization of these batteries. With proper thermal management, most of these limitations could be eliminated. The temperature profile of the Li-ion cells plays a significant role in the performance, safety, and cycle life of the battery, which is why even a small temperature gradient can lead to a great loss in the performance of the battery packs. In recent years, numerous researchers have been working on new techniques to apply better thermal management to Li-ion batteries. Keeping the battery cells within an optimum temperature range is the main objective of battery thermal management. Commercial Li-ion cells are composed of several electrochemical layers, each consisting of a negative current collector, negative electrode, separator, positive electrode, and positive current collector. However, many researchers have adopted a single-layer cell model to save computing time, on the hypothesis that the thermal conductivity of the layer elements is so high, and the heat transfer rate so fast, that the cell can be modeled as one thick layer instead of several thin layers. In previous work, we showed that the single-layer model is insufficient to simulate the thermal behavior and temperature non-uniformity of high-capacity Li-ion cells, and we also studied the effects of the number of layers on the thermal behavior of Li-ion batteries. In this work, the thermal and electrochemical behavior of a LiFePO₄ battery is first modeled with a 3D multilayer cell. The model is validated with experimental measurements at different current rates and ambient temperatures. The real-time heat generation rate is also studied at different discharge rates. The results showed a non-uniform temperature distribution along the cell, which requires a thermal management system. Therefore, aluminum plates with a mini-channel system were designed to control temperature uniformity. Design parameters such as the channel number and widths, inlet flow rate, and cooling fluid are optimized; water and air are compared as cooling fluids. Pressure drop and velocity profiles inside the channels are illustrated. Both surface and internal temperature profiles of single cells and battery packs are investigated with and without cooling systems. Our results show that using optimized mini-channel cooling plates effectively controls the temperature rise and uniformity of single cells and battery packs. By increasing the inlet flow rate, a cooling efficiency of up to 60% could be reached.
Keywords: lithium ion battery, 3D multilayer model, mini-channel cooling plates, thermal management
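As an illustration of the kind of screening such a cooling-plate design study iterates over, the sketch below compares water and air in a mini-channel using laminar, fully developed correlations (f·Re = 64, Nu = 3.66). The channel dimensions, channel count, flow rate and the correlations themselves are assumptions for illustration; the study itself relies on detailed 3D cell and cooling-plate simulations.

```python
"""First-order screening of mini-channel cooling-plate parameters.

Illustrative sketch only: laminar, fully developed correlations (f*Re = 64,
Nu = 3.66) and round-number channel dimensions, channel count and flow rate
are assumptions, not values taken from the study.
"""

FLUIDS = {                       # rho [kg/m3], mu [Pa s], k [W/m K]
    "water": dict(rho=998.0, mu=1.0e-3, k=0.60),
    "air":   dict(rho=1.18, mu=1.85e-5, k=0.026),
}

def channel_metrics(fluid, width_mm, height_mm=3.0, length_m=0.2, n_channels=6,
                    total_flow_lpm=1.0):
    """Return Reynolds number, pressure drop [Pa] and convective h [W/m2 K]."""
    p = FLUIDS[fluid]
    w, h = width_mm * 1e-3, height_mm * 1e-3
    d_h = 2.0 * w * h / (w + h)                      # hydraulic diameter
    q = (total_flow_lpm / 1000.0 / 60.0) / n_channels
    u = q / (w * h)
    re = p["rho"] * u * d_h / p["mu"]
    dp = (64.0 / re) * (length_m / d_h) * 0.5 * p["rho"] * u ** 2
    h_conv = 3.66 * p["k"] / d_h                     # Nu = 3.66, constant-T wall
    return re, dp, h_conv

for fluid in FLUIDS:
    for width in (1.0, 2.0, 4.0):                    # candidate channel widths [mm]
        re, dp, h_conv = channel_metrics(fluid, width)
        print(f"{fluid:5s} w={width:3.1f} mm  Re={re:7.0f}  "
              f"dP={dp:9.1f} Pa  h={h_conv:6.0f} W/m2K")
```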
Procedia PDF Downloads 163
875 Crisis Management and Corporate Political Activism: A Qualitative Analysis of Online Reactions toward Tesla
Authors: Roxana D. Maiorescu-Murphy
Abstract:
In the US, corporations have recently embraced political stances in an attempt to respond to the external pressure exerted by activist groups. To date, research in this area remains in its infancy, and few studies have been conducted on the way stakeholder groups respond to corporate political advocacy in general and in the immediacy of such a corporate announcement in particular. The current study aims to fill in this research void. In addition, the study contributes to an emerging trajectory in the field of crisis management by focusing on the delineation between crises (unexpected events related to products and services) and scandals (crises that spur moral outrage). The present study looked at online reactions in the aftermath of Elon Musk’s endorsement of the Republican party on Twitter. Two data sets were collected from Twitter following two political endorsements made by Elon Musk on May 18, 2022, and June 15, 2022, respectively. The total sample of analysis stemming from the data two sets consisted of N=1,374 user comments written as a response to Musk’s initial tweets. Given the paucity of studies in the preceding research areas, the analysis employed a case study methodology, used in circumstances in which the phenomena to be studied had not been researched before. According to the case study methodology, which answers the questions of how and why a phenomenon occurs, this study responded to the research questions of how online users perceived Tesla and why they did so. The data were analyzed in NVivo by the use of the grounded theory methodology, which implied multiple exposures to the text and the undertaking of an inductive-deductive approach. Through multiple exposures to the data, the researcher ascertained the common themes and subthemes in the online discussion. Each theme and subtheme were later defined and labeled. Additional exposures to the text ensured that these were exhaustive. The results revealed that the CEO’s political endorsements triggered moral outrage, leading to Tesla’s facing a scandal as opposed to a crisis. The moral outrage revolved around the stakeholders’ predominant rejection of a perceived intrusion of an influential figure on a domain reserved for voters. As expected, Musk’s political endorsements led to polarizing opinions, and those who opposed his views engaged in online activism aimed to boycott the Tesla brand. These findings reveal that the moral outrage that characterizes a scandal requires communication practices that differ from those that practitioners currently borrow from the field of crisis management. Specifically, because scandals flourish in online settings, practitioners should regularly monitor stakeholder perceptions and address them in real-time. While promptness is essential when managing crises, it becomes crucial to respond immediately as a scandal is flourishing online. Finally, attempts should be made to distance a brand, its products, and its CEO from the latter’s political views.Keywords: crisis management, communication management, Tesla, corporate political activism, Elon Musk
Procedia PDF Downloads 91
874 Comprehensive Approach to Control Virus Infection and Energy Consumption in An Occupant Classroom
Authors: SeyedKeivan Nateghi, Jan Kaczmarczyk
Abstract:
People nowadays spend most of their time in buildings, so maintaining good indoor air quality is very important. Recent global concerns related to the prevalence of Covid-19 also highlight the importance of indoor air conditioning in reducing the risk of virus infection. Cooling and heating of a house provide a suitable range of air temperature for its residents, but energy consumption in buildings is one of the significant factors in energy demand: in general, the building sector accounts for more than 30% of the world's primary energy requirement. As energy demand increases, greenhouse gas emissions rise and contribute to global warming which, beyond damaging ecosystems, can spread infectious diseases such as malaria, cholera, or dengue to many other parts of the world. With the advent of Covid-19, previous guidelines for reducing energy consumption are no longer adequate, because they increase the risk of virus infection among people in a room. The two problems, high energy consumption and coronavirus infection, therefore pull in opposite directions. A classroom with 30 students and one teacher in Katowice, Poland, was considered in order to control the two objectives simultaneously. The probability of disease transmission is calculated from the carbon dioxide concentration generated by the occupants, and the energy consumption over a given period is estimated with EnergyPlus. The effects of three parameters, the number, opening angle, and opening schedule of the windows, on the probability of infection transmission and on the energy consumption of the classroom were investigated. The parameters were examined over wide ranges to determine the best possible conditions for simultaneous control of infection spread and energy consumption: the number of open windows is discrete (0 to 3), while the other two parameters are continuous (0 to 180 degrees and 8 AM to 2 PM, respectively). Preliminary results show that changes in the number, angle, and timing of window openings significantly impact the likelihood of virus transmission and the energy consumption of the classroom. The greater the number and tilt of the window openings and the longer they stay open, the lower the probability of virus transmission, but the higher the energy consumption. When all the windows were closed during all class hours, the energy consumption for the first day of January was only 0.2 megajoules, while the probability of virus transmission per person in the classroom was more than 45%. When all windows were open at the maximum angle during class, the chance of transmitting the infection was reduced to 0.35%, but the energy consumption rose to 36 megajoules. Therefore, school classrooms need an optimal schedule to control both objectives. In this article, we present a suitable plan for a classroom with natural ventilation through windows to control energy consumption and the possibility of infection transmission at the same time.
Keywords: Covid-19, energy consumption, building, carbon dioxide, EnergyPlus
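The abstract computes the transmission probability from the occupants' carbon dioxide concentration but does not state the exact model; a common choice is the Rudnick-Milton rebreathed-fraction form of the Wells-Riley equation, sketched below. The quanta emission rate, exposure time, occupancy and CO2 levels are illustrative assumptions and will not reproduce the 45% / 0.35% figures quoted above.

```python
"""CO2-based airborne infection risk (Rudnick-Milton form of Wells-Riley).

The rebreathed-fraction formulation is an assumption here: the quanta rate,
exposure time, occupancy and CO2 levels are illustrative values only.
"""
import math

def infection_probability(co2_indoor_ppm, co2_outdoor_ppm=420.0, n_occupants=31,
                          n_infectors=1, quanta_per_h=10.0, exposure_h=6.0,
                          co2_exhaled_ppm=38000.0):
    """P = 1 - exp(-f * I * q * t / n), with rebreathed fraction f from excess CO2."""
    f = (co2_indoor_ppm - co2_outdoor_ppm) / co2_exhaled_ppm
    dose = f * n_infectors * quanta_per_h * exposure_h / n_occupants
    return 1.0 - math.exp(-dose)

for label, co2 in (("windows closed", 3500.0), ("windows fully open", 600.0)):
    p = infection_probability(co2)
    print(f"{label:20s} CO2 = {co2:6.0f} ppm -> P(infection) = {100 * p:5.2f} %")
```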
Procedia PDF Downloads 98
873 Concentration of Droplets in a Transient Gas Flow
Authors: Timur S. Zaripov, Artur K. Gilfanov, Sergei S. Sazhin, Steven M. Begg, Morgan R. Heikal
Abstract:
The calculation of the concentration of inertial droplets in complex flows is encountered in the modelling of numerous engineering and environmental phenomena; for example, fuel droplets in internal combustion engines and airborne pollutant particles. The results of recent research, focused on the development of methods for calculating concentration and their implementation in the commercial CFD code, ANSYS Fluent, is presented here. The study is motivated by the investigation of the mixture preparation processes in internal combustion engines with direct injection of fuel sprays. Two methods are used in our analysis; the Fully Lagrangian method (also known as the Osiptsov method) and the Eulerian approach. The Osiptsov method predicts droplet concentrations along path lines by solving the equations for the components of the Jacobian of the Eulerian-Lagrangian transformation. This method significantly decreases the computational requirements as it does not require counting of large numbers of tracked droplets as in the case of the conventional Lagrangian approach. In the Eulerian approach the average droplet velocity is expressed as a function of the carrier phase velocity as an expansion over the droplet response time and transport equation can be solved in the Eulerian form. The advantage of the method is that droplet velocity can be found without solving additional partial differential equations for the droplet velocity field. The predictions from the two approaches were compared in the analysis of the problem of a dilute gas-droplet flow around an infinitely long, circular cylinder. The concentrations of inertial droplets, with Stokes numbers of 0.05, 0.1, 0.2, in steady-state and transient laminar flow conditions, were determined at various Reynolds numbers. In the steady-state case, flows with Reynolds numbers of 1, 10, and 100 were investigated. It has been shown that the results predicted using both methods are almost identical at small Reynolds and Stokes numbers. For larger values of these numbers (Stokes — 0.1, 0.2; Reynolds — 10, 100) the Eulerian approach predicted a wider spread in concentration in the perturbations caused by the cylinder that can be attributed to the averaged droplet velocity field. The transient droplet flow case was investigated for a Reynolds number of 200. Both methods predicted a high droplet concentration in the zones of high strain rate and low concentrations in zones of high vorticity. The maxima of droplet concentration predicted by the Osiptsov method was up to two orders of magnitude greater than that predicted by the Eulerian method; a significant variation for an approach widely used in engineering applications. Based on the results of these comparisons, the Osiptsov method has resulted in a more precise description of the local properties of the inertial droplet flow. The method has been applied to the analysis of the results of experimental observations of a liquid gasoline spray at representative fuel injection pressure conditions. The preliminary results show good qualitative agreement between the predictions of the model and experimental data.Keywords: internal combustion engines, Eulerian approach, fully Lagrangian approach, gasoline fuel sprays, droplets and particle concentrations
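A minimal sketch of the fully Lagrangian (Osiptsov) idea is given below: the droplet position and velocity are integrated together with the Jacobian of the Eulerian-Lagrangian map and its rate, and the number density follows from n/n0 = 1/|det J|. A steady 2D stagnation-point carrier flow and a single Stokes number are assumed purely for illustration; the study itself treats steady and transient flow past a circular cylinder.

```python
"""Fully Lagrangian (Osiptsov) evaluation of droplet number density.

Minimal 2D sketch: the carrier is a steady stagnation-point flow u = (x, -y),
assumed for illustration only (not the cylinder flow of the study). Droplet
position x, velocity v, the Jacobian J = dx/dx0 of the Eulerian-Lagrangian map
and its rate W = dv/dx0 are integrated together; n/n0 = 1/|det J|.
"""
import numpy as np
from scipy.integrate import solve_ivp

ST = 0.2                                    # Stokes number (illustrative)

def carrier(x):
    return np.array([x[0], -x[1]])          # u = (x, -y)

def grad_carrier(x):
    return np.array([[1.0, 0.0], [0.0, -1.0]])

def rhs(t, y):
    x, v = y[0:2], y[2:4]
    J = y[4:8].reshape(2, 2)                # dx/dx0
    W = y[8:12].reshape(2, 2)               # dv/dx0
    dv = (carrier(x) - v) / ST              # Stokes drag
    dW = (grad_carrier(x) @ J - W) / ST
    return np.concatenate([v, dv, W.ravel(), dW.ravel()])

x0 = np.array([-2.0, 0.5])
# Droplets injected with the local carrier velocity, so W(0) = grad u(x0).
y0 = np.concatenate([x0, carrier(x0), np.eye(2).ravel(), grad_carrier(x0).ravel()])
sol = solve_ivp(rhs, (0.0, 2.0), y0, dense_output=True, rtol=1e-8)

for t in np.linspace(0.0, 2.0, 5):
    y = sol.sol(t)
    det_j = np.linalg.det(y[4:8].reshape(2, 2))
    print(f"t={t:4.2f}  x=({y[0]:+.3f}, {y[1]:+.3f})  n/n0={1.0 / abs(det_j):.3f}")
```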
Procedia PDF Downloads 257
872 Artificial Intelligence-Aided Extended Kalman Filter for Magnetometer-Based Orbit Determination
Authors: Gilberto Goracci, Fabio Curti
Abstract:
This work presents a robust, light, and inexpensive algorithm to perform autonomous orbit determination using onboard magnetometer data in real-time. Magnetometers are low-cost and reliable sensors typically available on a spacecraft for attitude determination purposes, thus representing an interesting choice to perform real-time orbit determination without the need to add additional sensors to the spacecraft itself. Magnetic field measurements can be exploited by Extended/Unscented Kalman Filters (EKF/UKF) for orbit determination purposes to make up for GPS outages, yielding errors of a few kilometers and tens of meters per second in the position and velocity of a spacecraft, respectively. While this level of accuracy shows that Kalman filtering represents a solid baseline for autonomous orbit determination, it is not enough to provide a reliable state estimation in the absence of GPS signals. This work combines the solidity and reliability of the EKF with the versatility of a Recurrent Neural Network (RNN) architecture to further increase the precision of the state estimation. Deep learning models, in fact, can grasp nonlinear relations between the inputs, in this case, the magnetometer data and the EKF state estimations, and the targets, namely the true position, and velocity of the spacecraft. The model has been pre-trained on Sun-Synchronous orbits (SSO) up to 2126 kilometers of altitude with different initial conditions and levels of noise to cover a wide range of possible real-case scenarios. The orbits have been propagated considering J2-level dynamics, and the geomagnetic field has been modeled using the International Geomagnetic Reference Field (IGRF) coefficients up to the 13th order. The training of the module can be completed offline using the expected orbit of the spacecraft to heavily reduce the onboard computational burden. Once the spacecraft is launched, the model can use the GPS signal, if available, to fine-tune the parameters on the actual orbit onboard in real-time and work autonomously during GPS outages. In this way, the provided module shows versatility, as it can be applied to any mission operating in SSO, but at the same time, the training is completed and eventually fine-tuned, on the specific orbit, increasing performances and reliability. The results provided by this study show an increase of one order of magnitude in the precision of state estimate with respect to the use of the EKF alone. Tests on simulated and real data will be shown.Keywords: artificial intelligence, extended Kalman filter, orbit determination, magnetic field
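The structure described, a magnetometer-driven EKF whose output is then refined by a learned model, can be sketched as follows. The two-body propagation, the aligned-dipole field, the numerical Jacobians and the placeholder rnn_correction function are stand-ins chosen for brevity; the study uses J2-level dynamics, the IGRF model up to order 13 and a trained recurrent network.

```python
"""Skeleton of a magnetometer-based EKF with a learned correction on top.

Sketch only: two-body dynamics, an aligned-dipole geomagnetic field and the
stand-in `rnn_correction` replace the J2 propagation, the IGRF-13 field model
and the trained recurrent network described in the abstract.
"""
import numpy as np

MU = 3.986004418e14                      # Earth's gravitational parameter [m^3/s^2]
B0R3 = 8.0e15                            # dipole strength ~ B0 * R_E^3 [T m^3] (rough)

def f(x, dt):
    """Two-body propagation of the state [r, v] over dt (simple Euler step)."""
    r, v = x[:3], x[3:]
    a = -MU * r / np.linalg.norm(r) ** 3
    return np.concatenate([r + v * dt, v + a * dt])

def h(x):
    """Magnetic field of an aligned dipole at position r (placeholder for IGRF)."""
    r = x[:3]
    rn = np.linalg.norm(r)
    m_hat = np.array([0.0, 0.0, -1.0])
    r_hat = r / rn
    return (B0R3 / rn ** 3) * (3.0 * np.dot(m_hat, r_hat) * r_hat - m_hat)

def jacobian(func, x, eps=1e-3):
    """Forward-difference Jacobian of func at x."""
    y0 = func(x)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps * max(1.0, abs(x[i]))
        J[:, i] = (func(x + dx) - y0) / dx[i]
    return J

def rnn_correction(x_ekf, z):
    """Stand-in for the trained RNN that refines the filter estimate."""
    return np.zeros_like(x_ekf)          # a real model would return a learned residual

def ekf_step(x, P, z, dt, Q, R):
    F = jacobian(lambda s: f(s, dt), x)  # predict
    x_pred = f(x, dt)
    P_pred = F @ P @ F.T + Q
    H = jacobian(h, x_pred)              # update with magnetometer measurement z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_upd = x_pred + K @ (z - h(x_pred))
    P_upd = (np.eye(x.size) - K @ H) @ P_pred
    return x_upd + rnn_correction(x_upd, z), P_upd   # learned refinement on top

# One illustrative step on a ~7000 km orbit with a noisy dipole "measurement".
x_true = np.array([7.0e6, 0.0, 0.0, 0.0, 7.5e3, 0.0])
x_est = x_true + np.array([5e3, -5e3, 2e3, 5.0, -5.0, 2.0])
P = np.diag([1e7] * 3 + [1e2] * 3)
Q = np.diag([1e2] * 3 + [1e-2] * 3)
R = (1e-7) ** 2 * np.eye(3)              # ~100 nT magnetometer noise (assumed)
z = h(f(x_true, 10.0)) + np.random.normal(0.0, 1e-7, 3)
x_new, _ = ekf_step(x_est, P, z, 10.0, Q, R)
print("updated position estimate [km]:", np.round(x_new[:3] / 1e3, 2))
```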
Procedia PDF Downloads 103
871 Altered Proteostasis Contributes to Skeletal Muscle Atrophy during Chronic Hypobaric Hypoxia: An Insight into Signaling Mechanisms
Authors: Akanksha Agrawal, Richa Rathor, Geetha Suryakumar
Abstract:
Muscle represents about ¾ of the body mass, and a healthy muscular system is required for human performance. A healthy muscular system is dynamically balanced via the catabolic and anabolic process. High altitude associated hypoxia altered this redox balance via producing reactive oxygen and nitrogen species that ultimately modulates protein structure and function, hence, disrupts proteostasis or protein homeostasis. The mechanism by which proteostasis is clinched includes regulated protein translation, protein folding, and protein degradation machinery. Perturbation in any of these mechanisms could increase proteome imbalance in the cellular processes. Altered proteostasis in skeletal muscle is likely to be responsible for contributing muscular atrophy in response to hypoxia. Therefore, we planned to elucidate the mechanism involving altered proteostasis leading to skeletal muscle atrophy under chronic hypobaric hypoxia. Material and Methods-Male Sprague Dawley rats weighing about 200-220 were divided into five groups - Control (Normoxic animals), 1d, 3d, 7d and 14d hypobaric hypoxia exposed animals. The animals were exposed to simulated hypoxia equivalent to 282 torr pressure (equivalent to an altitude of 7620m, 8% oxygen) at 25°C. On completion of chronic hypobaric hypoxia (CHH) exposure, rats were sacrificed, muscle was excised and biochemical, histopathological and protein synthesis signaling were studied. Results-A number of changes were observed with the CHH exposure time period. ROS was increased significantly on 07 and 14 days which were attributed to protein oxidation via damaging muscle protein structure by oxidation of amino acids moiety. The oxidative damage to the protein further enhanced the various protein degradation pathways. Calcium activated cysteine proteases and other intracellular proteases participate in protein turnover in muscles. Therefore, we analysed calpain and 20S proteosome activity which were noticeably increased at CHH exposure as compared to control group representing enhanced muscle protein catabolism. Since inflammatory markers (myokines) affect protein synthesis and triggers degradation machinery. So, we determined inflammatory pathway regulated under hypoxic environment. Other striking finding of the study was upregulation of Akt/PKB translational machinery that was increased on CHH exposure. Akt, p-Akt, p70 S6kinase, and GSK- 3β expression were upregulated till 7d of CHH exposure. Apoptosis related markers, caspase-3, caspase-9 and annexin V was also increased on CHH exposure. Conclusion: The present study provides evidence of disrupted proteostasis under chronic hypobaric hypoxia. A profound loss of muscle mass is accompanied by the muscle damage leading to apoptosis and cell death under CHH. These cellular stress response pathways may play a pivotal role in hypobaric hypoxia induced skeletal muscle atrophy. Further research in these signaling pathways will lead to development of therapeutic interventions for amelioration of hypoxia induced muscle atrophy.Keywords: Akt/PKB translational machinery, chronic hypobaric hypoxia, muscle atrophy, protein degradation
Procedia PDF Downloads 268
870 Work Related Musculoskeletal Disorder: A Case Study of Office Computer Users in Nigerian Content Development and Monitoring Board, Yenagoa, Bayelsa State, Nigeria
Authors: Tamadu Perry Egedegu
Abstract:
Rapid growth in the use of electronic data has affected both employees and the workplace. Our experience shows that jobs with multiple risk factors have a greater likelihood of causing work-related musculoskeletal disorders (WRMSDs), depending on the duration, frequency, and/or magnitude of exposure to each factor; it is therefore important that ergonomic risk factors be considered in light of their combined effect in causing or contributing to WRMSDs. Fast technological growth in the use of electronic systems has affected both workers and the work environment, and awkward posture and long hours in front of visual display terminals can result in WRMSDs. This study investigated musculoskeletal disorders among office workers and contributes to raising awareness of the causes and consequences of WRMSDs arising from a lack of ergonomics training. The study was conducted using an observational cross-sectional design. A sample of 109 respondents was drawn from the target population through a purposive sampling method. The sources of data were both primary and secondary: primary data were collected through questionnaires, and secondary data were sourced from journals, textbooks, and internet materials. Questionnaires were the main instrument for data collection and were designed in a YES or NO format according to the study objectives. Content validity approval was used to ensure that the variables were adequately covered, and the reliability of the instrument was assessed through the test-retest method, yielding a reliability index of 0.84. The data collected from the field were analyzed with descriptive statistics (charts, percentages, and means). The study found that the most affected body regions were the upper back, followed by the lower back, neck, wrist, shoulder, and eyes, while the least affected body parts were the knee, calf, and ankle. Furthermore, the prevalence of work-related musculoskeletal disorders was linked with long working hours (6-8 hrs per day), lack of back support on seats, glare on the monitor, inadequate regular breaks, and repetitive motion of the upper limbs and wrist when using the computer. Finally, based on these findings, some recommendations were made to reduce the prevalence of WRMSDs among office workers.
Keywords: work related musculoskeletal disorder, Nigeria, office computer users, ergonomic risk factor
Procedia PDF Downloads 240
869 A Left Testicular Cancer with Multiple Metastases Nursing Experience
Authors: Syue-Wen Lin
Abstract:
Objective:This article reviews the care experience of a 40-year-old male patient who underwent a thoracoscopic right lower lobectomy following a COVID-19 infection. His complex medical history included multiple metastases (lungs, liver, spleen, and left kidney) and lung damage from COVID-19, which complicated the weaning process from mechanical ventilation. The care involved managing cancer treatment, postoperative pain, wound care, and palliative care. Methods:Nursing care was provided from August 16 to August 17, 2024. Challenges included difficulty with sputum clearance, which exacerbated the patient's anxiety and fear of reintubation. Pain management strategies combined analgesic drugs, non-drug methods, essential oil massages with family members, and playing the patient’s favorite music to reduce pain and anxiety. Progressive rehabilitation began with stabilizing vital signs, followed by assistance with sitting on the edge of the bed and walking within the ward. Strict sterile procedures and advanced wound care technology were used for daily dressing changes, with meticulous documentation of wound conditions and appropriate dressing selection. Holistic cancer care and palliative measures were integrated to address the patient’s physical and psychological needs. Results:The interdisciplinary care team developed a comprehensive plan addressing both physical and psychological aspects. Respiratory therapy, lung expansion exercises, and a high-frequency chest wall oscillation vest facilitated sputum expulsion and assisted in weaning from mechanical ventilation. The integration of cancer care, pain management, wound care, and palliative care led to improved quality of life and recovery. The collaborative approach between nursing staff and family ensured that the patient received compassionate and effective care. Conclusion: The complex interplay of emergency surgery, COVID-19, and advanced cancer required a multifaceted care strategy. The care team’s approach, combining critical care with tailored cancer and palliative care, effectively improved the patient’s quality of life and facilitated recovery. The comprehensive care plan, developed with family collaboration, provided both high-quality medical care and compassionate support for the terminally ill patient.Keywords: multiple metastases, testicular cancer, palliative care, nursing experience
Procedia PDF Downloads 20
868 Non-Invasive Characterization of the Mechanical Properties of Arterial Walls
Authors: Bruno RamaëL, GwenaëL Page, Catherine Knopf-Lenoir, Olivier Baledent, Anne-Virginie Salsac
Abstract:
No routine technique currently exists for clinicians to measure the mechanical properties of vascular walls non-invasively. Most of the data available in the literature come from traction or dilatation tests conducted ex vivo on native blood vessels. The objective of the study is to develop a non-invasive characterization technique based on Magnetic Resonance Imaging (MRI) measurements of the deformation of vascular walls under pulsating blood flow conditions. The goal is to determine the mechanical properties of the vessels by inverse analysis, coupling imaging measurements and numerical simulations of the fluid-structure interactions. The hyperelastic properties are identified using Solidworks and Ansys workbench (ANSYS Inc.) solving an optimization technique. The vessel of interest targeted in the study is the common carotid artery. In vivo MRI measurements of the vessel anatomy and inlet velocity profiles was acquired along the facial vascular network on a cohort of 30 healthy volunteers: - The time-evolution of the blood vessel contours and, thus, of the cross-section surface area was measured by 3D imaging angiography sequences of phase-contrast MRI. - The blood flow velocity was measured using a 2D CINE MRI phase contrast (PC-MRI) method. Reference arterial pressure waveforms were simultaneously measured in the brachial artery using a sphygmomanometer. The three-dimensional (3D) geometry of the arterial network was reconstructed by first creating an STL file from the raw MRI data using the open source imaging software ITK-SNAP. The resulting geometry was then transformed with Solidworks into volumes that are compatible with Ansys softwares. Tetrahedral meshes of the wall and fluid domains were built using the ANSYS Meshing software, with a near-wall mesh refinement method in the case of the fluid domain to improve the accuracy of the fluid flow calculations. Ansys Structural was used for the numerical simulation of the vessel deformation and Ansys CFX for the simulation of the blood flow. The fluid structure interaction simulations showed that the systolic and diastolic blood pressures of the common carotid artery could be taken as reference pressures to identify the mechanical properties of the different arteries of the network. The coefficients of the hyperelastic law were identified using Ansys Design model for the common carotid. Under large deformations, a stiffness of 800 kPa is measured, which is of the same order of magnitude as the Young modulus of collagen fibers. Areas of maximum deformations were highlighted near bifurcations. This study is a first step towards patient-specific characterization of the mechanical properties of the facial vessels. The method is currently applied on patients suffering from facial vascular malformations and on patients scheduled for facial reconstruction. Information on the blood flow velocity as well as on the vessel anatomy and deformability will be key to improve surgical planning in the case of such vascular pathologies.Keywords: identification, mechanical properties, arterial walls, MRI measurements, numerical simulations
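As a much-simplified illustration of the inverse-identification step, the sketch below fits the shear modulus of a thin-walled, incompressible neo-Hookean tube to synthetic pressure-radius data with a least-squares routine. The analytical tube law, the geometry and the synthetic data are assumptions; the study itself couples MRI-measured wall motion with ANSYS fluid-structure simulations and identifies the hyperelastic coefficients within the ANSYS workflow described above.

```python
"""Toy inverse identification of an arterial stiffness parameter.

Much simplified stand-in for the MRI/ANSYS identification described above:
fit the shear modulus mu of a thin-walled, incompressible neo-Hookean tube in
plane strain, P(lambda) = (mu*H0/R0)*(1 - lambda**-4) with lambda = r/R0, to
synthetic pressure-radius data. Geometry and data are illustrative, not
patient measurements.
"""
import numpy as np
from scipy.optimize import curve_fit

R0, H0 = 3.0e-3, 0.5e-3                # unloaded radius and wall thickness [m]

def pressure(lam, mu):
    return (mu * H0 / R0) * (1.0 - lam ** -4)

# Synthetic "MRI" radii spanning roughly the diastolic-to-systolic pressure range
mu_true = 200.0e3                      # Pa
lam_obs = np.linspace(1.10, 1.17, 8)
p_obs = pressure(lam_obs, mu_true) + np.random.normal(0.0, 100.0, lam_obs.size)

mu_fit, _ = curve_fit(pressure, lam_obs, p_obs, p0=[100.0e3])
print(f"identified mu = {mu_fit[0] / 1e3:.1f} kPa (true value {mu_true / 1e3:.1f} kPa)")
```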
Procedia PDF Downloads 317
867 Informed Urban Design: Minimizing Urban Heat Island Intensity via Stochastic Optimization
Authors: Luis Guilherme Resende Santos, Ido Nevat, Leslie Norford
Abstract:
The Urban Heat Island (UHI) is characterized by increased air temperatures in urban areas compared to undeveloped rural surrounding environments. With urbanization and densification, the intensity of UHI increases, bringing negative impacts on livability, health and economy. In order to reduce those effects, it is required to take into consideration design factors when planning future developments. Given design constraints such as population size and availability of area for development, non-trivial decisions regarding the buildings’ dimensions and their spatial distribution are required. We develop a framework for optimization of urban design in order to jointly minimize UHI intensity and buildings’ energy consumption. First, the design constraints are defined according to spatial and population limits in order to establish realistic boundaries that would be applicable in real life decisions. Second, the tools Urban Weather Generator (UWG) and EnergyPlus are used to generate outputs of UHI intensity and total buildings’ energy consumption, respectively. Those outputs are changed based on a set of variable inputs related to urban morphology aspects, such as building height, urban canyon width and population density. Lastly, an optimization problem is cast where the utility function quantifies the performance of each design candidate (e.g. minimizing a linear combination of UHI and energy consumption), and a set of constraints to be met is set. Solving this optimization problem is difficult, since there is no simple analytic form which represents the UWG and EnergyPlus models. We therefore cannot use any direct optimization techniques, but instead, develop an indirect “black box” optimization algorithm. To this end we develop a solution that is based on stochastic optimization method, known as the Cross Entropy method (CEM). The CEM translates the deterministic optimization problem into an associated stochastic optimization problem which is simple to solve analytically. We illustrate our model on a typical residential area in Singapore. Due to fast growth in population and built area and land availability generated by land reclamation, urban planning decisions are of the most importance for the country. Furthermore, the hot and humid climate in the country raises the concern for the impact of UHI. The problem presented is highly relevant to early urban design stages and the objective of such framework is to guide decision makers and assist them to include and evaluate urban microclimate and energy aspects in the process of urban planning.Keywords: building energy consumption, stochastic optimization, urban design, urban heat island, urban weather generator
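A minimal sketch of the Cross Entropy method loop is shown below, with a placeholder objective standing in for the weighted combination of UWG-computed UHI intensity and EnergyPlus energy consumption. The design variables, bounds, weights and the toy objective are assumptions for illustration only.

```python
"""Cross-Entropy Method (CEM) loop for a black-box urban-design objective.

The UWG/EnergyPlus evaluation is replaced by a toy function of (building
height, canyon width, site coverage); variables, bounds, weights and the
objective itself are assumptions for illustration only.
"""
import numpy as np

rng = np.random.default_rng(0)
LOWER = np.array([10.0, 10.0, 0.2])      # height [m], canyon width [m], coverage [-]
UPPER = np.array([80.0, 40.0, 0.7])

def objective(x):
    """Stand-in for w1 * UHI(x) + w2 * Energy(x) returned by the simulators."""
    height, canyon, coverage = x
    uhi = 2.0 + 0.02 * height - 0.03 * canyon + 3.0 * coverage
    energy = 50.0 + 0.5 * height + 20.0 * coverage ** 2
    return uhi + 0.05 * energy

mu, sigma = (LOWER + UPPER) / 2.0, (UPPER - LOWER) / 2.0
n_samples, n_elite = 100, 10
for it in range(30):
    x = rng.normal(mu, sigma, size=(n_samples, 3)).clip(LOWER, UPPER)
    scores = np.apply_along_axis(objective, 1, x)
    elite = x[np.argsort(scores)[:n_elite]]          # keep the best candidates
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6

print("best design (height, canyon width, coverage):", np.round(mu, 2),
      " objective:", round(objective(mu), 2))
```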
Procedia PDF Downloads 130
866 Sustainable Urbanism: Model for Social Equity through Sustainable Development
Authors: Ruchira Das
Abstract:
The major Metropolises of India are resultant of Colonial manifestation of Production, Consumption and Sustenance. These cities grew, survived, and sustained on the basic whims of Colonial Power and Administrative Agendas. They were symbols of power, authority and administration. Within them some Colonial Towns remained as small towns within the close vicinity of the major metropolises and functioned as self–sufficient units until peripheral development due to tremendous pressure occurred in the metropolises. After independence huge expansion in Judiciary and Administration system resulted City Oriented Employment. A large number of people started residing within the city or within commutable distance of the city and it accelerated expansion of the cities. Since then Budgetary and Planning expenditure brought a new pace in Economic Activities. Investment in Industry and Agriculture sector generated opportunity of employment which further led towards urbanization. After two decades of Budgetary and Planning economic activities in India, a new era started in metropolitan expansion. Four major metropolises started further expansion rapidly towards its suburbs. A concept of large Metropolitan Area developed. Cities became nucleus of suburbs and rural areas. In most of the cases such expansion was not favorable to the relationship between City and its hinterland due to absence of visualization of Compact Sustainable Development. The search for solutions needs to weigh the choices between Rural and Urban based development initiatives. Policymakers need to focus on areas which will give the greatest impact. The impact of development initiatives will spread the significant benefit to all. There is an assumption that development integrates Economic, Social and Environmental considerations with equal weighing. The traditional narrower and almost exclusive focus on economic criteria as the determinant of the level of development is thus re–described and expanded. The Social and Environmental aspects are equally important as Economic aspect to achieve Sustainable Development. The arrangement of opportunities for Public, Semi – Public facilities for its citizen is very much relevant to development. It is responsibility of the administration to provide opportunities for the basic requirement of its inhabitants. Development should be in terms of both Industrial and Agricultural to maintain a balance between city and its hinterland. Thus, policy is to formulate shifting the emphasis away from Economic growth towards Sustainable Human Development. The goal of Policymaker should aim at creating environments in which people’s capabilities can be enhanced by the effective dynamic and adaptable policy. The poverty could not be eradicated simply by increasing income. The improvement of the condition of the people would have to lead to an expansion of basic human capabilities. In this scenario the suburbs/rural areas are considered as environmental burden to the metropolises. A new living has to be encouraged in the suburban or rural. We tend to segregate agriculture from the city and city life, this leads to over consumption, but this urbanism model attempts both these to co–exists and hence create an interesting overlapping of production and consumption network towards sustainable Rurbanism.Keywords: socio–economic progress, sustainability, social equity, urbanism
Procedia PDF Downloads 305
865 Influence of La0.1Sr0.9Co1-xFexO3-δ Catalysts on Oxygen Permeation Using Mixed Conductor
Authors: Y. Muto, S. Araki, H. Yamamoto
Abstract:
The separation of oxygen is a key technology for improving efficiency and reducing cost in processes such as the partial oxidation of methane and the capture of carbon dioxide. In particular, carbon dioxide at high concentration can be obtained by combustion using pure oxygen separated from air. However, the oxygen separation process accounts for a large part of the energy consumption, and it is considered that membrane technologies would enable separation at lower cost and lower energy consumption than conventional methods. In this study, the separation of oxygen using mixed-conductor membranes is examined. Oxygen permeation through the membrane occurs by the following three processes: first, oxygen molecules dissociate into oxygen ions at the feed side of the membrane; subsequently, the oxygen ions diffuse through the membrane; finally, the oxygen ions recombine to form oxygen molecules. It is therefore expected that the membrane thickness and material, as well as the catalysts for dissociation and recombination, affect the membrane performance. However, there are few reports on catalysts for dissociation and recombination. We confirmed the performance of a La0.6Sr0.4Co1.0O3-δ (LSC) based catalyst, which is commonly used for dissociation and recombination. It is known that the adsorbed amount of oxygen increases with increasing Fe content doped into the B site of LSC. We prepared the catalysts La0.1Sr0.9Co0.9Fe0.1O3-δ (C9F1), La0.1Sr0.9Co0.5Fe0.5O3-δ (C5F5) and La0.1Sr0.9Co0.3Fe0.7O3-δ (C7F3). We also used a Pr2NiO4-type mixed conductor as the membrane material: (Pr0.9La0.1)2(Ni0.74Cu0.21Ga0.05)O4+δ (PLNCG) shows high oxygen permeability and stability against carbon dioxide. Oxygen permeation experiments were carried out using a homemade apparatus at 850-975 °C. The membrane was sealed with Pyrex glass at both ends of the outer dense alumina tubes. To measure the oxygen permeation rate, air was fed to the feed side at 50 ml min-1, and helium, as the sweep and reference gas, was fed at 20 ml min-1. The flow rates of the sweep gas and the gas permeated through the membrane were measured using a flow meter, and the gas concentrations were determined using a gas chromatograph. The permeance of oxygen was then determined from the flow rate and the concentration of the gas on the permeate side of the membrane. An increase in oxygen permeation was observed with increasing temperature, which is attributed to the increase of catalytic activity with temperature and to the increased oxygen diffusivity in the bulk of the membrane. The oxygen permeation rate was improved by using the LSC or LSCF catalysts, with the LSCF catalysts giving a higher permeation rate than LSC. Furthermore, among the LSCF catalysts, the oxygen permeation rate increased with the amount of doped Fe, which is considered to be caused by the increased amount of adsorbed oxygen.
Keywords: membrane separation, oxygen permeation, K2NiF4-type structure, mixed conductor
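The permeation-rate evaluation described above reduces to a short calculation; the sketch below converts the sweep-gas flow rate and the GC-measured permeate composition into an oxygen flux, with an air-leak correction based on detected N2. The membrane area, flow rate and concentrations are illustrative assumptions, not the study's data, and dividing the flux by the oxygen partial-pressure driving force would give the permeance.

```python
"""Oxygen permeation flux from the sweep-gas flow and permeate composition.

Minimal sketch of the evaluation step described above; membrane area, flow
rate and GC readings are illustrative assumptions, not the study's data.
"""

def o2_flux(total_flow_ml_min, y_o2, y_n2=0.0, area_cm2=1.0):
    """O2 permeation flux in mL(STP) min^-1 cm^-2.

    y_o2, y_n2: mole fractions measured by GC in the permeate stream. Any N2 is
    attributed to air leakage through the seal, so the corresponding O2
    (0.21/0.79 of the N2) is subtracted.
    """
    leak_o2 = y_n2 * (0.21 / 0.79)
    return total_flow_ml_min * (y_o2 - leak_o2) / area_cm2

# Example: ~20 ml/min of He sweep plus permeated gas; GC reads 2.0 % O2, 0.3 % N2
flux = o2_flux(total_flow_ml_min=20.4, y_o2=0.020, y_n2=0.003, area_cm2=0.84)
flux_mol = flux / 22414.0 / 60.0 * 1.0e4     # convert to mol m^-2 s^-1
print(f"J(O2) = {flux:.3f} mL min^-1 cm^-2 = {flux_mol:.2e} mol m^-2 s^-1")
```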
Procedia PDF Downloads 518
864 Proactive SoC Balancing of Li-ion Batteries for Automotive Application
Authors: Ali Mashayekh, Mahdiye Khorasani, Thomas weyh
Abstract:
The demand for battery electric vehicles (BEV) is steadily increasing, and it can be assumed that electric mobility will dominate the market for individual transportation in the future. Regarding BEVs, the focus of state-of-the-art research and development is on vehicle batteries since their properties primarily determine vehicles' characteristic parameters, such as price, driving range, charging time, and lifetime. State-of-the-art battery packs consist of invariable configurations of battery cells, connected in series and parallel. A promising alternative is battery systems based on multilevel inverters, which can alter the configuration of the battery cells during operation via semiconductor switches. The main benefit of such topologies is that a three-phase AC voltage can be directly generated from the battery pack, and no separate power inverters are required. Therefore, modular battery systems based on different multilevel inverter topologies and reconfigurable battery systems are currently under investigation. Another advantage of the multilevel concept is that the possibility to reconfigure the battery pack allows battery cells with different states of charge (SoC) to be connected in parallel, and thus low-loss balancing can take place between such cells. In contrast, in conventional battery systems, parallel connected (hard-wired) battery cells are discharged via bleeder resistors to keep the individual SoCs of the parallel battery strands balanced, ultimately reducing the vehicle range. Different multilevel inverter topologies and reconfigurable batteries have been described in the available literature that makes the before-mentioned advantages possible. However, what has not yet been described is how an intelligent operating algorithm needs to look like to keep the SoCs of the individual battery strands of a modular battery system with integrated power electronics balanced. Therefore, this paper suggests an SoC balancing approach for Battery Modular Multilevel Management (BM3) converter systems, which can be similarly used for reconfigurable battery systems or other multilevel inverter topologies with parallel connectivity. The here suggested approach attempts to simultaneously utilize all converter modules (bypassing individual modules should be avoided) because the parallel connection of adjacent modules reduces the phase-strand's battery impedance. Furthermore, the presented approach tries to reduce the number of switching events when changing the switching state combination. Thereby, the ohmic battery losses and switching losses are kept as low as possible. Since no power is dissipated in any designated bleeder resistors and no designated active balancing circuitry is required, the suggested approach can be categorized as a proactive balancing approach. To verify the algorithm's validity, simulations are used.Keywords: battery management system, BEV, battery modular multilevel management (BM3), SoC balancing
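The balancing logic described above can be sketched as a simple selection heuristic: at each step the modules with the highest SoC are inserted to meet the requested output level, and a small bonus for keeping a module in its previous state limits the number of switching events. The module count, discharge increment and penalty weight are illustrative, and the real BM3 series/parallel constraints, losses and modulation are not modeled.

```python
"""Toy module-selection heuristic for SoC balancing with few switching events.

Sketch only: a reconfigurable pack is reduced to choosing which k of n modules
are inserted (and discharged) at each step. Higher-SoC modules are preferred so
the SoCs converge, and keeping a module in its previous state earns a small
bonus to limit switching. All numbers are illustrative.
"""
import numpy as np

rng = np.random.default_rng(1)
n_modules, k, switch_bonus = 8, 5, 0.002
soc = rng.uniform(0.6, 0.9, n_modules)          # unbalanced initial SoCs
initial_spread = soc.max() - soc.min()
prev_state = np.zeros(n_modules, dtype=bool)

for step in range(500):
    # Prefer high SoC; small bonus for modules already inserted (hysteresis).
    score = soc + switch_bonus * prev_state.astype(float)
    state = np.zeros(n_modules, dtype=bool)
    state[np.argsort(score)[-k:]] = True        # insert the k best-scoring modules
    soc[state] -= 2e-4                          # inserted modules deliver charge
    prev_state = state

print(f"SoC spread reduced from {initial_spread:.3f} to {soc.max() - soc.min():.3f}")
```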
Procedia PDF Downloads 119
863 Modified Polysaccharide as Emulsifier in Oil-in-Water Emulsions
Authors: Tatiana Marques Pessanha, Aurora Perez-Gramatges, Regina Sandra Veiga Nascimento
Abstract:
Emulsions are commonly used in applications involving oil/water dispersions, where handling of interfaces becomes a crucial aspect. The use of emulsion technology has greatly evolved in the last decades to suit the most diverse uses, ranging from cosmetic products and biomedical adjuvants to complex industrial fluids. The stability of these emulsions is influenced by factors such as the amount of oil, size of droplets and emulsifiers used. While commercial surfactants are typically used as emulsifiers to reduce interfacial tension, and therefore increase emulsion stability, these organic amphiphilic compounds are often toxic and expensive. A suitable alternative for emulsifiers can be obtained from the chemical modification of polysaccharides. Our group has been working on modification of polysaccharides to be used as additives in a variety of fluid formulations. In particular, we have obtained promising results using chitosan, a natural and biodegradable polymer that can be easily modified due to the presence of amine groups in its chemical structure. In this way, it is possible to increase both the hydrophobic and hydrophilic character, which renders a water-soluble, amphiphilic polymer that can behave as an emulsifier. The aim of this work was the synthesis of chitosan derivatives structurally modified to act as surfactants in stable oil-in-water. The synthesis of chitosan derivatives occurred in two steps, the first being the hydrophobic modification with the insertion of long hydrocarbon chains, while the second step consisted in the cationization of the amino groups. All products were characterized by infrared spectroscopy (FTIR) and carbon magnetic resonance (13C-NMR) to evaluate the cationization and hydrofobization degrees. These modified polysaccharides were used to formulate oil-in water (O:W) emulsions with different oil/water ratios (i.e 25:75, 35:65, 60:40) using mineral paraffinic oil. The formulations were characterized according to the type of emulsion, density and rheology measurements, as well as emulsion stability at high temperatures. All emulsion formulations were stable for at least 30 days, at room temperature (25°C), and in the case of the high oil content emulsion (60:40), the formulation was also stable at temperatures up to 100°C. Emulsion density was in the range of 0.90-0.87 s.g. The rheological study showed a viscoelastic behaviour in all formulations at room temperature, which is in agreement with the high stability showed by the emulsions, since the polymer acts not only reducing interfacial tension, but also forming an elastic membrane at the oil/water interface that guarantees its integrity. The results obtained in this work are a strong evidence of the possibility of using chemically modified polysaccharides as environmentally friendly alternatives to commercial surfactants in the stabilization of oil-in water formulations.Keywords: emulsion, polymer, polysaccharide, stability, chemical modification
Procedia PDF Downloads 352
862 Sovereign Debt Restructuring: A Study of the Inadequacies of the Contractual Approach
Authors: Salamah Ansari
Abstract:
In absence of a comprehensive international legal regime for sovereign debt restructuring, majority of the complications arising from sovereign debt restructuring are frequently left to the uncertain market forces. The resort to market forces for sovereign debt restructuring has led to a phenomenal increase in litigations targeting assets of defaulting sovereign nations, internationally across jurisdictions with the first major wave of lawsuits against sovereigns in the 1980s with the Latin American crisis. Recent experiences substantiate that majority of obstacles faced during sovereign debt restructuring process are caused by inefficient creditor coordination and collective action problems. Collective action problems manifest as grab race, rush to exits, holdouts, the free rider problem and the rush to the courthouse. On defaulting, for a nation to successfully restructure its debt, all the creditors involved must accept some reduction in the value of their claims. As a single holdout creditor has the potential to undermine the restructuring process, hold-out creditors are snowballing with the increasing probability of earning high returns through litigations. This necessitates a mechanism to avoid holdout litigations and reinforce collective action on the part of the creditor. This can be done either through a statutory reform or through market-based contractual approach. In absence of an international sovereign bankruptcy regime, the impetus is mostly on inclusion of collective action clauses in debt contracts. The preference to contractual mechanisms vis- a vis a statutory approach can be explained with numerous reasons, but that's only part of the puzzle in trying to understand the economics of the underlying system. The contractual approach proposals advocate the inclusion of certain clauses in the debt contract for an orderly debt restructuring. These include clauses such as majority voting clauses, sharing clauses, non- acceleration clauses, initiation clauses, aggregation clauses, temporary stay on litigation clauses, priority financing clauses, and complete revelation of relevant information. However, voluntary market based contractual approach to debt workouts has its own complexities. It is a herculean task to enshrine clauses in debt contracts that are detailed enough to create an orderly debt restructuring mechanism while remaining attractive enough for creditors. Introduction of collective action clauses into debt contracts can reduce the barriers in efficient debt restructuring and also have the potential to improve the terms on which sovereigns are able to borrow. However, it should be borne in mind that such clauses are not a panacea to the huge institutional inadequacy that persists and may lead to worse restructuring outcomes.Keywords: sovereign debt restructuring, collective action clauses, hold out creditors, litigations
Procedia PDF Downloads 155
861 Multidisciplinary Approach to Mio-Plio-Quaternary Aquifer Study in the Zarzis Region (Southeastern Tunisia)
Authors: Ghada Ben Brahim, Aicha El Rabia, Mohamed Hedi Inoubli
Abstract:
Climate change has exacerbated disparities in the distribution of water resources in Tunisia, resulting in significant degradation in quantity and quality over the past five decades. The Mio-Plio-Quaternary aquifer, the primary water source in the Zarzis region, is subject to climatic, geographical, and geological challenges, as well as human stress. The region is experiencing uneven distribution and growing threats from groundwater salinity and saltwater intrusion. Addressing this challenge is critical for the arid region’s socioeconomic development, and effective water resource management is required to combat climate change and reduce water deficits. This study uses a multidisciplinary approach to determine the groundwater potential of this aquifer, involving geophysics and hydrogeology data analysis. We used advanced techniques such as 3D Euler deconvolution and power spectrum analysis to generate detailed anomaly maps and estimate the depths of density sources, identifying significant Bouguer anomalies trending E-W, NW-SE, and NE-SW. Various techniques, such as wavelength filtering, upward continuation, and horizontal and vertical derivatives, were used to improve the gravity data, resulting in consistent results for anomaly shapes and amplitudes. The Euler deconvolution method revealed two prominent surface faults, trending NE-SW and NW-SE, that have a significant impact on the distribution of sedimentary facies and water quality within the Mio-Plio-Quaternary aquifer. Additionally, depth maxima greater than 1400 m to the North indicate the presence of a Cretaceous paleo-fault. Geoelectrical models and resistivity pseudo-sections were used to interpret the distribution of electrical facies in the Mio-Plio-Quaternary aquifer, highlighting lateral variation and depositional environment type. AI optimises the analysis and interpretation of exploration data, which is important to long-term management and water security. Machine learning algorithms and deep learning models analyse large datasets to provide precise interpretations of subsurface conditions, such as aquifer salinisation. However, AI has limitations, such as the requirement for large datasets, the risk of overfitting, and integration issues with traditional geological methods.Keywords: mio-plio-quaternary aquifer, Southeastern Tunisia, geophysical methods, hydrogeological analysis, artificial intelligence
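The depth estimates quoted above come from Euler deconvolution; the sketch below shows the core least-squares solve of Euler's homogeneity equation on a synthetic point-source gravity anomaly, inside a window centred on the anomaly peak. The grid, source strength, structural index N = 2 and the synthetic depth (chosen near the ~1400 m value discussed) are assumptions for illustration only.

```python
"""Windowed Euler deconvolution on a synthetic gravity anomaly.

Sketch of the depth-estimation step: Euler's homogeneity equation is solved by
least squares in a window centred on the anomaly. Grid, source strength,
structural index N = 2 and the synthetic depth are illustrative assumptions.
"""
import numpy as np

# Synthetic vertical-gravity grid from a point source buried at (0, 0, depth)
nx, ny, dx = 81, 81, 100.0                        # 8 km x 8 km grid, 100 m spacing
x = (np.arange(nx) - nx // 2) * dx
y = (np.arange(ny) - ny // 2) * dx
X, Y = np.meshgrid(x, y, indexing="ij")
depth, strength = 1400.0, 1.0e6
R = np.sqrt(X**2 + Y**2 + depth**2)
G = strength * depth / R**3                       # g_z of a point mass (arbitrary units)

# Horizontal gradients by finite differences; vertical gradient from the analytic form
Gx, Gy = np.gradient(G, dx, dx, edge_order=2)
Gz = strength * (3.0 * depth**2 - R**2) / R**5

# Least-squares Euler solve, structural index N = 2 for a point source:
# x0*Gx + y0*Gy + z0*Gz + N*B = x*Gx + y*Gy + z*Gz + N*G   (observations at z = 0)
N, half = 2.0, 10
i0, j0 = np.unravel_index(np.argmax(G), G.shape)
win = (slice(i0 - half, i0 + half + 1), slice(j0 - half, j0 + half + 1))
A = np.column_stack([Gx[win].ravel(), Gy[win].ravel(), Gz[win].ravel(),
                     N * np.ones(Gx[win].size)])
b = (X[win] * Gx[win] + Y[win] * Gy[win] + N * G[win]).ravel()
x0, y0, z0, base = np.linalg.lstsq(A, b, rcond=None)[0]
print(f"estimated source: x0={x0:.0f} m, y0={y0:.0f} m, depth={z0:.0f} m "
      f"(true depth {depth:.0f} m)")
```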
Procedia PDF Downloads 13
860 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland
Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski
Abstract:
PM10 is a suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular, due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to deal with the problem of prediction of suspended particulate matter concentration. Due to the very complicated nature of this issue, the Machine Learning approach was used. For this purpose, Convolution Neural Network (CNN) neural networks have been adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour after hour. The evaluation of learning process for the investigated models was mostly based upon the mean square error criterion; however, during the model validation, a number of other methods of quantitative evaluation were taken into account. The presented model of pollution prediction has been verified by way of real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5, and PM10, temperature and wind information, as well as external forecasts of temperature and wind for next 24h served as inputted data. Due to the specificity of the CNN type network, this data is transformed into tensors and then processed. This network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of this system is a vector containing 24 elements that contain prediction of PM10 concentration for the upcoming 24 hour period. Over 1000 models based on CNN methodology were tested during the study. During the research, several were selected out that give the best results, and then a comparison was made with the other models based on linear regression. The numerical tests carried out fully confirmed the positive properties of the presented method. These were carried out using real ‘big’ data. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. What's more, the use of neural networks increased Pearson's correlation coefficient (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for 1st, 6th, 12th, 18th, and 24th hour of prediction respectively.Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks
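A minimal sketch of the kind of network described, mapping 24 hours of multi-sensor inputs to a 24-element vector of hourly PM10 predictions, is given below. The Keras framework, layer sizes and the number of input features are assumptions; the study reports testing over 1000 CNN variants on data from the Airly sensor network.

```python
"""Minimal 1D-CNN sketch for next-24-hour PM10 forecasting.

The framework (Keras), layer sizes and number of input features are
assumptions; the abstract specifies only that measured PM2.5/PM10, temperature
and wind plus external forecasts are fed in as tensors and that the output is
a 24-element vector of hourly PM10 predictions.
"""
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

n_history, n_features, n_horizon = 24, 6, 24   # 24 h of 6 features -> 24 h of PM10

model = tf.keras.Sequential([
    layers.Input(shape=(n_history, n_features)),
    layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_horizon),                   # hour-by-hour PM10 for the next day
])
model.compile(optimizer="adam", loss="mse")    # mean squared error, as in the study

# Train and predict on synthetic stand-in data (real inputs come from the sensor network)
x = np.random.rand(256, n_history, n_features).astype("float32")
y = np.random.rand(256, n_horizon).astype("float32")
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(x[:1], verbose=0).shape)   # -> (1, 24)
```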
Procedia PDF Downloads 148
859 Rain Gauges Network Optimization in Southern Peninsular Malaysia
Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno
Abstract:
Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimation of areal rainfall and for flood modelling and prediction. In one study, even when using lumped models for flood forecasting, a proper gauge network significantly improved the results. Therefore, the existing rainfall network in Johor must be optimized and redesigned in order to meet the level of accuracy required by rainfall data users. The well-known geostatistical variance-reduction method, combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of the rain gauges. Rain gauge network structure is not only dependent on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature, and wind speed data during the monsoon season (November to February) for the period 1975 to 2008. Three different semivariogram models, Spherical, Gaussian, and Exponential, were used, and their performances were also compared in this study. A cross-validation technique was applied to compute the errors, and the results showed that the exponential model is the best semivariogram. It was found that the proposed method was satisfied by a network of 64 rain gauges with the minimum estimated variance, and 20 of the existing gauges were removed and relocated. An existing network may contain redundant stations that make little or no contribution to network performance in terms of providing quality data. Therefore, two different cases were considered in this study. The first case considered the removed stations being optimally relocated to new locations to investigate their influence on the calculated estimated variance, and the second case explored the possibility of relocating all 84 existing stations to new locations to determine the optimal positions. The relocations of the stations in both cases showed that the new optimal locations managed to reduce the estimated variance, proving that locations play an important role in determining the optimal network.
Keywords: geostatistics, simulated annealing, semivariogram, optimization
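To make the variance-reduction-plus-simulated-annealing idea concrete, the sketch below selects 64 of 84 hypothetical gauge locations with a simple annealing loop. The exponential semivariogram parameters and the nearest-gauge semivariance objective are placeholder assumptions; the study instead minimises the kriging estimation variance computed from its fitted semivariogram.

```python
import numpy as np

rng = np.random.default_rng(42)

def exponential(h, nugget=0.1, sill=1.0, a=30.0):
    """Exponential semivariogram (the best-fitting model in this study).
    Range conventions vary; the parameters here are placeholders."""
    return nugget + sill * (1.0 - np.exp(-np.asarray(h, dtype=float) / a))

def estimated_variance(coords, selected):
    """Placeholder objective: mean semivariance from every station location to its
    nearest selected gauge. The study minimises the kriging (estimation) variance,
    but the annealing logic around the objective is the same."""
    sel = coords[selected]
    d = np.sqrt(((coords[:, None, :] - sel[None, :, :]) ** 2).sum(axis=2))
    return exponential(d.min(axis=1)).mean()

# Hypothetical coordinates of the 84 existing gauges (km)
coords = rng.uniform(0, 200, size=(84, 2))
current = rng.choice(84, size=64, replace=False)          # keep 64 gauges
current_obj = estimated_variance(coords, current)
best, best_obj = current.copy(), current_obj
temp = 1.0
for it in range(5000):                                    # simulated annealing loop
    cand = current.copy()
    new_station = rng.choice(np.setdiff1d(np.arange(84), cand))
    cand[rng.integers(len(cand))] = new_station           # swap one gauge in/out
    obj = estimated_variance(coords, cand)
    if obj < current_obj or rng.random() < np.exp(-(obj - current_obj) / temp):
        current, current_obj = cand, obj
        if obj < best_obj:
            best, best_obj = cand.copy(), obj
    temp *= 0.999                                         # geometric cooling
print(best_obj)
```

The acceptance rule occasionally keeps worse configurations at high temperature, which is what lets the search escape local minima before the cooling schedule freezes the network.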
Procedia PDF Downloads 301
858 Evaluating the Service Quality and Customers’ Satisfaction for Lihpaoland in Taiwan
Authors: Wan-Yu Liu, Tiffany April Lin, Yu-Chieh Tang, Yi-Lin Wang, Chieh-Hui Li
Abstract:
As national income in Taiwan has risen, the lifestyle of the public has also changed, so that the tourism industry is gradually moving from a service industry to an experience economy. LihpaoLand is one of the most popular theme parks in Taiwan. However, related work on the park's service quality performance has been lacking since it resumed operation in 2012. Therefore, this study investigates the quality of the software/hardware facilities and services of LihpaoLand and aims to achieve the following three goals: 1) analyzing how various sample data of tourists lead to different results for the service quality of LihpaoLand; 2) analyzing how tourists respond to the service tangibility, service reliability, service responsiveness, service guarantee, and service empathy of LihpaoLand; 3) according to the theoretical and empirical results, proposing how to improve the overall facilities and services of LihpaoLand, in the hope of providing suggestions to LihpaoLand or other related businesses to support decision-making. The survey was conducted on tourists to LihpaoLand using convenience sampling, and 400 questionnaires were collected successfully. Analysis results show that tourists paid much attention to the maintenance of amusement facilities and the safety of the park and were satisfied with them, which are great advantages of the park. However, transportation around LihpaoLand was inadequate, and the price of the Fullon hotel (the hotel closest to LihpaoLand) was not accepted by tourists; more promotional events are recommended. Additionally, the shows are not diversified and should be improved with the highest priority. Tourists did not pay attention to service personnel's clothing or the ticket price, but they were not satisfied with them. Hence, this study recommends designing more distinctive costumes and conducting ticket promotions. Accordingly, the suggestions made in this study for LihpaoLand are as follows: 1) Diversified amusement facilities should be provided to satisfy needs at different ages. 2) Cheap but tasty catering and more distinctive souvenirs should be offered. 3) Diversified propaganda schemes should be strengthened to increase the number of tourists. 4) The quality and professionalism of the service staff should be enhanced to earn public praise and encourage revisits. 5) Ticket promotions in peak seasons, low seasons, and special events should be conducted. 6) Proper traffic flows should be planned and combined with technologies to reduce the waiting time of tourists. 7) The features of the theme landscape in LihpaoLand should be strengthened to increase the willingness of tourists with special preferences to visit the park. 8) Ticket discounts or premier points card promotions should be adopted to reward tourists with high loyalty.
Keywords: service quality, customers' satisfaction, theme park, Taiwan
Procedia PDF Downloads 471
857 Spatial Distribution and Cluster Analysis of Sexual Risk Behaviors and STIs Reported by Chinese Adults in Guangzhou, China: A Representative Population-Based Study
Authors: Fangjing Zhou, Wen Chen, Brian J. Hall, Yu Wang, Carl Latkin, Li Ling, Joseph D. Tucker
Abstract:
Background: Economic and social reforms designed to open China to the world have been successful, but they also appear to have rapidly laid the foundation for the reemergence of STIs since the 1980s. Changes in sexual behaviors, relationships, and norms among Chinese people have contributed to the STI epidemic. With massive population movement over the last 30 years, early coital debut, multiple sexual partnerships, and unprotected sex have increased within the general population. Our objectives were to assess associations between residence location, sexual risk behaviors, and sexually transmitted infections (STIs) among adults living in Guangzhou, China. Methods: Stratified cluster sampling following a two-step process was used to select populations aged 18-59 years in Guangzhou, China. Spatial methods, including Geographic Information Systems (GIS), were utilized to identify 1400 coordinates with latitude and longitude. Face-to-face household interviews were conducted to collect self-reported data on sexual risk behaviors and diagnosed STIs. Kulldorff's spatial scan statistic was implemented to identify and detect the spatial distribution and clusters of sexual risk behaviors and STIs. The presence and location of statistically significant clusters were mapped in the study areas using ArcGIS software. Results: In this study, 1215 of 1400 households attempted surveys, with 368 refusals, resulting in a sample of 751 completed surveys. The prevalence of self-reported sexual risk behaviors was between 5.1% and 50.0%. The self-reported lifetime prevalence of diagnosed STIs was 7.06%. Anal intercourse clustered in an area located along the border within the rural-urban continuum (p=0.001). High-rate clusters for alcohol or other drug use before sex (p=0.008) and for migrants who had lived in Guangzhou less than one year (p=0.007) overlapped this cluster. Excess cases of sex without a condom (p=0.031) overlapped the cluster for college students (p<0.001). Conclusions: Short-term migrants and college students reported greater sexual risk behaviors. Programs to increase safer sex within these communities to reduce the risk of STIs are warranted in Guangzhou. Spatial analysis identified geographical clusters of sexual risk behaviors, which is critical for optimizing surveillance and targeting control measures to these locations in the future.
Keywords: cluster analysis, migrant, sexual risk behaviors, spatial distribution
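For readers unfamiliar with Kulldorff's spatial scan statistic, the following is a simplified Bernoulli-model sketch that scans circular windows for high-rate clusters. It omits the Monte Carlo significance testing used in practice (for example in SaTScan), and the coordinates, case indicator, and radii are hypothetical.

```python
import numpy as np

def bernoulli_llr(c, n, C, N):
    """Log-likelihood ratio of the Bernoulli scan statistic for a window with
    c cases among n respondents, given C cases among N overall. Only windows
    whose rate inside exceeds the rate outside score above zero."""
    if n == 0 or n == N or c / n <= (C - c) / (N - n):
        return 0.0
    def xlogy(x, y):
        return 0.0 if x == 0 else x * np.log(y)
    inside = xlogy(c, c / n) + xlogy(n - c, (n - c) / n)
    outside = xlogy(C - c, (C - c) / (N - n)) + xlogy(N - n - (C - c), 1 - (C - c) / (N - n))
    null = xlogy(C, C / N) + xlogy(N - C, (N - C) / N)
    return inside + outside - null

def scan(coords, cases, radii):
    """Exhaustive circular scan using respondent locations as candidate centres."""
    C, N = cases.sum(), len(cases)
    best = (0.0, None, None)
    for centre in coords:
        d = np.linalg.norm(coords - centre, axis=1)
        for r in radii:
            inside = d <= r
            llr = bernoulli_llr(cases[inside].sum(), inside.sum(), C, N)
            if llr > best[0]:
                best = (llr, centre, r)
    return best  # significance would normally come from Monte Carlo replication

# Hypothetical data: 751 respondents, binary indicator for one reported risk behaviour
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(751, 2))
cases = (rng.random(751) < 0.07).astype(int)
print(scan(coords, cases, radii=[0.5, 1.0, 2.0]))
```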
Procedia PDF Downloads 340
856 In situ Stabilization of Arsenic in Soils with Birnessite and Goethite
Authors: Saeed Bagherifam, Trevor Brown, Chris Fellows, Ravi Naidu
Abstract:
Over the last century, rapid urbanization, industrial emissions, and mining activities have resulted in widespread contamination of the environment by heavy metal(loid)s. Arsenic (As) is a toxic metalloid belonging to group 15 of the periodic table, which occurs naturally at low concentrations in soils and the earth's crust, although concentrations can be significantly elevated in natural systems as a result of dispersion from anthropogenic sources, e.g., mining activities. Bioavailability is the fraction of a contaminant in soils that is available for uptake by plants, food chains, and humans and therefore presents the greatest risk to terrestrial ecosystems. Numerous attempts have been made to establish in situ and ex situ technologies for the remediation of arsenic-contaminated soils. In situ stabilization techniques are based on deactivation or chemical immobilization of metalloid(s) in soil by means of soil amendments, which consequently reduce the bioavailability (for biota) and bioaccessibility (for humans) of metalloids due to the formation of low-solubility products or precipitates. This study investigated the effectiveness of two different synthetic manganese and iron oxides (birnessite and goethite) for stabilization of As in a soil spiked with 1000 mg kg⁻¹ of As and treated with 10% dosages of soil amendments. Birnessite was synthesized using HCl and KMnO₄, and goethite was synthesized by the dropwise addition of KOH into Fe(NO₃)₃ solution. The resulting contaminated soils were subjected to a series of chemical extraction studies, including sequential extraction (BCR method), single-step extractions with distilled (DI) water and 2M HNO₃, and simplified bioaccessibility extraction tests (SBET) for estimation of the bioaccessible fractions of As in two different soil fractions (< 250 µm and < 2 mm). Concentrations of As in samples were measured using inductively coupled plasma mass spectrometry (ICP-MS). The results showed that soil amended with birnessite reduced the bioaccessibility of As by up to 92% in both soil fractions. Furthermore, the results of the single-step extractions revealed that the application of birnessite and goethite reduced the DI water- and HNO₃-extractable amounts of arsenic by 75, 75, 91, and 57%, respectively. Moreover, the results of the sequential extraction studies showed that both birnessite and goethite dramatically reduced the exchangeable fraction of As in soils. However, the amounts of recalcitrant fractions were higher in birnessite- and goethite-amended soils. The results revealed that the application of both birnessite and goethite significantly reduced the bioavailability and the exchangeable fraction of As in contaminated soils, and therefore birnessite and goethite amendments might be considered promising adsorbents for the stabilization and remediation of As-contaminated soils.
Keywords: arsenic, bioavailability, in situ stabilisation, metalloid(s) contaminated soils
Procedia PDF Downloads 134
855 Assessment of Pedestrian Comfort in a Portuguese City Using Computational Fluid Dynamics Modelling and Wind Tunnel
Authors: Bruno Vicente, Sandra Rafael, Vera Rodrigues, Sandra Sorte, Sara Silva, Ana Isabel Miranda, Carlos Borrego
Abstract:
Wind comfort for pedestrians is an important condition in urban areas. In Portugal, a country with 900 km of coastline, the wind direction is predominantly north-northwest, with an average speed of 2.3 m·s⁻¹ (at 2 m height). As a result, a number of city authorities have been requesting studies of pedestrian wind comfort for new urban areas/buildings, as well as measures to mitigate wind discomfort issues related to existing structures. This work evaluates the efficiency of a set of measures to reduce the wind speed in an outdoor auditorium (open space) located in a coastal Portuguese urban area. These measures include the construction of barriers, placed upstream and downstream of the auditorium, and the planting of trees upstream of the auditorium. The auditorium is constructed in the form of a porch aligned with the north direction, which drives the wind flow within the auditorium, promoting channelling effects and increasing wind speed, causing discomfort to the users of this structure. To perform the wind comfort assessment, two approaches were used: i) a set of experiments in a wind tunnel (physical approach), with a representative mock-up of the study area; ii) application of the CFD (Computational Fluid Dynamics) model VADIS (numerical approach). Both approaches were used to simulate the baseline scenario and the scenarios considering the set of measures. The physical approach was conducted through a quantitative method, using a hot-wire anemometer, and through a qualitative analysis (visualizations), using laser technology and a fog machine. Both the numerical and physical approaches were performed for three different velocities (2, 4 and 6 m·s⁻¹) and two different directions (north-northwest and south), corresponding to the prevailing wind speeds and directions of the study area. The numerical results show an effective reduction (with a maximum value of 80%) of the wind speed inside the auditorium through the application of the proposed measures. A wind speed reduction in the range of 20% to 40% was obtained around the audience area for a wind direction from the north-northwest. For southern winds, the wind speed in the audience zone was reduced by 60% to 80%. Despite that, for southern winds, the design of the barriers generated additional hot spots (high wind speed), namely at the entrance to the auditorium. Thus, a change in the location of the entrance would minimize these effects. The results obtained in the wind tunnel compared well with the numerical data, also revealing the high efficiency of the proposed measures (for both wind directions).
Keywords: urban microclimate, pedestrian comfort, numerical modelling, wind tunnel experiments
Procedia PDF Downloads 229
854 Numerical and Experimental Comparison of Surface Pressures around a Scaled Ship Wind-Assisted Propulsion System
Authors: James Cairns, Marco Vezza, Richard Green, Donald MacVicar
Abstract:
Significant legislative changes are set to revolutionise the commercial shipping industry. Upcoming emissions restrictions will force operators to look at technologies that can improve the efficiency of their vessels, reducing fuel consumption and emissions. A device which may help in this challenge is the Ship Wind-Assisted Propulsion system (SWAP), an actively controlled aerofoil mounted vertically on the deck of a ship. The device functions in a similar manner to a sail on a yacht, whereby the aerodynamic forces generated by the sail reach an equilibrium with the hydrodynamic forces on the hull and a forward velocity results. Numerical and experimental testing of the SWAP device is presented in this study. Circulation control takes the form of a co-flow jet aerofoil, utilising both blowing from the leading edge and suction from the trailing edge. A jet at the leading edge uses the Coanda effect to energise the boundary layer in order to delay flow separation and create high lift with low drag. The SWAP concept has been originated by the research and development team at SMAR Azure Ltd. The device will be retrofitted to existing ships so that a component of the aerodynamic forces acts forward and partially reduces the reliance on existing propulsion systems. Wind tunnel tests have been carried out at the de Havilland wind tunnel at the University of Glasgow on a 1:20 scale model of this system. The tests aim to understand the airflow characteristics around the aerofoil and to investigate the approximate lift and drag coefficients that an early iteration of the SWAP device may produce. The data exhibit clear trends of increasing lift as injection momentum increases, with critical flow attachment points being identified at specific combinations of jet momentum coefficient, Cµ, and angle of attack, AOA. Various combinations of flow conditions were tested, with the jet momentum coefficient ranging from 0 to 0.7 and the AOA ranging from 0° to 35°. The Reynolds number across the tested conditions ranged from 80,000 to 240,000. Comparisons between 2D computational fluid dynamics (CFD) simulations and the experimental data are presented for multiple Reynolds-Averaged Navier-Stokes (RANS) turbulence models in the form of normalised surface pressure comparisons. These show good agreement for most of the tested cases. However, certain simulation conditions exhibited a well-documented shortcoming of RANS-based turbulence models for circulation control flows and over-predicted surface pressures and lift coefficients for fully attached flow cases. Work must be continued to find an all-encompassing modelling approach which predicts surface pressures well for all combinations of jet injection momentum and AOA.
Keywords: CFD, circulation control, Coanda, turbo wing sail, wind tunnel
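For context, the jet momentum coefficient referred to above is commonly defined in the circulation-control literature as shown below; the study does not state its reference area, so S_ref (often the aerofoil planform area) should be read as an assumption.

```latex
C_{\mu} = \frac{\dot{m}_{j}\,V_{j}}{\tfrac{1}{2}\,\rho_{\infty}\,U_{\infty}^{2}\,S_{\mathrm{ref}}}
```

Here \dot{m}_j and V_j are the jet mass flow rate and jet velocity, and \rho_\infty and U_\infty are the freestream density and velocity, so C_\mu expresses the injected jet momentum relative to the freestream dynamic pressure acting on the reference area.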
Procedia PDF Downloads 133
853 The Situation of Transgender Individuals Was Worsened During Covid-19
Authors: Kajal Attri
Abstract:
Introduction: Transgender people are considered a third gender in India, although they still face identification issues and are alienated from society. Furthermore, they face several challenges, including discrimination in employment, resources, education, and property. As a result, most transgender people make a living through begging at traffic lights, on trains, and on buses; attending auspicious occasions such as childbirth and weddings; and engaging in sex work, which includes both home-based and street-based sex work. During COVID-19, maintaining social distance exacerbated transgender people's circumstances and prevented them from accessing health care services, sexual reassignment surgery, identity-based resources, government security, and financial stability. Moreover, the pandemic raised unfavorable attitudes towards transgender persons, such as unsupportive family members and trouble forming emotional relationships. This study focuses on how transgender people were overlooked during COVID-19 in the provision of facilities to help them cope with the situation, when they were already the most vulnerable segment of society. Methodology: The research was conducted using secondary data from published articles and grey literature obtained from four databases: PubMed, PsycINFO, ScienceDirect, and Google Scholar. A total of 25 articles met the inclusion criteria for the review. Result and Discussion: Transgender people, considered the most vulnerable sector of society, already faced several obstacles as a result of the outbreak. The analysis underscores the difficulties that transgender persons faced during COVID-19. They had trouble accessing the government's social security programmes during the lockdown, which provide rations and pensions, since they lack the necessary identification cards. The impact of COVID-19 left transgender people at heightened risk of poverty and ill health because they exist on the margins of society and their livelihoods are based on sex work, begging, and participation in auspicious occasions. They had a significant risk of contracting SARS-CoV-2 because they lived in congested areas or did not have permanent shelter, and they were disproportionately affected by HIV, cancer, and other non-communicable illnesses. The pandemic also raised unfavorable attitudes towards transgender persons, such as unsupportive family members and trouble forming emotional relationships. Conclusion: The study puts forward useful suggestions, based on content analysis, to reduce the existing woes of transgender people during any pandemic like COVID-19.
Keywords: COVID-19, transgender, lockdown, transwomen, stigmatization
Procedia PDF Downloads 75
852 Forest Fire Burnt Area Assessment in a Part of West Himalayan Region Using Differenced Normalized Burnt Ratio and Neural Network Approach
Authors: Sunil Chandra, Himanshu Rawat, Vikas Gusain, Triparna Barman
Abstract:
Forest fires are a recurrent phenomenon in the Himalayan region owing to the presence of vulnerable forest types, topographical gradients, climatic weather conditions, and anthropogenic pressure. The present study focuses on the identification of forest fire-affected areas in a small part of the West Himalayan region using the differenced normalized burnt ratio method and spectral unmixing methods. The study area has rugged terrain with the presence of sub-tropical pine forest, montane temperate forest, and sub-alpine forest and scrub. The major reason for fires in this region is anthropogenic in nature, with the practice of human-induced fires for getting fresh leaves, scaring wild animals to protect agricultural crops, grazing practices within reserved forests, and igniting fires for cooking and other reasons. The fires caused by the above reasons affect a large area on the ground, necessitating its precise estimation for further management and policy making. In the present study, two approaches have been used for carrying out the burnt area analysis. The first approach uses a differenced normalized burnt ratio (dNBR) index that uses burnt ratio values generated from the Short-Wave Infrared (SWIR) and Near Infrared (NIR) bands of the Sentinel-2 image. The results of the dNBR have been compared with the outputs of the spectral unmixing methods. It has been found that the dNBR is able to produce good results in fire-affected areas having a homogeneous forest stratum and slopes of less than 5 degrees. However, in rugged terrain where the landscape is largely shaped by topographical variations, vegetation types, and tree density, the results may be strongly influenced by the effects of topography, complexity in tree composition, fuel load composition, and soil moisture. Hence, burnt area assessment affected by such variations may not be carried out effectively using a dNBR approach, which is commonly followed for burnt area assessment over large areas. For this reason, another approach attempted in the present study utilizes a spectral unmixing method in which each individual pixel is tested before an information class is assigned to it. The method uses a neural network approach utilizing Sentinel-2 bands. The training and testing data are generated from the Sentinel-2 data and the national field inventory, which are further used for generating outputs using ML tools. The analysis of the results indicates that fire-affected regions and their severity can be better estimated using spectral unmixing methods, which have the capability to resolve noise in the data and can classify individual pixels into the precise burnt/unburnt class.
Keywords: categorical data, log linear modeling, neural network, shifting cultivation
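A minimal sketch of the dNBR computation described above is given below, assuming pre- and post-fire Sentinel-2 reflectance arrays (NIR from band 8, SWIR from band 12). The severity thresholds shown are commonly cited USGS-style values used only as placeholders; as the abstract's caveats suggest, they would need local calibration.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized burn ratio from NIR and SWIR reflectance (e.g. Sentinel-2 B8 and B12)."""
    return (nir - swir) / (nir + swir + 1e-10)   # small epsilon avoids division by zero

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR: pre-fire NBR minus post-fire NBR; larger values indicate more severe burns."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# Hypothetical reflectance rasters scaled to 0-1
rng = np.random.default_rng(0)
shape = (512, 512)
nir_pre, swir_pre = rng.uniform(0.2, 0.5, shape), rng.uniform(0.05, 0.2, shape)
nir_post, swir_post = rng.uniform(0.1, 0.4, shape), rng.uniform(0.1, 0.4, shape)

severity = dnbr(nir_pre, swir_pre, nir_post, swir_post)
burnt_mask = severity > 0.1      # above the typical "unburned" range (placeholder threshold)
high_severity = severity > 0.66  # placeholder high-severity threshold
```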
Procedia PDF Downloads 52
851 Predicting Resistance of Commonly Used Antimicrobials in Urinary Tract Infections: A Decision Tree Analysis
Authors: Meera Tandan, Mohan Timilsina, Martin Cormican, Akke Vellinga
Abstract:
Background: In general practice, many infections are treated empirically without microbiological confirmation. Understanding the susceptibility of antimicrobials during empirical prescribing can help reduce inappropriate prescribing. This study aims to apply a prediction model using a decision tree approach to predict antimicrobial resistance (AMR) in urinary tract infections (UTI) based on non-clinical features of patients over 65 years. Decision tree models are a novel way to predict the outcome of AMR at an initial stage. Method: Data were extracted from the database of the microbiological laboratory of the University Hospitals Galway on all antimicrobial susceptibility testing (AST) of urine specimens from patients over the age of 65 from January 2011 to December 2014. The primary endpoint was resistance to common antimicrobials (nitrofurantoin, trimethoprim, ciprofloxacin, co-amoxiclav and amoxicillin) used to treat UTI. A classification and regression tree (CART) model was generated with the outcome ‘resistant infection’. The importance of each predictor (the number of previous samples, age, gender, location (nursing home, hospital, community) and causative agent) on antimicrobial resistance was estimated. Sensitivity, specificity, negative predictive value (NPV) and positive predictive value (PPV) were used to evaluate the performance of the model. Seventy-five percent (75%) of the data were used as a training set, and validation of the model was performed with the remaining 25% of the dataset. Results: A total of 9805 UTI patients over 65 years had a urine sample submitted for AST at least once over the four years. E. coli, Klebsiella, and Proteus species were the most commonly identified pathogens among UTI patients without a catheter, whereas Serratia, Staphylococcus aureus, and Enterobacter were common among those with a catheter. The validated CART model shows slight differences in the sensitivity, specificity, PPV and NPV between the models with and without the causative organisms. The sensitivity, specificity, PPV and NPV for the model with non-clinical predictors were between 74% and 88%, depending on the antimicrobial. Conclusion: The CART models developed using non-clinical predictors have good performance when predicting antimicrobial resistance. These models predict which antimicrobial may be the most appropriate based on non-clinical factors. Other CART models, prospective data collection and validation, and an increasing number of non-clinical factors will improve model performance. The presented model provides an alternative approach to decision making on antimicrobial prescribing for UTIs in older patients.
Keywords: antimicrobial resistance, urinary tract infection, prediction, decision tree
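A minimal sketch of the CART workflow described above, using scikit-learn and a synthetic stand-in for the laboratory dataset, is shown below. The predictor names, tree hyperparameters, and data are assumptions for illustration; only the 75/25 split and the sensitivity/specificity/PPV/NPV evaluation mirror the study.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

# Hypothetical extract of the dataset: non-clinical predictors only
df = pd.DataFrame({
    "n_previous_samples": np.random.poisson(2, 1000),
    "age": np.random.randint(65, 95, 1000),
    "female": np.random.randint(0, 2, 1000),
    "location": np.random.choice(["community", "hospital", "nursing_home"], 1000),
    "resistant": np.random.randint(0, 2, 1000),   # resistance to one antimicrobial
})
X = pd.get_dummies(df.drop(columns="resistant"), columns=["location"])
y = df["resistant"]

# 75% training / 25% validation split, as in the study
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
cart = DecisionTreeClassifier(max_depth=5, min_samples_leaf=50, random_state=0)
cart.fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, cart.predict(X_test)).ravel()
print({
    "sensitivity": tp / (tp + fn),
    "specificity": tn / (tn + fp),
    "ppv": tp / (tp + fp),
    "npv": tn / (tn + fn),
})

# Predictor importance, analogous to the CART variable importance reported in the study
print(dict(zip(X.columns, cart.feature_importances_)))
```

One CART model per antimicrobial, each trained on the same non-clinical predictors, would reproduce the per-antimicrobial performance range reported above.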
Procedia PDF Downloads 253
850 Fuzzy Optimization for Identifying Anticancer Targets in Genome-Scale Metabolic Models of Colon Cancer
Authors: Feng-Sheng Wang, Chao-Ting Cheng
Abstract:
Developing a drug from conception to launch is costly and time-consuming. Computer-aided methods can reduce research costs and accelerate the development process during the early drug discovery and development stages. This study developed a fuzzy multi-objective hierarchical optimization framework for identifying potential anticancer targets in a metabolic model. First, RNA-seq expression data of colorectal cancer samples and their healthy counterparts were used to reconstruct tissue-specific genome-scale metabolic models. The aim of the optimization framework was to identify anticancer targets that lead to cancer cell death and to evaluate the metabolic flux perturbations caused in normal cells by the cancer treatment. Four objectives were established in the framework: to evaluate the mortality of cancer cells under treatment, and to minimize the side effects that cause toxicity-induced tumorigenesis in normal cells while keeping metabolic perturbations small. Through fuzzy set theory, the multi-objective optimization problem was converted into a trilevel maximizing decision-making (MDM) problem. Nested hybrid differential evolution was applied to solve the trilevel MDM problem using two nutrient media, respectively, to identify anticancer targets in the genome-scale metabolic model of colorectal cancer. Using Dulbecco's Modified Eagle Medium (DMEM), the computational results reveal that the identified anticancer targets were mostly involved in cholesterol biosynthesis, pyrimidine and purine metabolism, the glycerophospholipid biosynthetic pathway, and the sphingolipid pathway. However, using Ham's medium, the genes involved in cholesterol biosynthesis were unidentifiable. A comparison of the uptake reactions for DMEM and Ham's medium revealed that no cholesterol uptake reaction was included in DMEM. Two additional media, i.e., DMEM with a cholesterol uptake reaction included and Ham's medium with it excluded, were used to investigate the relationship of tumor cell growth with nutrient components and anticancer target genes. The genes involved in cholesterol biosynthesis were also revealed to be identifiable if a cholesterol uptake reaction was not induced when the cells were in the culture medium; however, these genes became unidentifiable if such a reaction was induced.
Keywords: cancer metabolism, genome-scale metabolic model, constraint-based model, multilevel optimization, fuzzy optimization, hybrid differential evolution
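The conversion of a multi-objective problem into a maximizing decision-making problem can be illustrated with a Zimmermann-style max-min aggregation of fuzzy memberships. The toy objectives, membership bounds, and use of SciPy's differential evolution below are assumptions for illustration only; they do not reproduce the authors' trilevel formulation, nested hybrid differential evolution, or genome-scale models.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy objectives standing in for the study's criteria: a benefit to maximise
# (cancer-cell mortality) and two costs to minimise (toxicity and metabolic
# perturbation of normal cells). The functional forms are purely illustrative.
def f1(x): return -(x[0] - 1.0) ** 2 + 1.0           # benefit, to maximise
def f2(x): return (x[0] - 0.5) ** 2 + x[1] ** 2       # cost, to minimise
def f3(x): return abs(x[1] - 0.2)                     # cost, to minimise

def membership(value, worst, best):
    """Linear fuzzy membership: 0 at the worst acceptable value, 1 at the best."""
    return np.clip((value - worst) / (best - worst), 0.0, 1.0)

def negative_min_membership(x):
    mu = [
        membership(f1(x), worst=0.0, best=1.0),       # higher f1 is better
        membership(f2(x), worst=1.0, best=0.0),       # lower f2 is better
        membership(f3(x), worst=0.5, best=0.0),       # lower f3 is better
    ]
    return -min(mu)  # max-min decision: maximise the smallest satisfaction degree

result = differential_evolution(negative_min_membership, bounds=[(0, 2), (-1, 1)], seed=0)
print(result.x, -result.fun)
```

The max-min step is what turns several fuzzified objectives into a single decision-making problem; the study layers this inside a trilevel (hierarchical) structure solved with a nested evolutionary algorithm.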
Procedia PDF Downloads 78
849 Stress Reduction Techniques for First Responders: Scientifically Proven Methods
Authors: Esther Ranero Carrazana, Maria Karla Ramirez Valdes
Abstract:
First responders, including firefighters, police officers, and emergency medical personnel, are frequently exposed to high-stress scenarios that significantly increase their risk of mental health issues such as depression, anxiety, and post-traumatic stress disorder (PTSD). Their work involves life-threatening situations, witnessing suffering, and making critical decisions under pressure, all of which contribute to psychological strain. The objectives of this research are as follows. The primary objective is to assess the efficacy of various scientifically proven stress reduction techniques explicitly tailored for first responders. Heart Rate Variability (HRV) Training, Interoception and Exteroception, Sensory Integration, and Body Perception Awareness are scrutinized for their ability to mitigate stress-related symptoms. A further objective is to deepen the understanding of stress mechanisms in these high-risk professions by exploring how the different techniques influence the physiological and psychological responses to stress. Additionally, the study seeks to promote psychological resilience by identifying and recommending methods that can significantly enhance the psychological resilience of first responders, thereby supporting their mental health and operational efficiency in high-stress environments. The study also aims to guide training and policy development by providing evidence-based recommendations for training programs and policies designed to improve the mental health and well-being of first responders. Lastly, the study aims to contribute valuable insights to the existing body of knowledge in stress management, specifically tailored to the unique needs of first responders. This study involved a comprehensive literature review assessing the effectiveness of various stress reduction techniques tailored for first responders, evaluating Heart Rate Variability (HRV) Training, Interoception and Exteroception, Sensory Integration, and Body Perception Awareness with a focus on their ability to alleviate stress-related symptoms. The review indicates promising results for several stress reduction methods. HRV training demonstrates the potential to reflect stress vulnerability and enhance physiological and behavioral flexibility. Interoception and exteroception help modulate the stress response by enhancing awareness of the body's internal state and its interaction with the environment. Sensory integration plays a crucial role in adaptive responses to stress by focusing on individual senses and their integration. Body perception awareness addresses stress and anxiety through enhanced body perception and mindfulness. The evaluated techniques show significant potential for reducing stress and improving the mental health of first responders. Incorporating these scientifically supported methods into routine training could significantly enhance first responders' psychological resilience and operational effectiveness in high-stress environments.
Keywords: first responders, HRV training, mental health, sensory integration, stress reduction
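Since HRV training is central to the review, a short sketch of the standard time-domain HRV indices it monitors may be helpful. The recording length and simulated RR intervals below are hypothetical; the index definitions (SDNN, RMSSD, pNN50) are the conventional ones.

```python
import numpy as np

def hrv_metrics(rr_ms):
    """Common time-domain HRV indices from a series of RR (inter-beat) intervals in ms.
    RMSSD in particular is widely used as a proxy for parasympathetic (vagal) activity,
    which HRV biofeedback training aims to strengthen."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),
        "sdnn_ms": rr.std(ddof=1),                        # overall variability
        "rmssd_ms": np.sqrt(np.mean(diffs ** 2)),         # short-term (vagal) variability
        "pnn50_pct": 100.0 * np.mean(np.abs(diffs) > 50), # successive differences > 50 ms
    }

# Hypothetical 5-minute recording: ~300 beats around 800 ms with some variability
rng = np.random.default_rng(3)
rr_series = 800 + np.cumsum(rng.normal(0, 5, 300)) + rng.normal(0, 20, 300)
print(hrv_metrics(rr_series))
```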
Procedia PDF Downloads 37