Search results for: numerical groundwater flow modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10557

237 Comparison of Sediment Rating Curve and Artificial Neural Network in Simulation of Suspended Sediment Load

Authors: Ahmad Saadiq, Neeraj Sahu

Abstract:

Sediment, which comprises solid particles of mineral and organic material, is transported by water. In river systems, the amount of sediment transported is controlled by both the transport capacity of the flow and the supply of sediment. Sediment transport in rivers matters for pollution, channel navigability, reservoir ageing, hydroelectric equipment longevity, fish habitat, river aesthetics and scientific interest. The sediment load transported in a river is a very complex hydrological phenomenon; hence, sediment transport has attracted the attention of engineers from various perspectives, and different methods have been used for its estimation. Accordingly, several empirical equations have been proposed by experts. Although the results of these methods differ considerably from each other and from experimental observations, these equations can still be used to estimate sediment load, given the inherent limits of sediment measurement. In the present study, two black-box models, a sediment rating curve (SRC) and an artificial neural network (ANN), are used to simulate the suspended sediment load. The study is carried out for the Seonath sub-basin. Seonath is the biggest tributary of the Mahanadi River, and it carries a vast amount of sediment. The data were collected for the Jondhra hydrological observation station from India-WRIS (Water Resources Information System) and IMD (Indian Meteorological Department). These data include discharge, sediment concentration and rainfall for 10 years. In this study, sediment load is estimated from the input parameters (discharge, rainfall, and past sediment) in various combinations of simulations. A sediment rating curve uses the water discharge to estimate the sediment concentration, which is then converted to sediment load. Likewise, for the ANN, the data are first normalised and then fed in various combinations to yield the sediment load.
RMSE (root mean square error) and R² (coefficient of determination) between the observed and estimated loads are used as evaluation criteria. For an ideal model, RMSE is zero and R² is 1. However, as the models used in this study are black-box models, they do not exactly represent the factors which cause sedimentation. Hence, the model giving the lowest RMSE and highest R² is considered the best in this study. The lowest RMSE values (based on normalised data) for the sediment rating curve, feed-forward back propagation, cascade-forward back propagation and neural network fitting are 0.043425, 0.00679781, 0.0050089 and 0.0043727, respectively. The corresponding R² values are 0.8258, 0.9941, 0.9968 and 0.9976. This implies that the neural network fitting model is superior to the other models used in this study. However, a drawback of neural network fitting is that it produces a few negative estimates, which are unacceptable when estimating sediment load, so this model cannot be declared the best among the others on the basis of this study. Cascade-forward back propagation produces results very close to the neural network fitting model, and it is therefore the best model based on the present study.
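As a rough illustration (our sketch, not the authors' code), the two evaluation criteria and a power-law rating curve of the form C = aQᵇ, a common SRC formulation fitted by least squares in log-log space, can be written as follows; the function names and the synthetic data are hypothetical:

```python
import numpy as np

def rmse(obs, est):
    """Root mean square error; 0 for a perfect model."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    return float(np.sqrt(np.mean((obs - est) ** 2)))

def r_squared(obs, est):
    """Coefficient of determination; 1 for a perfect model."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    ss_res = np.sum((obs - est) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def fit_rating_curve(discharge, concentration):
    """Fit C = a * Q**b by linear regression in log-log space."""
    b, log_a = np.polyfit(np.log(discharge), np.log(concentration), 1)
    return float(np.exp(log_a)), float(b)
```

A model with RMSE closest to 0 and R² closest to 1 would then be preferred, as in the comparison above.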

Keywords: artificial neural network, root mean square error, sediment, sediment rating curve

Procedia PDF Downloads 308
236 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads

Authors: Gaurav Kumar Sinha

Abstract:

In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments. 
It acknowledges that big data processing requires distributed and parallel computing capabilities that span cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.

Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies

Procedia PDF Downloads 44
235 Scenario-Based Scales and Situational Judgment Tasks to Measure the Social and Emotional Skills

Authors: Alena Kulikova, Leonid Parmaksiz, Ekaterina Orel

Abstract:

Social and emotional skills are considered by modern researchers as predictors of a person's success, both in specific areas of activity and in life as a whole. The popularity of this scientific direction has produced a large number of practices aimed at developing and evaluating socio-emotional skills. Assessment of social and emotional development is carried out at the national level, as well as at the level of individual regions and institutions. Although many existing tools for assessing social and emotional skills are quite convenient and reliable, new technologies and task formats keep emerging which improve the basic characteristics of such tools. Thus, the goal of the current study is to develop a tool for assessing social and emotional skills such as emotion recognition, emotion regulation, empathy and a culture of self-care. To develop the tool, a Rasch-Gutman scenario-based approach was used. This approach has proven reliable for measuring various complex constructs: parental involvement; teacher practices that support cultural diversity and equity; willingness to participate in the life of the community after psychiatric rehabilitation; educational motivation; and others. To assess emotion recognition, we used a situational judgment task based on the OCC (Ortony, Clore, and Collins) theory of emotions. The main advantage of these two approaches compared to classical Likert scales is that they reduce social desirability in answers. A field test was conducted to check the psychometric properties of the developed instrument. The instrument was developed for the presidential autonomous non-profit organization "Russia - Land of Opportunity" for nationwide soft-skills assessment among higher education students. The sample for the field test consisted of 500 students aged 18 to 25 (mean = 20; SD = 1.8), 71% female, 
67% of whom were only studying and not currently working, and 500 employed adults aged 26 to 65 (mean = 42.5; SD = 9), 57% female. The psychometric characteristics of the scales were analysed using methods of Item Response Theory (IRT). A one-parameter rating scale model (RSM) and a graded response model (GRM) from modern test theory were applied. The GRM is a polytomous extension of the dichotomous two-parameter model (2PL), based on the cumulative logit function for modelling the probability of a correct answer. The validity of the developed scales was assessed using correlation analysis and an MTMM (multitrait-multimethod) matrix. The developed instrument showed good psychometric quality and can be used by HR specialists or educational management. The detailed results of the psychometric study of the instrument, including the functioning of the tasks of each scale, will be presented, as will the results of the MTMM validity analysis.
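For readers unfamiliar with the GRM, its category probabilities arise as differences of adjacent cumulative 2PL curves. A minimal sketch (our own illustration, not the study's code; the discrimination `a` and the increasing thresholds are hypothetical parameters):

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Graded response model: P(X = k | theta) for an item with K ordered
    categories, built from cumulative 2PL (logit) curves.
    `thresholds` must be strictly increasing; returns len(thresholds)+1 probs."""
    def p_star(b):
        # P(X >= k): probability of responding in category k or above
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))
    cum = [1.0] + [p_star(b) for b in thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]
```

For a high-ability respondent the top category dominates; the probabilities always sum to one across categories.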

Keywords: social and emotional skills, psychometrics, MTMM, IRT

Procedia PDF Downloads 58
234 Auto Surgical-Emissive Hand

Authors: Abhit Kumar

Abstract:

Most surgical robots today are master-slave telemanipulators in which a doctor operates a console and a surgical arm performs the operation. Such robots are passive: a trained doctor is still required at the console, so the potential of robotics is not fully exploited. The focus should therefore shift to active robots. The Auto Surgical-Emissive Hand (AS-EH) follows this concept of active robotics: an anthropomorphic hand designed for autonomous surgical, emissive and scanning operation. A three-way disc embedded in the palm provides three emission modes: a laser beam, icy steam between -5°C and 5°C, and a thermal imaging camera (TIC). The fingers of the AS-EH carry tactile, force and pressure sensors so that force, pressure and physical contact with the patient can be controlled. The main focus, however, is on the concept of "emission". How can three unrelated methods work together in a single programmed hand? Each method is applied according to the needs of the patient. The laser is emitted via a pin-sized outlet fed by a thin internal channel leading to the palm of the surgical hand; it delivers radiation sufficient to cut open the skin for the removal of metal scrap or other foreign material while the patient is under anesthesia, keeping the complexity of the operation low. At the same time, the TIC, fitted with an accurate temperature compensator (ATC), provides a real-time feed of the surgery in the form of a heat image, which allows the procedure to be analysed and the patient's elevated body temperature to be monitored while the operation proceeds. The thermal imaging 
camera is mounted internally in the AS-EH and connected to external real-time software to provide live feedback. The icy steam provides a cooling effect before and after the operation. The underlying principle is familiar: if a finger remains in icy water for a long time, blood flow slows, the area becomes numb and isolated, and even a pinch produces no sensation, because the nerve impulse does not reach the brain and the sensory receptors are not activated. Using the same principle, icy steam at a temperature below 273 K can be emitted via a pin-sized hole onto the area of concern to frost it before the operation, and the steam can also be used to desensitise pain while the operation is in progress. The mathematical calculations, algorithms and motion programming of the hand are installed in the system prior to the procedure. Since the AS-EH is a programmable hand, it comes with limitations, and it will therefore perform surgical procedures of low complexity only.

Keywords: active robots, algorithm, emission, icy steam, TIC, laser

Procedia PDF Downloads 338
233 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance holds as in the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. 
Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is accounting for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, and the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
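The GIC family referred to above shares the form -2 log L + cₙk, with AIC (cₙ = 2) and BIC (cₙ = log n) as special cases. A toy sketch of ranking candidates by such a criterion (illustrative only; the paper's uncertainty band, computed via multivariate Gaussian integrals with the R package "mvtnorm", is not reproduced here, and the candidate models are hypothetical):

```python
import math

def gic(loglik, k, n, penalty="aic"):
    """Generalized information criterion: -2*logL + c_n * k.
    penalty: 'aic' -> c_n = 2;  'bic' -> c_n = log(n)."""
    c_n = 2.0 if penalty == "aic" else math.log(n)
    return -2.0 * loglik + c_n * k

def rank_models(candidates, n, penalty="aic"):
    """candidates: dict name -> (loglik, n_params).
    Returns model names sorted best (smallest GIC) first."""
    scores = {m: gic(ll, k, n, penalty) for m, (ll, k) in candidates.items()}
    return sorted(scores, key=scores.get)
```

Note that the two penalties can disagree on the top-ranked model, which is precisely why an uncertainty band around the minimum is useful.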

Keywords: model selection inference, generalized information criteria, post-model selection, asymptotic theory

Procedia PDF Downloads 66
232 Evaluation of the Performance Measures of Two-Lane Roundabout and Turbo Roundabout with Varying Truck Percentages

Authors: Evangelos Kaisar, Anika Tabassum, Taraneh Ardalan, Majed Al-Ghandour

Abstract:

The economy of any country is dependent on its ability to accommodate the movement and delivery of goods. The demand for goods movement and services increases truck traffic on highways and inside cities. The livability of most cities is directly affected by the congestion and environmental impacts of trucks, which are the backbone of the urban freight system. Better operation of heavy vehicles on highways and arterials could improve the network's efficiency and reliability. In many cases, roundabouts can respond better than at-grade signalized intersections, enabling traffic operations with increased safety for both cars and heavy vehicles. The recently emerged concept of the turbo-roundabout is a viable alternative to the two-lane roundabout, aiming to improve traffic efficiency. The primary objective of this study is to evaluate the operation and performance of an at-grade signalized intersection, a conventional two-lane roundabout, and a basic turbo-roundabout for freight movements. To analyze and evaluate the performance of the signalized intersection and the roundabouts, microsimulation models were developed in PTV VISSIM. The networks chosen for this study were used to experiment with and evaluate changes in vehicle movement performance under different geometric and flow scenarios. Several scenarios were examined to assess the impacts of various geometric designs on vehicle movements. The overall traffic efficiency depends on the geometric layout of the intersections and on the traffic congestion rate, hourly volume, frequency of heavy vehicles, type of road, and the ratio of major-street to side-street traffic. Traffic performance was determined by evaluating the delay time, number of stops, and queue length of each intersection for varying truck percentages. 
The results indicate that turbo-roundabouts can replace signalized intersections and two-lane roundabouts only when the traffic demand is low, even with high truck volumes. More specifically, two-lane roundabouts show shorter queue lengths than signalized intersections and turbo-roundabouts. For instance, in the scenario with the highest volume and maximum truck and left-turn movements, the signalized intersection has a queue length 3 times, and the turbo-roundabout 5 times, longer than the two-lane roundabout on major roads. Similarly, on minor roads, signalized intersections and turbo-roundabouts have queue lengths 11 times longer than two-lane roundabouts for the same scenario. Across all the developed scenarios, as the traffic demand lowers, the queue lengths of turbo-roundabouts shorten, showing that turbo-roundabouts perform well for low and medium traffic demand. Finally, this study provides recommendations on the conditions under which each intersection type performs better than the others.

Keywords: at-grade intersection, simulation, turbo-roundabout, two-lane roundabout

Procedia PDF Downloads 124
231 Wetting Characterization of High Aspect Ratio Nanostructures by Gigahertz Acoustic Reflectometry

Authors: C. Virgilio, J. Carlier, P. Campistron, M. Toubal, P. Garnier, L. Broussous, V. Thomy, B. Nongaillard

Abstract:

Wetting efficiency of microstructures or nanostructures patterned on Si wafers is a real challenge in integrated circuit manufacturing. In fact, bad or non-uniform wetting during wet processes limits chemical reactions and can lead to incomplete etching or cleaning inside the patterns and to device defectivity. This issue grows in importance with transistor size shrinkage and mainly concerns high aspect ratio structures. Deep Trench Isolation (DTI) structures enabling pixel isolation in imaging devices are subject to this phenomenon. While low-frequency acoustic reflectometry is a well-known method for non-destructive testing applications, we have recently shown that it is also well suited to nanostructure wetting characterization in a higher frequency range. In this paper, we present a high-frequency acoustic reflectometry characterization of DTI wetting through a comparison of experimental and modeling results. The proposed acoustic method is based on the evaluation of the reflection of a longitudinal acoustic wave generated by a 100 µm diameter ZnO piezoelectric transducer sputtered on the silicon wafer backside using MEMS technologies. The transducers have been fabricated to work at 5 GHz, corresponding to a wavelength of 1.7 µm in silicon. The studied DTI structures, manufactured on the wafer frontside, are crossing trenches 200 nm wide and 4 µm deep (aspect ratio of 20) etched into the Si wafer. In that case, the acoustic signal reflection occurs at both the bottom and the top of the DTI, enabling its characterization by monitoring the electrical reflection coefficient of the transducer. A Finite Difference Time Domain (FDTD) model has been developed to predict the behavior of the emitted wave. The model shows that the separation of the reflected echoes (top and bottom of the DTI) from different acoustic modes is possible at 5 GHz. A good correspondence between experimental and theoretical signals is observed. 
The model enables the identification of the different acoustic modes. The evaluation of DTI wetting is then performed by focusing on the first reflected echo obtained through the reflection at Si bottom interface, where wetting efficiency is crucial. The reflection coefficient is measured with different water / ethanol mixtures (tunable surface tension) deposited on the wafer frontside. Two cases are studied: with and without PFTS hydrophobic treatment. In the untreated surface case, acoustic reflection coefficient values with water show that liquid imbibition is partial. In the treated surface case, the acoustic reflection is total with water (no liquid in DTI). The impalement of the liquid occurs for a specific surface tension but it is still partial for pure ethanol. DTI bottom shape and local pattern collapse of the trenches can explain these incomplete wetting phenomena. This high-frequency acoustic method sensitivity coupled with a FDTD propagative model thus enables the local determination of the wetting state of a liquid on real structures. Partial wetting states for non-hydrophobic surfaces or low surface tension liquids are then detectable with this method.
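The length and time scales involved can be checked with a quick back-of-the-envelope calculation (our sketch; the longitudinal sound speed in silicon is an assumed textbook value, chosen to be consistent with the 1.7 µm wavelength quoted in the abstract):

```python
# Acoustic scales for the 5 GHz DTI measurement described above.
V_SI = 8433.0     # m/s, longitudinal sound speed in silicon (assumed value)
F = 5e9           # Hz, transducer operating frequency
D_TRENCH = 4e-6   # m, DTI trench depth from the abstract

wavelength = V_SI / F            # ~1.7 um, matching the abstract
echo_gap = 2 * D_TRENCH / V_SI   # round-trip delay between top and bottom echoes
```

The sub-nanosecond gap between the top and bottom echoes illustrates why a gigahertz transducer is needed to resolve them.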

Keywords: wetting, acoustic reflectometry, gigahertz, semiconductor

Procedia PDF Downloads 312
230 Partnering With Key Stakeholders for Successful Implementation of Inhaled Analgesia for Specific Emergency Department Presentations

Authors: Sarah Hazelwood, Janice Hay

Abstract:

Methoxyflurane is an inhaled analgesic administered via a disposable inhaler, which has been used in Australia for 40 years for the management of pain in children and adults. However, there is a lack of data for methoxyflurane as a frontline analgesic medication within the emergency department (ED). This study investigated the usefulness of methoxyflurane in a private inner-city ED. The study concluded that the inclusion of all key stakeholders in the prescribing, administering and use of this new process led to comprehensive uptake and strongly positive outcomes for consumers and health professionals. Method: A 12-week prospective pilot study was completed with patients presenting to the ED in pain (numeric pain rating score > 4) who met the requirements for methoxyflurane use (as outlined in the Australian Prescriber information package). Nurses completed a formatted spreadsheet for each interaction in which methoxyflurane was used. Patient demographics, day, time, initial numeric pain score, analgesic response time, reason for use, staff concerns (free text), patient feedback (free text), and discharge time were documented. When clinical concern was raised, the researcher retrieved and reviewed the patient notes. Results: 140 methoxyflurane inhalers were used. 60% of patients were aged 31 years or over (n=82), with 16% aged 70+. The gender split was 51% male and 49% female. Trauma-related pain (57%) accounted for the highest use, with the evening hours (1500-2259) seeing the greatest numbers used (39%). Tuesday, Thursday and Sunday shared the highest daily use throughout the study. The minimum numeric pain score of 4/10 was given by 9% of patients (n=13), with scores of 5-7/10 (moderate pain) given by almost 50% of patients. In only 3 instances did pain scores increase after use of methoxyflurane (all other entries showed pain scores below the initial rating). Patients and staff noted an obvious analgesic response within 3 minutes of administration (n=96, 81%). 
Nurses documented a change in patient vital signs for 4 of the 15 patient-related concerns; the remaining concerns were due to "gagging" on the taste or "having a coughing episode"; one patient tried to leave the department before the procedure was attended (very euphoric state). Upon review of the staff concerns, no adverse events occurred, and vital signs returned to therapeutic values within 10 minutes. Length of stay was compared with similar presentations (such as dislocated shoulder or ankle fracture) and showed an average 40-minute decrease in time to discharge. Methoxyflurane treatment was rated positively by more than 80% of patients, with the remaining feedback relating to mild and transient concerns. Staff similarly noted a positive response to methoxyflurane as an analgesic and as an added tool for frontline analgesic purposes. Conclusion: Methoxyflurane should be used in suitable patient presentations requiring immediate, short-term pain relief. As a highly portable, non-narcotic avenue to treat pain, it showed obvious therapeutic benefit, positive feedback, and a shorter length of stay in the ED in this study. By partnering with key stakeholders, this study determined that methoxyflurane use decreased workload, decreased wait time to analgesia, and increased patient satisfaction.

Keywords: analgesia, benefits, emergency, methoxyflurane

Procedia PDF Downloads 110
229 Sustainable Living Where the Immaterial Matters

Authors: Maria Hadjisoteriou, Yiorgos Hadjichristou

Abstract:

This paper aims to explore and provoke a debate, through the work of the design studio "living where the immaterial matters" of the architecture department of the University of Nicosia, on the role that "immaterial matter" can play in enhancing innovative sustainable architecture and in viewing cities as sustainable organisms that continually grow and change. The blurring, juxtaposing binary of the immaterial and matter, as the theoretical backbone of the unit, is counterbalanced by the practicalities of the contested sites of Nicosia, the last divided capital, with its ambiguous green line, and the ghost city of Famagusta, on the island of Cyprus. Jonathan Hill argues that the immaterial is as important to architecture as the material, concluding that "Immaterial-Material" weaves the two together, so that they are in conjunction, not opposition. This understanding of the relationship of immaterial versus material set the premises and point of departure of our argument, which speaks of new recipes for creating hybrid public space that can lead to the unpredictability of a complex, interactive, sustainable city. We prioritised human experience, distinguishing the notions of space and place with reference to Heidegger's "Building Dwelling Thinking": a distinction where spaces gain authority not from "space" appreciated mathematically but from "place" appreciated through human experience. Following the above, architecture and the city are seen as one organism. The notions of boundaries, porous borders, fluidity, mobility and spaces of flows are the lenses of the unit's methodology, leading to the notion of a new hybrid urban environment whose main constituent elements are in a relationship of flux. The material and immaterial flows of the town are seen as interrelated and interwoven with the material buildings and their immaterial contents, yielding new sustainable human built environments. 
These premises consequently led to choices of controversial sites. An indisputably provocative site was the ghost town of Famagusta, where time froze in 1974. Inspired by the fact that nature took over a literally dormant, decaying city, a sustainable rebirth was seen as an opportunity in which nature and the built environment, material and immaterial, are interwoven in a new emergent urban environment. Similarly, we saw the dividing "green line" of Nicosia completely failing to prevent the trespassing of images, sounds, whispers, smells and symbols that define the two prevailing cultures, becoming instead a porous creative entity which tends to reunite rather than separate, generating sustainable cultures and built environments. The authors would like to contribute to the debate by introducing a question about a new recipe for cooking the built environment. Can we talk about a new "urban recipe", "cooking architecture and city", to deliver an ever-changing urban sustainable organism whose identity will mainly depend on the interrelationship of its immaterial and material constituents?

Keywords: blurring zones, porous borders, spaces of flow, urban recipe

Procedia PDF Downloads 400
228 The Analysis of Noise Harmfulness in Public Utility Facilities

Authors: Monika Sobolewska, Aleksandra Majchrzak, Bartlomiej Chojnacki, Katarzyna Baruch, Adam Pilch

Abstract:

The main purpose of the study is to perform the measurement and analysis of noise harmfulness in public utility facilities. The World Health Organization reports that the number of people suffering from hearing impairment is constantly increasing. The most alarming is the number of young people occurring in the statistics. The majority of scientific research in the field of hearing protection and noise prevention concern industrial and road traffic noise as the source of health problems. As the result, corresponding standards and regulations defining noise level limits are enforced. However, there is another field uncovered by profound research – leisure time. Public utility facilities such as clubs, shopping malls, sport facilities or concert halls – they all generate high-level noise, being out of proper juridical control. Among European Union Member States, the highest legislative act concerning noise prevention is the Environmental Noise Directive 2002/49/EC. However, it omits the problem discussed above and even for traffic, railway and aircraft noise it does not set limits or target values, leaving these issues to the discretion of the Member State authorities. Without explicit and uniform regulations, noise level control at places designed for relaxation and entertainment is often in the responsibility of people having little knowledge of hearing protection, unaware of the risk the noise pollution poses. Exposure to high sound levels in clubs, cinemas, at concerts and sports events may result in a progressive hearing loss, especially among young people, being the main target group of such facilities and events. The first step to change this situation and to raise the general awareness is to perform reliable measurements the results of which will emphasize the significance of the problem. This project presents the results of more than hundred measurements, performed in most types of public utility facilities in Poland. 
Personal noise dosimeters, the most suitable measuring instruments for such research, were used to collect the data. Each measurement is presented in the form of numerical results, including equivalent and peak sound pressure levels, and a detailed description covering the type of sound source, the size and furnishing of the room and a subjective sound level evaluation. In the absence of a direct reference point for interpreting the data, the limits specified in EU Directive 2003/10/EC, which set maximum sound exposure values for workers in relation to the length of their working time, were used for comparison. The analysis of the examined problem leads to the conclusion that during leisure time, people are exposed to noise levels significantly exceeding safe values. As hearing problems progress gradually, most people underplay them, ignoring the first symptoms. Therefore, an effort has to be made to specify noise regulations for public utility facilities. Without any action, in the foreseeable future the majority of Europeans will be dealing with serious hearing damage, which will have a negative impact on society as a whole.
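The comparison with Directive 2003/10/EC made in the study can be illustrated with a short calculation that normalises a measured equivalent level to an 8-hour reference day. The sketch below uses hypothetical measurement values; only the 87 dB(A) exposure limit value is taken from the Directive:

```python
import math

P0 = 20e-6  # reference sound pressure in air, 20 micropascals

def equivalent_level(pressures_pa):
    """Equivalent continuous sound pressure level (Leq) from pressure samples."""
    mean_square = sum(p * p for p in pressures_pa) / len(pressures_pa)
    return 10.0 * math.log10(mean_square / (P0 * P0))

def daily_exposure_level(leq_db, exposure_hours):
    """L_EX,8h: Leq normalised to an 8-hour reference day (Directive 2003/10/EC)."""
    return leq_db + 10.0 * math.log10(exposure_hours / 8.0)

# Hypothetical example: a 4-hour stay in a club at Leq = 98 dB
lex = daily_exposure_level(98.0, 4.0)
# Directive 2003/10/EC sets an exposure limit value of 87 dB(A) for workers
exceeds_limit = lex > 87.0
```

Even halving the exposure time only subtracts about 3 dB, so a 4-hour visit at 98 dB still corresponds to a daily exposure level near 95 dB, well above the workplace limit used for comparison.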

Keywords: hearing protection, noise level limits, noise prevention, noise regulations, public utility facilities

Procedia PDF Downloads 198
227 Designing a Thermal Management System for Lithium Ion Battery Packs in Electric Vehicles

Authors: Ekin Esen, Mohammad Alipour, Riza Kizilel

Abstract:

Rechargeable lithium-ion batteries have been replacing lead-acid batteries for the last decade due to their outstanding properties such as high energy density, long shelf life, and almost no memory effect. In addition, being very light compared to lead-acid batteries has earned them a dominant place in the portable electronics market, and they are now the leading candidate for electric vehicles (EVs) and hybrid electric vehicles (HEVs). However, their performance strongly depends on temperature, which limits their utilization at extreme temperatures. Since weather conditions vary across the globe, this situation restricts their use in EVs and HEVs and makes a thermal management system obligatory for the battery units. The objective of this study is to understand the thermal characteristics of Li-ion battery modules under various operating conditions and to design a thermal management system that enhances battery performance in EVs and HEVs. In the first part of our study, we investigated the thermal behavior of commercially available pouch-type 20Ah LiFePO₄ (LFP) cells under various conditions. The main parameters were chosen as ambient temperature and discharge current rate. Each cell was charged and discharged at temperatures of 0°C, 10°C, 20°C, 30°C, 40°C, and 50°C. The charge rate was 1C, while discharge rates of 1C, 2C, 3C, 4C, and 5C were applied. Temperatures at 7 different points on the cells were measured throughout charging and discharging with N-type thermocouples, and a detailed temperature profile was obtained. In the second part of our study, we connected 4 cells in series by clinching and prepared 4S1P battery modules similar to those in EVs and HEVs. Three reference points were determined according to the findings of the first part of the study, and a thermocouple was placed on each reference point on the cells composing the 4S1P battery modules.
In the end, temperatures at 6 points in the module and 3 points on the top surface were measured, and changes in the surface temperatures were recorded for different discharge rates (0.2C, 0.5C, 0.7C, and 1C) at various ambient temperatures (0°C – 50°C). Afterwards, aluminum plates with channels were placed between the cells in the 4S1P battery modules, and temperatures were controlled with airflow. Airflow was provided with a regular compressor, and the effect of flow rate on cell temperature was analyzed. The diameters of the channels were in the mm range, and the shapes of the channels were chosen to make the cell temperatures uniform. Results showed that the designed thermal management system could help keep the cell temperatures in the modules uniform throughout charge and discharge processes. Beyond temperature uniformity, the system also kept cell temperatures close to the optimum working temperature of Li-ion batteries. It is known that keeping the temperature at an optimum level and maintaining uniform temperature throughout utilization can help obtain maximum power from the cells in battery modules for a longer time. Furthermore, it will increase safety by decreasing the risk of thermal runaway. Therefore, the current study is believed to be beneficial for wider use of Li-ion batteries in battery modules of EVs and HEVs globally.
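The thermal stress imposed by the discharge rates above can be sketched with two elementary relations: the current drawn at a given C-rate and the ohmic (Joule) heat it generates. The internal resistance value below is an illustrative assumption, not a measurement from the study:

```python
def discharge_current(capacity_ah, c_rate):
    """Discharge current (A) for a cell of given capacity at a given C-rate."""
    return capacity_ah * c_rate

def joule_heat_rate(current_a, internal_resistance_ohm):
    """Irreversible (ohmic) heat generation rate in watts, Q = I^2 * R."""
    return current_a ** 2 * internal_resistance_ohm

# The 20 Ah LFP cells of the study, discharged at the rates used (1C - 5C):
currents = [discharge_current(20.0, c) for c in (1, 2, 3, 4, 5)]  # 20 A .. 100 A

# Heat generation grows quadratically with the C-rate, so 5C produces
# 25x the ohmic heat of 1C (resistance value is hypothetical):
q_1c = joule_heat_rate(currents[0], 0.002)
q_5c = joule_heat_rate(currents[-1], 0.002)
```

The quadratic growth of heat generation with discharge rate is why the higher C-rates in the study call for an active cooling system.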

Keywords: lithium ion batteries, thermal management system, electric vehicles, hybrid electric vehicles

Procedia PDF Downloads 141
226 An Investigation of Tetraspanin Proteins’ Role in UPEC Infection

Authors: Fawzyah Albaldi

Abstract:

Urinary tract infections (UTIs) are the most prevalent infectious diseases, and > 80% are caused by uropathogenic E. coli (UPEC). Infection occurs following adhesion to urothelial plaques on bladder epithelial cells, whose major protein constituents are the uroplakins (UPs). Two of the four uroplakins (UPIa and UPIb) are members of the tetraspanin superfamily. The UPEC adhesin FimH is known to interact directly with UPIa. Tetraspanins are a diverse family of transmembrane proteins that generally act as “molecular organizers” by binding different proteins and lipids to form tetraspanin enriched microdomains (TEMs). Previous work by our group has shown that TEMs are involved in the adhesion of many pathogenic bacteria to human cells. Adhesion can be blocked by tetraspanin-derived synthetic peptides, suggesting that tetraspanins may be valuable drug targets. In this study, we investigate the role of tetraspanins in UPEC adherence to bladder epithelial cells. Human bladder cancer cell lines (T24, 5637, RT4), commonly used as in-vitro models to investigate UPEC infection, along with primary human bladder cells, were used in this project. The aim was to establish a model for UPEC adhesion/infection with the objective of evaluating the impact of tetraspanin-derived reagents on this process. Such reagents could reduce the progression of UTI, particularly in patients with indwelling catheters. Tetraspanin expression on the bladder cells was investigated by q-PCR and flow cytometry, with CD9 and CD81 generally highly expressed. Interestingly, despite these cell lines being used by other groups to investigate FimH antagonists, the uroplakin proteins (UPIa, UPIb and UPIII) were poorly expressed at the cell surface, although some were present intracellularly. Attempts were made to differentiate the cell lines to induce cell surface expression of these UPs, but these were largely unsuccessful.
Pre-treatment of bladder epithelial cells with an anti-CD9 monoclonal antibody significantly decreased UPEC infection, whilst anti-CD81 had no effect. A short (15aa) synthetic peptide corresponding to the large extracellular region (EC2) of CD9 also significantly reduced UPEC adherence. Furthermore, we demonstrated specific binding of the fluorescently tagged peptide to the cells. CD9 is known to associate with a number of heparan sulphate proteoglycans (HSPGs) that have also been implicated in bacterial adhesion. Here, we demonstrated that unfractionated heparin (UFH) and heparin analogs significantly inhibited UPEC adhesion to RT4 cells, as did pre-treatment of the cells with heparinases. Pre-treatment with chondroitin sulphate (CS) and chondroitinase also significantly decreased UPEC adherence to RT4 cells. This study may shed light on a common pathogenicity mechanism involving the organisation of HSPGs by tetraspanins. In summary, although we determined that the bladder cell lines were not suitable to investigate the role of uroplakins in UPEC adhesion, we demonstrated roles for CD9 and cell surface proteoglycans in this interaction. Agents that target these may be useful in treating/preventing UTIs.

Keywords: UTIs, tspan, uroplakins, CD9

Procedia PDF Downloads 87
225 Collaborative Program Student Community Service as a New Approach for Development in Rural Area in Case of Western Java

Authors: Brian Yulianto, Syachrial, Saeful Aziz, Anggita Clara Shinta

Abstract:

Indonesia, with a population of about two hundred and fifty million people, possesses an outstanding wealth of human resources. This population is scattered across communities in various regions of Indonesia, each with distinct economic and social characteristics and a unique culture. Broadly speaking, communities in Indonesia are divided into two classes: urban communities and rural communities. Rural communities are characterized by low potential and poor management of natural and human resources, limited access to development, a lack of social and economic infrastructure, and a scattered and isolated population. West Java is one of the provinces with the largest population in Indonesia. Based on data from the Central Bureau of Statistics, in 2015 the population of West Java reached 46.7096 million people spread over 18 districts and 9 cities. The large differences in geographical and social conditions between regions of West Java, especially between the south and the north, cause a high disparity, which is closely related to the flow of investment to develop each area. Poverty and underdevelopment are the classic problems that occur on a massive scale in the region as effects of inequity in development. South Cianjur and southern Tasikmalaya are portraits of areas where the existing potential has not yet been able to bring prosperity to society. The Tri Dharma of higher education defines colleges not only as pioneers of education and research to improve the quality of human resources, but also demands that they pioneer development through public service. Bandung Institute of Technology, as one such institution of higher education, implements a community service system through the collaborative community work program "one of the university community" as one approach to developing villages.
The program is based on community service, where students are not only required to take part in community service, but also to develop a community development strategy that is comprehensive and integrated, in cooperation with government agencies and related non-government organizations, as a concrete effort to align the potential, positions and roles of the various parties. Areas of western Java in particular have high poverty rates and disparity. On the other hand, there are three fundamental pillars in the development of rural communities, namely economic development, community development, and integrated infrastructure development. These pillars require the commitment of all components of the community, including students and colleges, to succeed. The college community program is one approach to the development of rural communities. ITB is committed to implementing this form of student community service as a community-college program that integrates all elements of the community, called Kuliah Kerja Nyata-Thematic.

Keywords: development in rural area, collaborative, student community service, Kuliah Kerja Nyata-Thematic ITB

Procedia PDF Downloads 204
224 Removal of Heavy Metals by Ultrafiltration Assisted with Chitosan or Carboxy-Methyl Cellulose

Authors: Boukary Lam, Sebastien Deon, Patrick Fievet, Nadia Crini, Gregorio Crini

Abstract:

Treatment of heavy metal-contaminated industrial wastewater has become a major challenge over the last decades. Conventional processes for the treatment of metal-containing effluents do not always simultaneously satisfy both legislative and economic criteria. In this context, coupling of processes can be a promising alternative to the conventional approaches used by industry. The polymer-assisted ultrafiltration (PAUF) process is one of these coupling processes. Its principle is based on a reaction step (e.g., complexation) between metal ions and a polymer, followed by a step in which the formed species are rejected by a UF membrane. Unlike free ions, which can cross the UF membrane due to their small size, the polymer/ion species, whose size is larger than the pore size, are rejected. The PAUF process was investigated here for the removal of nickel ions by adding chitosan or carboxymethyl cellulose (CMC). Experiments were conducted with synthetic solutions containing 1 to 100 ppm of nickel ions with or without the presence of NaCl (0.05 to 0.2 M), and with an industrial discharge water (containing several metal ions) with and without polymer. Chitosan with a molecular weight of 1.8×10⁵ g mol⁻¹ and a degree of acetylation close to 15% was used. CMC with a degree of substitution of 0.7 and a molecular weight of 9×10⁵ g mol⁻¹ was employed. Filtration experiments were performed under cross-flow conditions with a filtration cell equipped with a polyamide thin-film composite flat-sheet membrane (3.5 kDa). Without the polymer addition step, it was found that nickel rejection decreases from 80 to 0% with increasing metal ion concentration and salt concentration. This behavior agrees qualitatively with the Donnan exclusion principle: the increase in electrolyte concentration screens the electrostatic interaction between the ions and the membrane fixed charge, which decreases their rejection.
It was shown that the addition of a sufficient amount of polymer (greater than 10⁻² M of monomer unit) can offset this decrease and allow good metal removal. However, the permeation flux was somewhat reduced due to the increase in osmotic pressure and viscosity. It was also highlighted that an increase in pH (from 3 to 9) has a strong influence on removal performance: the higher the pH value, the better the removal performance. The two polymers showed similar performance enhancement at natural pH. However, chitosan proved more efficient in slightly basic conditions (above its pKa), whereas CMC demonstrated very weak rejection performance when the pH was below its pKa. In terms of metal rejection, chitosan is thus probably the better option for basic or strongly acidic (pH < 4) conditions. Nevertheless, CMC should probably be preferred to chitosan in natural conditions (5 < pH < 8) since its impact on the permeation flux is less significant. Finally, ultrafiltration of an industrial discharge water showed that the increase in metal ion rejection induced by the polymer addition is very low, due to competition between the various ions present in the complex mixture.
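The rejection figures quoted above follow from the standard observed-rejection definition, R = 1 − Cp/Cf. A minimal sketch, with hypothetical feed and permeate concentrations rather than the study's data:

```python
def observed_rejection(feed_conc, permeate_conc):
    """Observed rejection R = 1 - Cp/Cf, expressed as a percentage."""
    return 100.0 * (1.0 - permeate_conc / feed_conc)

# Hypothetical values for a 10 ppm Ni feed (not from the study):
r_without_polymer = observed_rejection(10.0, 6.0)  # free ions partly permeate
r_with_polymer = observed_rejection(10.0, 0.5)     # polymer/ion species rejected
```

Measuring Cp and Cf before and after polymer addition is how the improvement attributed to complexation is quantified.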

Keywords: carboxymethyl cellulose, chitosan, heavy metals, nickel ion, polymer-assisted ultrafiltration

Procedia PDF Downloads 137
223 Elastoplastic Modified Stillinger Weber-Potential Based Discretized Virtual Internal Bond and Its Application to the Dynamic Fracture Propagation

Authors: Dina Kon Mushid, Kabutakapua Kakanda, Dibu Dave Mbako

Abstract:

The failure of materials usually involves elastoplastic deformation and fracturing. Continuum mechanics can effectively deal with plastic deformation by using a yield function and the flow rule, but it has limitations in dealing with the fracture problem, since it is a theory based on the continuous field hypothesis. The lattice model can simulate the fracture problem very well, but it is inadequate for dealing with plastic deformation. Based on the discretized virtual internal bond model (DVIB), this paper proposes a lattice model that can account for plasticity. DVIB is a lattice method that considers material to comprise bond cells. Each bond cell may have any geometry with a finite number of bonds. The strain energy of a bond cell can be characterized by a two-body or multi-body potential. The two-body potential leads to a fixed Poisson ratio, while the multi-body potential can overcome this limitation. In the present paper, the modified Stillinger-Weber (SW) potential, a multi-body potential, is employed to characterize the bond cell energy. The SW potential is composed of two parts: a two-body part that describes the interatomic interactions between particles, and a three-body part that represents the bond angle interactions between particles. Because the SW interaction represents both the bond stretch and the bond angle contribution, the SW-potential-based DVIB (SW-DVIB) can represent various Poisson ratios. To embed plasticity in the SW-DVIB, plasticity is considered in the two-body part of the SW potential. Before the bond reaches the yielding point, the bond is elastic; once the bond deformation exceeds the yielding point, the bond stiffness is softened to a lower value. When unloaded, irreversible deformation occurs.
When the bond length increases to a critical value, termed the failure bond length, the bond fails. The critical failure bond length is related to the cell size and the macro fracture energy. By this means, the fracture energy is conserved, so that the cell size sensitivity problem is relieved to a great extent. In addition, the plasticity and the fracture are unified at the bond level. To make the DVIB able to simulate different Poisson ratios, the three-body part of the SW potential is kept elasto-brittle. The bond angle can bear a moment as long as the bond angle increment remains smaller than a critical value. By this method, the SW-DVIB can simulate the plastic deformation and the fracturing process of materials with various Poisson ratios. The elastoplastic SW-DVIB is used to simulate the plastic deformation of a material, the plastic fracturing process, and tunnel plastic deformation. It has been shown that the current SW-DVIB method is straightforward in simulating both elastoplastic deformation and plastic fracture.
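The elastoplastic two-body behaviour described above, elastic up to the yield point, softened stiffness beyond it, and failure at a critical stretch, can be sketched as a piecewise-linear bond force law. Parameter values are hypothetical, and unloading/irreversibility is omitted for brevity:

```python
def bond_force(stretch, k_elastic, yield_stretch, k_plastic, failure_stretch):
    """
    Piecewise-linear bond force for the elastoplastic two-body part:
    elastic up to the yield point, softened stiffness beyond it,
    zero force (bond failure) past the failure stretch.
    Monotonic loading only; unloading is not modelled here.
    """
    if stretch >= failure_stretch:
        return 0.0  # bond has failed; fracture energy already dissipated
    if stretch <= yield_stretch:
        return k_elastic * stretch  # elastic branch
    # plastic branch: continues from the yield force with reduced stiffness
    return k_elastic * yield_stretch + k_plastic * (stretch - yield_stretch)
```

Choosing the failure stretch from the cell size and macro fracture energy, as the abstract describes, is what keeps the dissipated energy mesh-independent.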

Keywords: lattice model, discretized virtual internal bond, elastoplastic deformation, fracture, modified Stillinger-Weber potential

Procedia PDF Downloads 76
222 Dynamic Simulation of IC Engine Bearings for Fault Detection and Wear Prediction

Authors: M. D. Haneef, R. B. Randall, Z. Peng

Abstract:

Journal bearings used in IC engines are prone to premature failures and are likely to fail earlier than their rated life due to highly impulsive and unstable operating conditions and frequent starts/stops. Vibration signature extraction and wear debris analysis techniques are prevalent in industry for condition monitoring of rotary machinery. However, both techniques involve a great deal of technical expertise, time and cost. Limited literature is available on the application of these techniques for fault detection in reciprocating machinery, due to the complex nature of the impact forces that confound the extraction of fault signals for vibration-based analysis and wear prediction. This work is an extension of a previous study, in which an engine simulation model was developed using a MATLAB/SIMULINK program, with the engine parameters used in the simulation obtained experimentally from a Toyota 3SFE 2.0 litre petrol engine. Simulated hydrodynamic bearing forces were used to estimate vibration signals, and envelope analysis was carried out to analyze the effect of speed, load and clearance on the vibration response. Three different loads (50/80/110 N·m), three different speeds (1500/2000/3000 rpm), and three different clearances (normal, 2 times and 4 times the normal clearance) were simulated to examine the effect of wear on bearing forces. The magnitude of the squared envelope of the generated vibration signals was not affected by load, but was observed to rise significantly with increasing speed and clearance, indicating the likelihood of augmented wear. In the present study, the simulation model was extended further to investigate the bearing wear behavior resulting from different operating conditions, to complement the vibration analysis. In the current simulation, the dynamics of the engine was established first, based on which the hydrodynamic journal bearing forces were evaluated by numerical solution of the Reynolds equation.
The essential outputs of interest in this study, critical for determining wear rates, are the tangential velocity and the oil film thickness between the journal and the bearing sleeve, which, if not maintained appropriately, have a detrimental effect on bearing performance. Archard's wear prediction model was used in the simulation to calculate the wear rate of the bearings with specific location information, as all determinative parameters were obtained with reference to crank rotation. The oil film thickness obtained from the model was used as a criterion to determine whether the lubrication is sufficient to prevent contact between the journal and bearing, and thus avoid accelerated wear. A limiting value of 1 µm was used as the minimum oil film thickness needed to prevent contact. The increased wear rate with growing severity of operating conditions is comparable to the rise in amplitude of the squared envelope of the referenced vibration signals. Thus, on one hand, the developed model demonstrated its capability to explain wear behavior, and on the other hand it helps to establish a correlation between wear-based and vibration-based analysis. The model therefore provides a cost-effective and quick approach to predict impending wear in IC engine bearings under various operating conditions.
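The wear accumulation step described above can be sketched with Archard's law, V = K·F·s/H, gated by the 1 µm minimum-film criterion. All numerical values below are illustrative assumptions, not outputs of the simulation:

```python
def archard_wear_volume(k, normal_load_n, sliding_distance_m, hardness_pa):
    """Archard's law: worn volume V = K * F * s / H."""
    return k * normal_load_n * sliding_distance_m / hardness_pa

def contact_occurs(oil_film_thickness_um, limit_um=1.0):
    """Wear accumulates only when the film is thinner than the 1 micron limit."""
    return oil_film_thickness_um < limit_um

# Per crank-angle step: accumulate wear only where the film breaks down.
# (film thickness, load) pairs and all constants below are hypothetical.
dt = 1e-4            # time step, s
v_tangential = 3.0   # journal tangential velocity, m/s
wear = 0.0
for film_um, load_n in [(1.4, 900.0), (0.8, 2500.0), (0.6, 3000.0)]:
    if contact_occurs(film_um):
        wear += archard_wear_volume(1e-7, load_n, v_tangential * dt, 2.0e9)
```

Because both the load and the film thickness are known per crank angle, this accumulation naturally localises the predicted wear around the bearing circumference.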

Keywords: condition monitoring, IC engine, journal bearings, vibration analysis, wear prediction

Procedia PDF Downloads 293
221 Enhancement of Radiosensitization by Aptamer 5TR1-Functionalized AgNCs for Triple-Negative Breast Cancer

Authors: Xuechun Kan, Dongdong Li, Fan Li, Peidang Liu

Abstract:

Triple-negative breast cancer (TNBC) is the most malignant subtype of breast cancer, with a poor prognosis, and radiotherapy is one of its main treatments. However, due to the marked resistance of tumor cells to radiotherapy, high doses of ionizing radiation are required, which cause serious damage to normal tissues near the tumor. Therefore, how to overcome radiotherapy resistance and enhance the specific killing of tumor cells by radiation is a pressing clinical issue. Recent studies have shown that silver-based nanoparticles are strong radiosensitizers, and silver nanoclusters (AgNCs) also offer broad prospects for tumor-targeted radiosensitization therapy due to their ultra-small size, low or absent toxicity, self-fluorescence and strong photostability. Aptamer 5TR1 is a 25-base oligonucleotide aptamer that can specifically bind to mucin-1, which is highly expressed on the membrane surface of TNBC 4T1 cells, and can be used as a highly efficient tumor-targeting molecule. In this study, AgNCs were synthesized on a DNA template based on the 5TR1 aptamer (NC-T5-5TR1), and their role as a targeted radiosensitizer in TNBC radiotherapy was investigated. The optimal DNA template was first screened by fluorescence emission spectroscopy, and NC-T5-5TR1 was prepared. NC-T5-5TR1 was characterized by transmission electron microscopy, ultraviolet-visible spectroscopy and dynamic light scattering. The inhibitory effect of NC-T5-5TR1 on cell activity was evaluated using the MTT method. Laser confocal microscopy was employed to observe NC-T5-5TR1 targeting 4T1 cells and to verify its self-fluorescence. The uptake of NC-T5-5TR1 by 4T1 cells was observed by dark-field imaging, and the uptake peak was determined by inductively coupled plasma mass spectrometry. The radiosensitization effect of NC-T5-5TR1 was evaluated through colony formation assays and in vivo anti-tumor experiments.
Annexin V-FITC/PI double-staining flow cytometry was used to detect the impact of the nanomaterials combined with radiotherapy on apoptosis. The results demonstrated that the particle size of NC-T5-5TR1 is about 2 nm; UV-visible absorption spectroscopy verified its successful construction, and it showed good dispersion. NC-T5-5TR1 significantly inhibited the activity of 4T1 cells and effectively targeted and fluoresced within them. The uptake of NC-T5-5TR1 in the tumor area reached its peak at 3 h. Compared with AgNCs without aptamer modification, NC-T5-5TR1 exhibited superior radiosensitization, and combined radiotherapy significantly inhibited the activity of 4T1 cells and tumor growth in 4T1 tumor-bearing mice. The apoptosis level with NC-T5-5TR1 combined with radiation was significantly increased. These findings provide important theoretical and experimental support for NC-T5-5TR1 as a radiosensitizer for TNBC.
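Radiosensitization claims based on colony formation assays are conventionally quantified by the surviving fraction and a sensitization enhancement ratio (SER). A minimal sketch with hypothetical numbers (not the study's data):

```python
def surviving_fraction(colonies, cells_plated, plating_efficiency):
    """Surviving fraction from a clonogenic (colony formation) assay."""
    return colonies / (cells_plated * plating_efficiency)

def sensitization_enhancement_ratio(dose_control, dose_sensitized):
    """SER: ratio of radiation doses giving the same survival level without
    vs. with the sensitizer; SER > 1 indicates radiosensitization."""
    return dose_control / dose_sensitized

# Hypothetical example: untreated cells need 6 Gy, sensitizer-treated
# cells only 4 Gy, to reach the same 10% survival level
ser = sensitization_enhancement_ratio(6.0, 4.0)
sf = surviving_fraction(colonies=50, cells_plated=1000, plating_efficiency=0.8)
```

An SER above 1 at equal survival is the quantitative form of the "superior radiosensitization" reported for the aptamer-modified clusters.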

Keywords: 5TR1 aptamer, silver nanoclusters, radiosensitization, triple-negative breast cancer

Procedia PDF Downloads 33
220 Liposome Loaded Polysaccharide Based Hydrogels: Promising Delayed Release Biomaterials

Authors: J. Desbrieres, M. Popa, C. Peptu, S. Bacaita

Abstract:

Because of their favorable properties (non-toxicity, biodegradability, mucoadhesivity, etc.), polysaccharides have been studied as biomaterials and as pharmaceutical excipients in drug formulations. These formulations may be produced in a wide variety of forms, including hydrogels, hydrogel-based particles (or capsules), films, etc. In these formulations, the polysaccharide-based materials are able to provide local delivery of the loaded therapeutic agents, but the delivery can be rapid and not easily time-controllable due, in particular, to the burst effect. This leads to a loss in drug efficiency and lifetime. To overcome the consequences of the burst effect, systems involving liposomes incorporated into polysaccharide hydrogels appear as promising materials in tissue engineering, regenerative medicine and drug delivery. Liposomes are spherical self-closed structures, composed of curved lipid bilayers, which enclose part of the surrounding solvent in their interior. Their simplicity of production, their biocompatibility, their cell-like size and composition, the possibility of size adjustment for specific applications, and their ability to load hydrophilic and/or hydrophobic drugs make them a revolutionary tool in nanomedicine and the biomedical domain. Drug delivery systems were developed as hydrogels containing chitosan or carboxymethylcellulose (CMC) as the polysaccharide and gelatin (GEL) as a polypeptide, with phosphatidylcholine or phosphatidylcholine/cholesterol liposomes able to accurately control this delivery, without any burst effect. Hydrogels based on CMC were covalently crosslinked using glutaraldehyde, whereas chitosan-based hydrogels were doubly crosslinked (ionically using sodium tripolyphosphate or sodium sulphate, and covalently using glutaraldehyde). It has been proven that liposome integrity is well protected during the crosslinking procedure that forms the film network. Calcein was used as the model active compound for delivery experiments.
Multi-lamellar vesicles (MLV) and small uni-lamellar vesicles (SUV) were prepared and compared. The liposomes are well distributed throughout the whole area of the film, and the vesicle distribution is equivalent (for both types of liposomes evaluated) on the film surface as well as deeper (100 microns) in the film matrix. An obvious decrease of the burst effect was observed in the presence of liposomes, as well as a uniform increase of calcein release that continues even at long time scales. Liposomes act as an extra barrier to calcein release. Systems containing MLVs release higher amounts of calcein than systems containing SUVs, although these liposomes are more stable in the matrix and diffuse with difficulty. This difference comes from the higher quantity of calcein present within the MLVs in relation to their size. Modeling of the release kinetics curves was performed, and the release of hydrophilic drugs may be described by a multi-scale mechanism characterized by four distinct phases, each described by a different kinetic model (the Higuchi equation, the Korsmeyer-Peppas model, etc.). Knowledge of such models will be a very useful tool for designing new formulations for tissue engineering, regenerative medicine and drug delivery systems.
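The Korsmeyer-Peppas model mentioned above, Mt/M∞ = k·tⁿ, can be fitted to release data by linear regression on its log-log form. A minimal sketch with synthetic data (not the study's measurements):

```python
import math

def fit_korsmeyer_peppas(times, fractions):
    """
    Fit Mt/Minf = k * t^n by least squares on the log-log form
    log(f) = log(k) + n * log(t). Returns (k, n).
    """
    xs = [math.log(t) for t in times]
    ys = [math.log(f) for f in fractions]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    n = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    k = math.exp(mean_y - n * mean_x)
    return k, n

# Synthetic release data generated with k = 0.05, n = 0.5 (Higuchi-like):
times = [1.0, 2.0, 4.0, 8.0, 16.0]
fracs = [0.05 * t ** 0.5 for t in times]
k, n = fit_korsmeyer_peppas(times, fracs)
```

The fitted exponent n is diagnostic: n ≈ 0.5 indicates Fickian diffusion (the Higuchi case), while 0.5 < n < 1 suggests anomalous transport, which is how the distinct release phases can be classified.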

Keywords: controlled and delayed release, hydrogels, liposomes, polysaccharides

Procedia PDF Downloads 204
219 Developing a Geriatric Oral Health Network Is a Public Health Necessity for Older Adults

Authors: Maryam Tabrizi, Shahrzad Aarup

Abstract:

Objectives: Understanding the close association between oral health and overall health for older adults, and providing person-focused treatment at the right time and in the right place through Project ECHO telementoring. Methodology: Data from monthly ECHO telementoring sessions were collected over three years. Sessions included case presentations considering overall health conditions, medications, organ function limitations, and level of cognition. Contributions: Providing specialist-level care to all elderly regardless of their location and other health conditions, and decreasing oral health inequity by increasing the workforce via the Project ECHO telementoring program worldwide. By 2030, the number of adults in the USA over the age of 65 will increase by more than 60% (approx. 46 million), and over 22 million (30%) of 74 million older Americans will need specialized geriatrician care. In 2025, the national shortage of medical geriatricians will be close to 27,000. Most individuals 65 and older do not receive oral health care due to a lack of access, availability, or affordability. One of the main reasons is a significant shortage of oral health (OH) education and resources for the elderly, particularly in rural areas. Poor OH carries social stigma and is a threat to the quality and safety of the overall health of the elderly with physical and cognitive decline. Poor OH conditions may be costly and sometimes life-threatening. Non-traumatic dental-related emergency department use in Texas alone was over $250M in 2016. Most elderly over the age of 65, presenting with one or multiple chronic diseases such as arthritis, diabetes, heart disease, and chronic obstructive pulmonary disease (COPD), are at higher risk of developing gum (periodontal) disease, yet they are less likely to get dental care. In addition, most older adults take both prescription and over-the-counter drugs, and according to scientific studies, many of these medications cause dry mouth.
Reduced saliva flow due to aging and medications may increase the risk of cavities and other oral conditions. Most dental schools have already increased geriatric OH content in their curricula, but the aging population worldwide is growing faster than the number of geriatric dentists. Without the use of advanced technology to create a network between specialists and primary care providers, it is impossible to increase the workforce and provide equitable oral health care to the elderly. Project ECHO is a guided-practice model that revolutionizes health education and increases the workforce to provide best-practice specialty care and reduce health disparities. Training oral health providers to use the Project ECHO model is a logical response to the shortage and increases oral health access for the elderly. Project ECHO trains general dentists and hygienists to provide specialty care services. This means more elderly can get the care they need, in the right place, at the right time, with better treatment outcomes and reduced costs.

Keywords: geriatric, oral health, Project ECHO, chronic disease

Procedia PDF Downloads 154
218 Modeling Discrimination against Gay People: Predictors of Homophobic Behavior against Gay Men among High School Students in Switzerland

Authors: Patrick Weber, Daniel Gredig

Abstract:

Background and Purpose: Research has well documented the impact of discrimination and micro-aggressions on the wellbeing of gay men and, especially, adolescents. For the prevention of homophobic behavior against gay adolescents, however, the focus has to shift to those who discriminate: for the design and tailoring of prevention and intervention, it is important to understand the factors responsible for homophobic behavior such as, for example, verbal abuse. Against this background, the present study aimed to assess homophobic (in terms of verbally abusive) behavior against gay people among high school students. Furthermore, it aimed to establish the predictors of the reported behavior by testing an explanatory model. This model posits that homophobic behavior is determined by negative attitudes and knowledge. These variables are supposed to be predicted by the acceptance of traditional gender roles, religiosity, orientation toward social dominance, contact with gay men, and by the perceived expectations of parents, friends, and teachers. These social-cognitive variables in turn are assumed to be determined by students' gender, age, immigration background, formal school level, and the discussion of gay issues in class. Method: From August to October 2016, we visited 58 high school classes in 22 public schools in a county in Switzerland and asked the 8th- and 9th-year students on three formal school levels to participate in a survey about gender and gay issues. For data collection, we used an anonymous self-administered questionnaire filled in during class. Data were analyzed using descriptive statistics and structural equation modelling (Generalized Least Squares estimation). The sample included 897 students, 334 in the 8th and 563 in the 9th year, aged 12–17, 51.2% female, 48.8% male, and 50.3% with an immigration background.
Results: A proportion of 85.4% of participants reported having made homophobic statements in the 12 months before the survey, 4.7% often or very often. Analysis showed that respondents' homophobic behavior was predicted directly by negative attitudes (β=0.20), as well as by the acceptance of traditional gender roles (β=0.06), religiosity (β=–0.07), contact with gay people (β=0.10), expectations of parents (β=–0.14) and friends (β=–0.19), gender (β=–0.22), and having a South-East European or Western- and Middle-Asian immigration background (β=0.09). These variables were predicted, in turn, by gender, age, immigration background, formal school level, and discussion of gay issues in class (GFI=0.995, AGFI=0.979, SRMR=0.0169, CMIN/df=1.199, p>0.213, adj. R²=0.384). Conclusion: Findings evidence a high prevalence of homophobic behavior among the responding high school students. The tested explanatory model explained 38.4% of the assessed homophobic behavior. However, the data did not fully support the model. Knowledge did not turn out to be a predictor of behavior. Except for the perceived expectations of teachers and orientation toward social dominance, the social-cognitive variables were not fully mediated by attitudes. Equally, gender and immigration background predicted homophobic behavior directly. These findings demonstrate the importance of prevention and also provide leverage points for interventions against anti-gay bias in adolescents, including in social work settings such as school social work, open youth work, or foster care.
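The reported standardized path coefficients can be read as weights in a linear structural equation. As a purely illustrative sketch (synthetic data, not the study's dataset; the predictor names and the simulation itself are our assumptions), the following shows how direct effects of the reported size combine to explain roughly 38% of the variance in a standardized outcome:

```python
import numpy as np

# Standardized direct effects on homophobic behavior, as reported above
betas = {
    "negative_attitudes": 0.20,
    "traditional_gender_roles": 0.06,
    "religiosity": -0.07,
    "contact_with_gay_people": 0.10,
    "parent_expectations": -0.14,
    "friend_expectations": -0.19,
    "gender": -0.22,
    "immigration_background": 0.09,
}

rng = np.random.default_rng(0)
n = 897                                   # sample size reported in the abstract
X = rng.standard_normal((n, len(betas)))  # synthetic standardized predictors
b = np.array(list(betas.values()))
explained = X @ b                         # linear combination of direct effects

# Choose residual noise so the model explains ~38.4% of variance (adj. R2 above)
target_r2 = 0.384
noise_sd = np.sqrt(explained.var() * (1 - target_r2) / target_r2)
y = explained + rng.normal(0.0, noise_sd, n)

# Sample R2 of the "true" structural part
r2 = 1 - ((y - explained) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(r2, 2))
```

With the seed fixed, the recovered sample R² lands close to the targeted 0.384, illustrating why an explanatory model of this size still leaves most behavioral variance unexplained.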

Keywords: discrimination, high school students, gay men, predictors, Switzerland

Procedia PDF Downloads 312
217 A System for Preventing Inadvertent Exposure of Staff Present outside the Operating Theater: Description and Clinical Test

Authors: Aya Al Masri, Kamel Guerchouche, Youssef Laynaoui, Safoin Aktaou, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: Mobile C-arms move between the rooms of the operating theater. Being designed to move between rooms, they are not equipped with relays to retrieve exposure information and export it outside the room. Therefore, no light signaling is available outside the room to warn staff of X-ray emission. Inadvertent exposure of staff outside the operating room is a real radiation-protection problem. The French standard NFC 15-160 requires that (1) access to any room containing an X-ray emitting device be controlled by light signage so that it cannot be crossed inadvertently, and (2) an emergency button be provided to stop X-ray emission. This study presents a system that we developed to meet these requirements and the results of its clinical test. Materials and methods: The system is composed of two communicating boxes. The "DetectBox" is installed inside the operating room; it identifies the various operating states of the C-arm by analyzing its power supply signal and communicates wirelessly with the second box. The "AlertBox" can operate on mains or battery power and is installed outside the operating room; it detects and reports the state of the C-arm by emitting a real-time light signal. This signal can have three different colors: red when the C-arm is emitting X-rays, orange when it is powered on but not emitting X-rays, and green when it is powered off. The two boxes communicate over a radiofrequency link carried exclusively in the Industrial, Scientific and Medical (ISM) frequency bands, which allows several on-site warning systems to coexist without communication conflicts (interference).
Taking into account the complexity of performing electrical work in the operating theater (for reasons of hygiene and continuity of medical care), this system (smaller than 10 cm²) works in complete safety without any intrusion into the mobile C-arm and does not require specific electrical installation work. The system is equipped with an emergency button that stops X-ray emission. The system has been clinically tested. Results: The clinical test shows that the system detects X-rays of both high and low energy (50–150 kVp) and high and low photon flow (0.5–200 mA), even when emitted for a very short time (<1 ms), with a probability of false detection below 10⁻⁵; it operates under all acquisition modes (continuous, pulsed, fluoroscopy, image, subtraction, and movie mode); and it is compatible with all C-arm models and brands. We also tested the communication between the two boxes (DetectBox and AlertBox) under several conditions: (1) unleaded rooms, (2) leaded rooms, and (3) rooms with particular configurations (airlocks, great distances, concrete walls, 3 mm of lead). The results of these last tests were positive. Conclusion: This system is a reliable tool to alert staff present outside the operating room to X-ray emission and ensure their radiation protection.
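The detection-and-signaling logic described above amounts to a three-state mapping from the detected C-arm state to the warning-light color. A minimal sketch (state and color names taken from the description; the class structure is our assumption, not the device firmware):

```python
from enum import Enum

class CArmState(Enum):
    OFF = "off"            # C-arm powered off
    STANDBY = "standby"    # powered on, not emitting X-rays
    EMITTING = "emitting"  # X-rays being emitted

# Light colors shown by the AlertBox outside the room, as described above
SIGNAL = {
    CArmState.OFF: "green",
    CArmState.STANDBY: "orange",
    CArmState.EMITTING: "red",
}

def alert_color(state: CArmState) -> str:
    """Return the warning-light color for a given detected C-arm state."""
    return SIGNAL[state]

print(alert_color(CArmState.EMITTING))  # red
```

In the real device this mapping is driven by analysis of the C-arm's power-supply signal rather than an explicit state variable.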

Keywords: clinical test, inadvertent staff exposure, light signage, operating theater

Procedia PDF Downloads 107
216 Improving the Uptake of Community-Based Multidrug-Resistant Tuberculosis Treatment Model in Nigeria

Authors: A. Abubakar, A. Parsa, S. Walker

Abstract:

Despite advances made in the diagnosis and management of drug-sensitive tuberculosis (TB) over the past decades, treatment of multidrug-resistant tuberculosis (MDR-TB) remains challenging and complex, particularly in high-burden countries including Nigeria. Treatment of MDR-TB is cost-prohibitive, with a success rate generally lower than for drug-sensitive TB; if care is not taken, it may become the dominant form of TB in the future, with many treatment uncertainties and substantial morbidity and mortality. Addressing these challenges requires collaborative effort through sustained research to evaluate the current treatment guidelines, particularly in high-burden countries, and to prevent the progression of resistance. To the best of our knowledge, no research has explored the acceptability, effectiveness, and cost-effectiveness of a community-based MDR-TB treatment model in Nigeria, which is among the high-burden countries; the most similar previous qualitative study examined home-based management of MDR-TB in rural Uganda. This research aimed to explore patients' views and acceptability of a community-based MDR-TB treatment model and to evaluate and compare the effectiveness and cost-effectiveness of community-based versus hospital-based MDR-TB models of care from the Nigerian perspective. Knowledge of patients' views and acceptability of the community-based MDR-TB treatment approach would help in designing future treatment recommendations and in health policymaking. Likewise, knowledge of effectiveness and cost-effectiveness is part of the evidence needed to inform decisions about whether and how to scale up MDR-TB treatment, particularly in poor resource settings with limited knowledge of TB. Mixed methods using qualitative and quantitative approaches were employed. Qualitative data were obtained using in-depth semi-structured interviews with 21 MDR-TB patients in Nigeria to explore their views and acceptability of the community-based MDR-TB treatment model.
Qualitative data collection followed an iterative process which allowed adaptation of the topic guides until data saturation. In-depth interviews were analyzed using thematic analysis. Quantitative data on treatment outcomes were obtained from the medical records of MDR-TB patients to determine effectiveness; direct and indirect costs were obtained from the patients using a validated questionnaire, and health system costs from the donor agencies, to determine the cost-effectiveness difference between the community- and hospital-based models from the Nigerian perspective. Findings: Several themes emerged from the patients' perspectives indicating preference for and high acceptability of the community-based MDR-TB treatment model, alongside mixed feelings about the risk of MDR-TB transmission within the community due to poor infection control. The modeling of the quantitative data is still in progress. Community-based MDR-TB care was seen as the acceptable and most preferred model of care by the majority of participants because of its convenience, which in turn enhanced recovery, enabled social interaction, offered more psychosocial benefits, and averted productivity loss. However, there is a need to strengthen this model of care through enhanced strategies that ensure guideline compliance and infection control, in order to prevent the progression of resistance and curtail community transmission.

Keywords: acceptability, cost-effectiveness, multidrug-resistant TB treatment, community and hospital approach

Procedia PDF Downloads 106
215 Complex Decision Rules in Quality Assurance Processes for Quick Service Restaurant Industry: Human Factors Determining Acceptability

Authors: Brandon Takahashi, Marielle Hanley, Gerry Hanley

Abstract:

The large-scale quick-service restaurant industry is a complex business to manage optimally. With over 40 suppliers providing different ingredients for food preparation and thousands of restaurants serving over 50 unique food offerings across a wide range of regions, the company must implement a quality assurance process. Businesses want to deliver quality food efficiently, reliably, and successfully at a low cost that the public is willing to pay. They also want to make sure that their food offerings are never unsafe to eat or of poor quality. A good reputation (and profitable business) developed over the years can be gone in an instant if customers fall ill eating your food. Poor quality also results in food waste, and the cost of corrective actions is compounded by the reduction in revenue. Product compliance evaluation assesses whether the supplier's ingredients comply with the specifications for several attributes (physical, chemical, organoleptic) that a company tests to ensure that quality, safe-to-eat food is given to the consumer and delivers the same eating experience in all parts of the country. The technical component of the evaluation includes the chemical and physical tests that produce numerical results relating to shelf life, food safety, and organoleptic qualities. The psychological component of the evaluation is organoleptic, i.e., involving the use of the sense organs. The rubric for product compliance evaluation has four levels: (1) Ideal: meeting or exceeding all technical (physical and chemical), organoleptic, and psychological specifications.
(2) Deviation from ideal but no impact on quality: not meeting or exceeding some technical and organoleptic/psychological specifications, without impact on consumer quality, and meeting all food safety requirements. (3) Acceptable: not meeting or exceeding some technical and organoleptic/psychological specifications, resulting in a reduction of consumer quality but not enough to lessen demand, and meeting all food safety requirements. (4) Unacceptable: not meeting food safety requirements, independent of meeting technical and organoleptic specifications; or meeting all food safety requirements but with product quality resulting in consumer rejection of the food offering. Sampling of products and consumer tastings within the distribution network is a second critical element of the quality assurance process and provides the data sources for the statistical analyses. Findings are not assessed independently with the rubric. For example, the chemical data are used to back up and support inferences about the sensory profiles of the ingredients. Certain flavor profiles may not be as apparent when mixed with other ingredients, which leads to weighing specifications differentially in the acceptability decision. Quality assurance processes are essential to achieve the balance of quality and profitability by making sure the food is safe and tastes good, and by identifying and remediating product quality issues before they hit the stores. Comprehensive quality assurance procedures implement human factors methodologies, and this report provides recommendations for the systematic application of quality assurance processes for quick-service restaurant services. This case study reviews the complex decision rubric and evaluates processes to ensure the right balance of cost, quality, and safety is achieved.
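The four-level rubric can be read as a small decision procedure. The sketch below is one possible encoding under assumed boolean inputs (the flag names are ours for illustration, not the company's evaluation system, and they deliberately ignore the differential weighing of specifications described above):

```python
def compliance_level(meets_food_safety: bool,
                     meets_all_specs: bool,
                     quality_reduced: bool,
                     consumer_rejects: bool) -> int:
    """Map evaluation outcomes to the four rubric levels described above.

    Returns 1 (ideal) .. 4 (unacceptable). Flag names are illustrative.
    """
    if not meets_food_safety or consumer_rejects:
        return 4  # unacceptable: unsafe, or safe but rejected by consumers
    if meets_all_specs:
        return 1  # ideal: all technical and organoleptic specs met
    if not quality_reduced:
        return 2  # deviation from ideal, but no impact on consumer quality
    return 3      # acceptable: quality reduced, demand unaffected

print(compliance_level(True, False, True, False))  # 3
```

Note that food safety dominates every other criterion: a safety failure forces level 4 regardless of how well the technical specifications are met, mirroring the rubric's wording.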

Keywords: decision making, food safety, organoleptics, product compliance, quality assurance

Procedia PDF Downloads 171
214 Characteristics of the Mortars Obtained by Radioactive Recycled Sand

Authors: Claudiu Mazilu, Ion Robu, Radu Deju

Abstract:

At the end of 2011, there were 124 power reactors shut down worldwide, of which 16 were fully decommissioned, 50 were in a decommissioning process, 49 were in "safe enclosure" mode, 3 were "entombed", and for the other 6 the decommissioning strategy had not yet been specified. The radioactive concrete waste that will be generated from dismantled structures of the VVR-S nuclear research reactor at Magurele (e.g., the biological shield of the reactor core and hot cells) represents an estimated amount of about 70 tons. Until now, solid low-level radioactive waste (LLW) was pre-placed in containers and cemented with mortar made from cement and natural fine aggregate, providing a container fill ratio of approximately 50 vol.% for concrete. This paper presents an innovative technology in which the radioactive concrete is crushed, and a mortar made from recycled radioactive sand, cement, water, and a superplasticizer agent is poured over radioactive rubble pre-placed in the container for cementation. The result is a radioactive waste package in which the degree of filling with radioactive waste increases substantially. The tests were carried out on non-radioactive material because radioactive concrete was not available in time. Waste concrete with a maximum size of 350 mm was crushed in the first stage with a Liebherr jaw crusher adjusted to a nominal size of 50 mm. Crushed concrete below 50 mm was sieved in order to obtain the 10–50 mm fraction useful for pre-placement.
The >50 mm screening residue from primary crushing was crushed in a second stage to below 2.5 mm, using crushers with different working principles (a Retsch BB 100 jaw crusher and a Buffalo Shuttle WA-12-H hammer crusher), in order to produce recycled fine aggregate (sand) for the filler mortar that fulfills the proposed technical specifications. A series of characteristics of the recycled concrete aggregates was determined per predefined class (granulosity, granule shape, water absorption, behavior in the Los Angeles test, content of attached mortar, etc.), mostly in comparison with the characteristics of natural aggregates. Various mortar recipes were tried in order to identify those that meet the proposed specification (flow rate 16–50 s, no bleeding, minimum 30 N/mm² compressive strength after 28 days, minimum 900 kg/m³ of recycled sand in the mortar) and allow the highest fill ratio for the mortar. To optimize the mortars, the following compositional factors were varied: aggregate nature, water/cement (W/C) ratio, sand/cement (S/C) ratio, and the nature and proportion of the additive. To confirm the results obtained at small scale, a trial was made of filling the mortar into a container that simulates the final storage drums. The measured mortar fill ratio (98.9%) was compared with the results of the laboratory tests and the targets set out in the proposed specification. Although the fill ratio obtained on the mock-up is lower by 0.8 vol.% than that obtained in the laboratory tests (99.7%), the result meets the specification criteria.
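The gain over the old cementation practice can be sketched with simple volume arithmetic: the pre-placed rubble occupies part of the container, and the mortar fills part of the remaining voids. The 50 vol.% and 98.9% figures below come from the abstract, but combining them this way, and reading 98.9% as the void-filling fraction, is our illustrative assumption:

```python
def package_fill_ratio(rubble_fraction: float, mortar_void_fill: float) -> float:
    """Overall fill of the waste container: pre-placed rubble plus mortar
    poured into the remaining voids (both arguments are volume fractions)."""
    return rubble_fraction + (1.0 - rubble_fraction) * mortar_void_fill

# Illustration: rubble pre-placed at 50 vol.% (the figure quoted for the old
# cementation practice) and mortar filling 98.9% of the remaining voids,
# the value measured on the mock-up
print(package_fill_ratio(0.50, 0.989))
```

Under these assumptions the overall package fill approaches 99.5 vol.%, which illustrates why the recycled-sand mortar substantially raises the degree of filling compared with the roughly 50 vol.% achieved by pre-placement alone.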

Keywords: characteristics, radioactive recycled concrete aggregate, mortars, fill ratio

Procedia PDF Downloads 175
213 Partially Aminated Polyacrylamide Hydrogel: A Novel Approach for Temporary Oil and Gas Well Abandonment

Authors: Hamed Movahedi, Nicolas Bovet, Henning Friis Poulsen

Abstract:

Following the advent of the Industrial Revolution, there was a significant increase in the extraction and utilization of hydrocarbon and fossil fuel resources. A new era has since emerged, however, characterized by a shift towards sustainable practices, namely the reduction of carbon emissions and the promotion of renewable energy generation. Given the substantial number of mature oil and gas wells that have been developed in petroleum reservoirs, it is imperative to establish an environmental strategy and adopt appropriate measures to effectively seal and decommission these wells. In general, a cement plug serves as the plugging material. Nevertheless, there exist scenarios in which the durability of such a plug is compromised, leading to the potential escape of hydrocarbons via fissures and fractures within the cement. Furthermore, cement is often not considered a practical solution for temporary plugging, particularly for well sites that have the potential for future gas storage or CO2 injection. The Danish oil and gas industry is a promising candidate for future carbon dioxide (CO2) injection, hence contributing to the implementation of carbon capture strategies within Europe. The primary reservoir component is chalk, a rock characterized by limited permeability. This work focuses on the development and characterization of a novel hydrogel variant. The hydrogel is designed to be injected into a low-permeability reservoir and afterward undergo a transformation into a high-viscosity gel. The primary objective of this research is to explore the potential of this hydrogel as a new solution for effectively plugging well flow. Initially, the synthesis of polyacrylamide was carried out using radical polymerization in the reaction flask.
Subsequently, through application of the Hofmann rearrangement, the polymer chain undergoes partial amination, facilitating its subsequent reaction with the crosslinker and enabling the formation of a hydrogel in the next stage. The organic crosslinker glutaraldehyde was employed to facilitate gel formation, which occurred when the polymeric solution was heated within a specified range of reservoir temperatures. Additionally, a rheological survey and gel time measurements were conducted on several polymeric solutions to determine the optimal concentration. The findings indicate that the gel time depends on the initial concentration and ranges from 4 to 20 hours, hence allowing manipulation to accommodate diverse injection strategies. Moreover, the findings indicate that the gel may be generated in environments characterized by acidity and high salinity. This property ensures the suitability of the material for application in challenging reservoir conditions. The rheological investigation indicates that the polymeric solution behaves as a Herschel-Bulkley fluid with a somewhat elevated yield stress prior to gelation.
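A Herschel-Bulkley fluid obeys τ = τ_y + K·γ̇ⁿ, i.e., it does not flow below the yield stress τ_y and is shear-thinning (n < 1) or shear-thickening (n > 1) above it. A small sketch with illustrative parameters (not values fitted in this study):

```python
def herschel_bulkley_stress(shear_rate: float, tau_y: float,
                            K: float, n: float) -> float:
    """Shear stress (Pa) of a Herschel-Bulkley fluid:
    tau = tau_y + K * gamma_dot**n, for shear rates above yield."""
    return tau_y + K * shear_rate ** n

# Illustrative parameters: yield stress 5 Pa, consistency K = 0.8 Pa.s^n,
# flow index n = 0.6 (shear-thinning)
for rate in (1.0, 10.0, 100.0):
    print(rate, herschel_bulkley_stress(rate, 5.0, 0.8, 0.6))
```

The elevated yield stress noted in the abstract corresponds to a larger τ_y term: the solution resists flow until the applied stress exceeds it, which is precisely the property that makes the pre-gel solution useful for plugging.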

Keywords: polyacrylamide, hofmann rearrangement, rheology, gel time

Procedia PDF Downloads 58
212 Forecasting Thermal Energy Demand in District Heating and Cooling Systems Using Long Short-Term Memory Neural Networks

Authors: Kostas Kouvaris, Anastasia Eleftheriou, Georgios A. Sarantitis, Apostolos Chondronasios

Abstract:

To achieve the objective of almost zero carbon energy solutions by 2050, the EU needs to accelerate the development of integrated, highly efficient and environmentally friendly solutions. In this direction, district heating and cooling (DHC) emerges as a viable and more efficient alternative to conventional, decentralized heating and cooling systems, enabling a combination of more efficient renewable and competitive energy supplies. In this paper, we develop a forecasting tool for near real-time local weather and thermal energy demand predictions for an entire DHC network. In this fashion, we are able to extend the functionality and to improve the energy efficiency of the DHC network by predicting and adjusting the heat load that is distributed from the heat generation plant to the connected buildings via the heat pipe network. Two case studies are considered: one for Vransko, Slovenia, and one for Montpellier, France. The data consist of i) local weather data, such as humidity, temperature, and precipitation, ii) weather forecast data, such as the outdoor temperature, and iii) DHC operational parameters, such as the mass flow rate and the supply and return temperatures. The external temperature is found to be the most important energy-related variable for space conditioning, and thus it is used as an external parameter for the energy demand models. For the development of the forecasting tool, we use state-of-the-art deep neural networks and, more specifically, recurrent networks with long short-term memory cells, which are able to capture complex non-linear relations among temporal variables. Firstly, we develop models to forecast outdoor temperatures for the next 24 hours using local weather data for each case study. Subsequently, we develop models to forecast thermal demand for the same period, taking into consideration past energy demand values as well as the predicted temperature values from the weather forecasting models.
The contributions to the scientific and industrial community are three-fold, and the empirical results are highly encouraging. First, we are able to predict future thermal demand levels for the two locations under consideration with minimal errors. Second, we examine the impact of the outdoor temperature on the predictive ability of the models and how the accuracy of the energy demand forecasts decreases with the forecast horizon. Third, we extend the relevant literature with a new dataset of thermal demand and examine the performance and applicability of machine learning techniques to solve real-world problems. Overall, the solution proposed in this paper is in accordance with EU targets, providing an automated smart energy management system, decreasing human errors and reducing excessive energy production.
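Before recurrent models with long short-term memory cells can be trained, the hourly series must be framed as supervised sequence-to-sequence samples. The sketch below shows one common way to build such windows for a 24-hour-ahead forecast; the 48-hour history window, the synthetic sine-wave series, and the function name are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def make_supervised(series, past_steps=48, horizon=24):
    """Frame a univariate hourly series for sequence forecasting: each sample
    uses `past_steps` hours of history to predict the next `horizon` hours
    (the 24-hour horizon used in the study; the history length is assumed).

    Returns X of shape (samples, past_steps, 1) and y of shape (samples, horizon),
    the layout expected by typical LSTM layers.
    """
    X, y = [], []
    for t in range(past_steps, len(series) - horizon + 1):
        X.append(series[t - past_steps:t])   # history window
        y.append(series[t:t + horizon])      # next 24 hours to predict
    return np.array(X)[..., np.newaxis], np.array(y)

# Stand-in for an hourly thermal demand signal
demand = np.sin(np.linspace(0, 20 * np.pi, 1000))
X, y = make_supervised(demand)
print(X.shape, y.shape)
```

An actual LSTM model would then be fitted on (X, y), with predicted outdoor temperatures concatenated as an extra input channel, as described in the abstract.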

Keywords: machine learning, LSTMs, district heating and cooling system, thermal demand

Procedia PDF Downloads 117
211 Nutrition Budgets in Uganda: Research to Inform Implementation

Authors: Alexis D'Agostino, Amanda Pomeroy

Abstract:

Background: Resource availability is essential to effective implementation of national nutrition policies. To this end, the SPRING Project has collected and analyzed budget data from government ministries in Uganda, international donors, and other nutrition implementers to provide data for the first time on what funding is actually allocated to implement the nutrition activities named in the national nutrition plan. Methodology: USAID's SPRING Project used the Uganda Nutrition Action Plan (UNAP) as the starting point for budget analysis. Through thorough desk reviews, public budgets from government, donors, and NGOs were mapped to activities named in the UNAP and validated by key informants (KIs) across the stakeholder groups. By relying on nationally recognized and locally created documents, SPRING provided a familiar basis for discussions, increasing the credibility and local ownership of findings. Among other things, the KIs validated the amount, source, and type (specific or sensitive) of funding. When only high-level budget data were available, KIs provided rough estimates of the percentage of allocations that were actually nutrition-relevant, allowing the creation of confidence intervals around some funding estimates. Results: After validating the data and narrowing in on estimates of funding for nutrition-relevant programming, researchers applied a formula to estimate overall nutrition allocations. In line with guidance from the SUN Movement and its three-step process, nutrition-specific funding was counted at 100% of its allocation amount, while nutrition-sensitive funding was counted at 25%. The vast majority of nutrition funding in Uganda is off-budget, with over 90 percent of all nutrition funding provided outside the government system. Overall allocations are split nearly evenly between nutrition-specific and -sensitive activities. In FY 2013/14, the two-year study's baseline year, on- and off-budget funding for nutrition was estimated to be around 60 million USD.
While the 60 million USD in allocations compares favorably to the 66 million USD estimate of the cost of the UNAP, not all activities are sufficiently funded. Activities with a focus on behavior change were the most underfunded. In addition, accompanying qualitative research suggested that donor funding for nutrition activities may shift government funding into other areas of work, making it difficult to estimate the sustainability of current nutrition investments. Conclusions: Beyond providing figures, these estimates can be used together with the qualitative results of the study to explain how and why these amounts were allocated to particular activities and not others, to examine the negotiation process that occurred, and to suggest options for improving the flow of finances to UNAP activities for the remainder of the policy's tenure. By the end of the PBN study, several years of nutrition budget estimates will be available to compare changes in funding over time. Halfway through SPRING's work, there is evidence that country stakeholders have begun to feel ownership over the findings, and some ministries are requesting increased technical assistance in nutrition budgeting. Ultimately, these data can be used within organizations to advocate for more and improved nutrition funding and to improve the targeting of nutrition allocations.
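The SUN three-step weighting described above reduces to simple arithmetic: nutrition-specific allocations count in full, nutrition-sensitive allocations at one quarter. A minimal sketch with illustrative figures (not the actual Ugandan budget lines):

```python
def weighted_nutrition_total(specific_usd: float, sensitive_usd: float,
                             sensitive_weight: float = 0.25) -> float:
    """SUN-style weighting used in the study: nutrition-specific allocations
    count at 100%, nutrition-sensitive allocations at 25%."""
    return specific_usd + sensitive_weight * sensitive_usd

# Illustrative only: 40M USD specific and 80M USD sensitive would yield
# 40M + 0.25 * 80M = 60M USD, the order of magnitude reported for FY 2013/14
print(weighted_nutrition_total(40e6, 80e6))
```

The 25% factor is the conventional discount for programs that affect nutrition indirectly; the study applied it uniformly rather than estimating activity-specific shares.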

Keywords: budget, nutrition, financing, scale-up

Procedia PDF Downloads 417
210 Defective Autophagy Disturbs Neural Migration and Network Activity in hiPSC-Derived Cockayne Syndrome B Disease Models

Authors: Julia Kapr, Andrea Rossi, Haribaskar Ramachandran, Marius Pollet, Ilka Egger, Selina Dangeleit, Katharina Koch, Jean Krutmann, Ellen Fritsche

Abstract:

It is widely acknowledged that animal models do not always represent human disease. Human brain development in particular is difficult to model in animals due to a variety of structural and functional species-specificities. This causes significant discrepancies between predicted and apparent drug efficacies in clinical trials and their subsequent failure. Emerging alternatives based on 3D in vitro approaches, such as human brain spheres or organoids, may in the future reduce and ultimately replace animal models. Here, we present a human induced pluripotent stem cell (hiPSC)-based 3D neural in vitro disease model for Cockayne Syndrome B (CSB). CSB is a rare hereditary disease accompanied by severe neurological defects, such as microcephaly, ataxia, and intellectual disability, with currently no treatment options. The aim of this study is therefore to investigate the molecular and cellular defects found in neural hiPSC-derived CSB models; understanding the underlying pathology of CSB enables the development of treatment options. The two CSB models used in this study comprise a patient-derived hiPSC line with its isogenic control, as well as a CSB-deficient cell line created on a healthy hiPSC line (IMR90-4) background, thereby excluding genetic-background-related effects. Neurally induced and differentiated brain sphere cultures were characterized via RNA sequencing, western blot (WB), immunocytochemistry (ICC), and multielectrode arrays (MEAs). CSB deficiency leads to altered gene expression of markers for autophagy, focal adhesion, and neural network formation. Cell migration was significantly reduced and electrical activity was significantly increased in the disease cell lines. These data hint at the cellular pathologies possibly underlying CSB. By induction of autophagy, the migration phenotype could be partially rescued, suggesting a crucial role of disturbed autophagy in the defective neural migration of the disease lines.
Altered autophagy may also lead to inefficient mitophagy. Accordingly, the disease cell lines were shown to have a lower mitochondrial base activity and a higher susceptibility to mitochondrial stress induced by rotenone. Since mitochondria play an important role in neurotransmitter cycling, we suggest that defective mitochondria may lead to the altered electrical activity in the disease cell lines. Failure to clear the defective mitochondria by mitophagy, and thus missing initiation cues for new mitochondrial production, could potentiate this problem. With our data, we aim to establish a disease adverse outcome pathway (AOP), thereby adding to the in-depth understanding of this multi-faceted disorder and subsequently contributing to alternative drug development.

Keywords: autophagy, disease modeling, in vitro, pluripotent stem cells

Procedia PDF Downloads 103
209 Decarbonising Urban Building Heating: A Case Study on the Benefits and Challenges of Fifth-Generation District Heating Networks

Authors: Mazarine Roquet, Pierre Dewallef

Abstract:

The building sector, both residential and tertiary, accounts for a significant share of greenhouse gas emissions. In Belgium, partly due to poor insulation of the building stock, and especially because of the massive use of fossil fuels for heating buildings, this share reaches almost 30%. To reduce carbon emissions from urban building heating, district heating networks emerge as a promising solution, as they offer various assets such as improving the load factor, integrating combined heat and power systems, and enabling energy source diversification, including renewable sources and waste heat recovery. However, mainly for the sake of simple operation, most existing district heating networks still operate at high or medium temperatures ranging between 120°C and 60°C (the so-called second- and third-generation district heating networks). Although these district heating networks offer energy savings in comparison with individual boilers, such temperature levels generally require the use of fossil fuels (mainly natural gas) with combined heat and power. The fourth-generation district heating networks improve transport and energy conversion efficiency by decreasing the operating temperature to between 50°C and 30°C. Yet, to decarbonise building heating, one must increase waste heat recovery and use mainly wind, solar, or geothermal sources for the remaining heat supply. Fifth-generation networks operating between 35°C and 15°C offer the possibility to decrease transport losses even further, to increase the share of waste heat recovery, and to use electricity from renewable resources through heat pumps to generate low-temperature heat. The main objective of this contribution is to exhibit, on a real-life test case, the benefits of replacing an existing third-generation network with a fifth-generation one to decarbonise the heat supply of the building stock.
The second objective of the study is to highlight the difficulties resulting from the use of a fifth-generation, low-temperature, district heating network. To do so, a simulation model of the district heating network, including its regulation, is implemented in the modelling language Modelica. This model is applied to the test case of the heating network on the University of Liège's Sart Tilman campus, consisting of around sixty buildings. The model is validated against monitoring data and then adapted for low-temperature networks. A comparison of primary energy consumption as well as CO2 emissions is made between the two cases to underline the benefits in terms of energy independence and GHG emissions. To highlight the complexity of operating a low-temperature network, the difficulty of adapting the mass flow rate to the heat demand is considered. This shows the delicate balance between thermal comfort and the electrical consumption of the circulation pumps. Several control strategies are considered and compared in terms of global energy savings. The developed model can be used to assess the potential for energy and CO2 emissions savings when retrofitting an existing network or designing a new one.
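The trade-off described above between thermal comfort and pump consumption can be illustrated with a back-of-the-envelope calculation (this is an illustrative sketch, not the authors' Modelica model; the heat demand, temperature differences, and reference pump power below are assumed values). For a fixed heat demand Q = ṁ·cp·ΔT, a lower supply/return temperature difference forces a higher mass flow rate, while hydraulic pump power grows roughly with the cube of the flow rate (pump affinity laws):

```python
CP_WATER = 4186.0  # specific heat of water, J/(kg.K)

def mass_flow(q_demand_w: float, delta_t_k: float) -> float:
    """Mass flow rate (kg/s) needed to deliver q_demand_w at temperature drop delta_t_k."""
    return q_demand_w / (CP_WATER * delta_t_k)

def pump_power(m_dot: float, m_ref: float, p_ref_w: float) -> float:
    """Pump affinity law: power scales with the cube of the flow-rate ratio."""
    return p_ref_w * (m_dot / m_ref) ** 3

q = 500e3                      # 500 kW heat demand (assumed)
m3 = mass_flow(q, 30.0)        # third-generation network, dT ~ 30 K
m5 = mass_flow(q, 10.0)        # fifth-generation network, dT ~ 10 K
p3 = pump_power(m3, m3, 2e3)   # 2 kW reference pump power (assumed)
p5 = pump_power(m5, m3, 2e3)

print(f"3rd gen: {m3:.1f} kg/s, pump ~ {p3/1e3:.1f} kW")
print(f"5th gen: {m5:.1f} kg/s, pump ~ {p5/1e3:.1f} kW")
```

With a three-times smaller ΔT, the flow rate triples and the pump power grows by a factor of about 27, which is why the flow-rate control strategy matters so much in low-temperature networks.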

Keywords: building simulation, fifth-generation district heating network, low-temperature district heating network, urban building heating

Procedia PDF Downloads 54
208 Identification of Hub Genes in the Development of Atherosclerosis

Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia

Abstract:

Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that obstruct blood flow and trigger cardiovascular diseases such as heart attack and stroke. The underlying molecular mechanisms remain unclear, although many studies have revealed endothelial cell dysfunction, recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease’s molecular mechanisms. Through the analysis of microarray data, we examined gene expression in the media and neo-intima from plaques, as well as in distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly noting an increase in monocytes and macrophages. In the weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with the blue, dark-olive-green, and sky-blue modules being the most significant. These modules corresponded respectively to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns (module morphologies), a total of 2509 key genes (gene significance > 0.2, module membership > 0.8) were extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLIN2, and ACADL) were then identified by intersecting the 2509 key genes and the DEGs with lipid-related genes from the GeneCards database.
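The hub-gene selection step above reduces to a three-way set intersection: a gene qualifies only if it is a WGCNA key-module gene, a DEG, and a lipid-related gene. A minimal sketch of that filter (the gene lists below are illustrative placeholders, not the study's actual data):

```python
def hub_genes(key_module_genes, degs, lipid_genes):
    """Return the genes present in all three candidate sets, sorted by name."""
    return sorted(set(key_module_genes) & set(degs) & set(lipid_genes))

# Placeholder inputs standing in for the 2509 key genes, the DEGs,
# and the GeneCards lipid-related gene list.
key_modules   = {"CD36", "DPP4", "HMOX1", "ABCA1", "IL6"}
degs          = {"CD36", "DPP4", "HMOX1", "TNF", "ABCA1"}
lipid_related = {"CD36", "DPP4", "HMOX1", "LDLR"}

print(hub_genes(key_modules, degs, lipid_related))  # → ['CD36', 'DPP4', 'HMOX1']
```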
The discriminative power of the six hub genes was assessed with a classifier that achieved an area under the curve (AUC) of 0.873 in the ROC plot, indicating excellent efficacy in differentiating between the disease and control groups. Moreover, PCA visualization demonstrated a clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, thereby broadening the potential for clinical applications and therapeutic discoveries.
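The reported AUC of 0.873 can be read as the probability that a randomly chosen disease sample receives a higher classifier score than a randomly chosen control sample. A compact way to compute it is the Mann-Whitney U formulation (an assumed, generic implementation for illustration, not the authors' pipeline; the labels and scores below are toy values):

```python
def auc_score(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic (ties count as 0.5 a win)."""
    pos = [s for label, s in zip(labels, scores) if label == 1]
    neg = [s for label, s in zip(labels, scores) if label == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = disease, 0 = control; scores from a hypothetical classifier.
print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```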

Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics

Procedia PDF Downloads 38