Search results for: finite element models
829 Predicting Success and Failure in Drug Development Using Text Analysis
Authors: Zhi Hao Chow, Cian Mulligan, Jack Walsh, Antonio Garzon Vico, Dimitar Krastev
Abstract:
Drug development is resource-intensive, time-consuming, and increasingly expensive with each developmental stage. The success rates of drug development are also relatively low, and the resources committed are wasted with each failed candidate. As such, a reliable method of predicting the success of drug development is in demand. The hypothesis was that some failed drug candidates are pushed through developmental pipelines based on false confidence and may possess common linguistic features identifiable through sentiment analysis. Here, the concept of using text analysis to discover such features in research publications and investor reports as predictors of success was explored. RStudio was used to perform text mining and lexicon-based sentiment analysis to identify affective phrases and determine their frequency in each document, and SPSS was then used to determine the relationship between the defined variables and the accuracy of predicting outcomes. A total of 161 publications were collected and categorised into 4 groups: (i) cancer treatment, (ii) neurodegenerative disease treatment, (iii) vaccines, and (iv) others (containing all other drugs that do not fit into the first 3 categories). Text analysis was then performed on each document within each drug category using 2 separate lexicons (BING and AFINN) in R to determine the frequency of positive or negative phrases in each document. Relative positivity and negativity values were then calculated by dividing the frequency of phrases by the word count of each document. Regression analysis was then performed with SPSS statistical software on each dataset (values from using the BING or AFINN lexicon during text analysis) using a random selection of 61 documents to construct a model. The remaining documents were then used to determine the predictive power of the models. The model constructed from BING predicted the outcome of drug performance in clinical trials with an overall accuracy of 65.3%.
The AFINN model had a lower accuracy at predicting outcomes than the BING model, at 62.5%, and was not effective at predicting the failure of drugs in clinical trials. Overall, the study did not show significant efficacy of the models at predicting the outcomes of drugs in development. Many improvements may need to be made in later iterations of the model to sufficiently increase the accuracy.
Keywords: data analysis, drug development, sentiment analysis, text-mining
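The scoring step described above (lexicon-phrase frequency divided by document word count) can be sketched as follows; the study used the full BING and AFINN lexicons in R, so the tiny word lists here are purely illustrative:

```python
# Sketch of lexicon-based sentiment scoring. The mini-lexicons below are
# hypothetical stand-ins for the full BING lexicon used in the study.
import re

BING_POSITIVE = {"effective", "significant", "improved", "success"}
BING_NEGATIVE = {"failure", "adverse", "toxicity", "ineffective"}

def relative_sentiment(text):
    """Return (relative positivity, relative negativity): the count of
    lexicon words divided by the total word count of the document."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0, 0.0
    pos = sum(w in BING_POSITIVE for w in words)
    neg = sum(w in BING_NEGATIVE for w in words)
    return pos / len(words), neg / len(words)
```

Each document yields one (positivity, negativity) pair, which then serves as the predictor in the regression model.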
Procedia PDF Downloads 160
828 Towards a Vulnerability Model Assessment of the Alexandra Jukskei Catchment in South Africa
Authors: Vhuhwavho Gadisi, Rebecca Alowo, German Nkhonjera
Abstract:
This article details an investigation of groundwater management in the Jukskei Catchment of South Africa through spatial mapping of key hydrological relationships, interactions, and parameters in catchments. The Department of Water Affairs (DWA) noted gaps in the implementation of the South African National Water Act 1998, article 16, including the lack of appropriate models for dealing with water quantity parameters. For this reason, this research conducted a DRASTIC GIS-based groundwater assessment to improve the groundwater monitoring system in the Jukskei River basin catchment of South Africa. The methodology was a mixed-methods design that involved the use of DRASTIC analysis, a questionnaire, a literature review, and observations to gather information on how to help people who use the Jukskei River. GIS (geographical information system) mapping was carried out using the seven-parameter DRASTIC (Depth to water, Recharge, Aquifer media, Soil media, Topography, Impact of the vadose zone, hydraulic Conductivity) vulnerability methodology. In addition, the developed vulnerability map was subjected to sensitivity analysis as a validation method. This approach included single-parameter sensitivity analysis, map removal sensitivity analysis, and correlation analysis of the DRASTIC parameters. The findings were that approximately 5.7% (45 km²) of the area in the northern part of the Jukskei watershed is highly vulnerable. Approximately 53.6% (428.8 km²) of the basin is also at high risk of groundwater contamination; this area is mainly located in the central, north-eastern, and western areas of the sub-basin. The medium and low vulnerability classes cover approximately 18.1% (144.8 km²) and 21.7% (168 km²) of the Jukskei catchment, respectively. The shallow groundwater of the Jukskei River thus lies in a highly vulnerable area.
Sensitivity analysis indicated that depth to water, recharge, aquifer media, soil media, and topography were the main factors contributing to the vulnerability assessment. The conclusion is that the final vulnerability map shows the Jukskei catchment to be highly susceptible to pollution; protective measures are therefore needed for sustainable management of groundwater resources in the study area.
Keywords: contamination, DRASTIC, groundwater, vulnerability, model
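The DRASTIC overlay itself is a weighted sum of the seven parameter ratings; a minimal sketch using the standard DRASTIC weights follows (the study's cell ratings and class thresholds are not given in the abstract, so the thresholds below are assumptions):

```python
# Standard DRASTIC parameter weights: Depth to water (5), Recharge (4),
# Aquifer media (3), Soil media (2), Topography (1), Impact of the
# vadose zone (5), hydraulic Conductivity (3).
DRASTIC_WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    """Weighted sum of the seven parameter ratings (1-10) for one grid cell.
    In practice the ratings come from the GIS layers."""
    return sum(DRASTIC_WEIGHTS[p] * r for p, r in ratings.items())

def vulnerability_class(index):
    """Illustrative class breaks (assumed, not the study's thresholds)."""
    if index >= 160:
        return "high"
    if index >= 120:
        return "medium"
    return "low"
```

With ratings on the 1-10 scale the index ranges from 23 to 230; the classified index values are what get mapped per cell.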
Procedia PDF Downloads 83
827 Object-Scene: Deep Convolutional Representation for Scene Classification
Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang
Abstract:
Traditional image classification is based on encoding schemes (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) built on low-level image features (e.g., SIFT, HOG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. For scene classification, there are scattered objects of different sizes, categories, layouts, and numbers, so it is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit the object-centric and scene-centric information, two CNNs, trained separately on the ImageNet and Places datasets, are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of different CNNs at multiple scales, it is found that each CNN works better in different scale ranges; a scale-wise CNN adaptation is reasonable since the objects in a scene each appear at their own specific scale. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the per-scale representations are then merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different amounts of features are extracted at each. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, which is a simple yet efficient way to classify the scene categories.
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the accuracy on MIT Indoor67 from 74.03% to 79.43% when only two scales are used (compared to the results at a single scale). The result is comparable to state-of-the-art performance, which shows that the representation can be applied to other visual recognition tasks.
Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization
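The scale-wise normalization and average pooling step can be sketched as follows; this is a simplified stand-in that L2-normalizes one aggregated vector per scale (the Fisher Vector encoding that produces each per-scale vector is omitted for brevity):

```python
import numpy as np

def scale_wise_merge(per_scale_vectors):
    """L2-normalize the aggregated vector from each scale, then
    average-pool into a single global representation so that no scale
    dominates just because more local features were extracted there."""
    normed = []
    for v in per_scale_vectors:
        v = np.asarray(v, dtype=float)
        norm = np.linalg.norm(v)
        normed.append(v / norm if norm > 0 else v)
    return np.mean(normed, axis=0)
```

The merged vector is what would be passed to the linear SVM.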
Procedia PDF Downloads 333
826 Application of the Reliability Method for the Analysis of the Stability Limit States of Large Concrete Dams
Authors: Mustapha Kamel Mihoubi, Essadik Kerkar, Abdelhamid Hebbouche
Abstract:
Given the randomness of most of the factors affecting the stability of a gravity dam, probability theory is generally used to assess the risk of failure; there is no sharp transition from the stable state to the failed state, so the stability failure process is treated as a probabilistic event. Controlling the risk of failure is of capital importance, based on a cross analysis of the gravity of the consequences and the probability of occurrence of identified major accidents, which can pose a significant risk to concrete dam structures. Probabilistic risk analysis models are used to provide a better understanding of the reliability and structural failure of such works, including when calculating the stability of large structures exposed to major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods used in engineering; in our case, level II methods are applied via a limit-state study. Hence, the probability of failure is estimated by analytical methods of the FORM (First Order Reliability Method) and SORM (Second Order Reliability Method) type. By way of comparison, a level III method was also used, which generates a full analysis of the problem and involves integrating the joint probability density function of the random variables over the safety domain by means of Monte Carlo simulation.
Taking into account the change in stresses under the normal, exceptional, and extreme load combinations acting on the dam, the calculation results obtained provided acceptable failure probability values which largely corroborate the theory: the probability of failure tends to increase with increasing load intensity, causing a significant decrease in strength, especially in the presence of unique and extreme load combinations. Shear forces then induce sliding that threatens the reliability of the structure through intolerable values of the probability of failure, especially in the case of increased uplift under a hypothetical failure of the drainage system.
Keywords: dam, failure, limit state, Monte Carlo, reliability, probability, sliding, Taylor
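The level III step can be illustrated with a minimal Monte Carlo estimate of the failure probability for a simple limit state g = R − S with independent normal variables; the distributions here are assumptions for illustration, not the dam-specific load and resistance models of the study:

```python
import numpy as np

def mc_failure_probability(mu_r, sd_r, mu_s, sd_s, n=200_000, seed=0):
    """Level III estimate of P[g < 0] for the limit state g = R - S,
    with independent normal resistance R and load effect S."""
    rng = np.random.default_rng(seed)
    r = rng.normal(mu_r, sd_r, n)
    s = rng.normal(mu_s, sd_s, n)
    return float(np.mean(r - s < 0.0))
```

FORM/SORM would instead locate the design point on g = 0 analytically; the sampling estimate above serves as the reference the level II results are compared against.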
Procedia PDF Downloads 319
825 Regression-Based Approach for Development of a Cuff-Less Non-Intrusive Cardiovascular Health Monitor
Authors: Pranav Gulati, Isha Sharma
Abstract:
Hypertension and hypotension are known to have repercussions on the health of an individual, with hypertension increasing the risk of cardiovascular diseases and hypotension resulting in syncope. This prompts the development of a non-invasive, non-intrusive, continuous, and cuff-less blood pressure monitoring system to detect blood pressure variations and to identify individuals with acute and chronic heart ailments; due to the unavailability of such devices for practical daily use, it is currently difficult to screen and subsequently regulate blood pressure. The complexities which hamper the steady monitoring of blood pressure comprise the variations in physical characteristics from individual to individual and the postural differences at the site of monitoring. We propose to develop a continuous, comprehensive cardio-analysis tool based on reflective photoplethysmography (PPG). The proposed device, in the form of eyewear, captures the PPG signal and estimates the systolic and diastolic blood pressure using a sensor positioned near the temporal artery. This system relies on regression models based on the extraction of key points from a pair of PPG wavelets. The proposed system provides an edge over existing wearables considering that it allows for uniform contact and pressure with the temporal site, in addition to minimal disturbance by movement. Additionally, the feature extraction algorithms enhance the integrity and quality of the extracted features by reducing unreliable data sets. We tested the system with 12 subjects, of which 6 served as the training dataset. For this, we measured the blood pressure using a cuff-based BP monitor (Omron HEM-8712) and at the same time recorded the PPG signal from our cardio-analysis tool. The complete test was conducted by using the cuff-based blood pressure monitor on the left arm while the PPG signal was acquired from the temporal site on the left side of the head.
This acquisition served as the training input for the regression model on the selected features. The other 6 subjects were used to validate the model by conducting the same test on them. Results show that the developed prototype can robustly acquire the PPG signal and can therefore be used to reliably predict blood pressure levels.
Keywords: blood pressure, photoplethysmograph, eyewear, physiological monitoring
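The regression step can be sketched as an ordinary least-squares fit of blood pressure against PPG-derived features; the feature values below are illustrative, not the study's actual extracted key points:

```python
import numpy as np

def fit_bp_model(features, systolic):
    """Least-squares fit of systolic BP against PPG-derived features.
    Returns the intercept followed by one coefficient per feature."""
    X = np.column_stack([np.ones(len(features)), features])
    coef, *_ = np.linalg.lstsq(X, systolic, rcond=None)
    return coef

def predict_bp(coef, features):
    """Apply the fitted model to new feature rows."""
    X = np.column_stack([np.ones(len(features)), features])
    return X @ coef
```

The training subjects' cuff readings play the role of `systolic`; the held-out subjects are scored with `predict_bp`.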
Procedia PDF Downloads 279
824 Urban Logistics Dynamics: A User-Centric Approach to Traffic Modelling and Kinetic Parameter Analysis
Authors: Emilienne Lardy, Eric Ballot, Mariam Lafkihi
Abstract:
Efficient urban logistics requires a comprehensive understanding of traffic dynamics, particularly as it pertains to the kinetic parameters influencing energy consumption and trip duration estimates. While real-time traffic information is increasingly accessible, current high-precision forecasting services embedded in route planning often function as opaque 'black boxes' for users. These services, typically relying on AI-processed counting data, fall short in accommodating the open design parameters essential for management studies, notably within supply chain management. This work revisits the modelling of traffic conditions in the context of city logistics, emphasizing its significance from the user's point of view, with two focuses. Firstly, the focus is not on the vehicle flow but on the vehicles themselves and the impact of traffic conditions on their driving behaviour. This means opening the range of studied indicators beyond vehicle speed, to describe extensively the kinetic and dynamic aspects of driving behaviour. To achieve this, we leverage the Art.Kinema parameters, which are designed to characterize driving cycles. Secondly, this study examines how the driving context (i.e., factors exogenous to the traffic flow) determines the mentioned driving behaviour. Specifically, we explore how accurately the kinetic behaviour of a vehicle can be predicted based on a limited set of exogenous factors, such as time, day, road type, orientation, slope, and weather conditions. To answer this question, statistical analysis was conducted on real-world driving data, which include high-frequency measurements of vehicle speed. A Factor Analysis and a Generalized Linear Model have been established to link the kinetic parameters with independent categorical contextual variables.
The results include an assessment of the goodness of fit and the robustness of the models, as well as an overview of the models' outputs.
Keywords: factor analysis, generalised linear model, real-world driving data, traffic congestion, urban logistics, vehicle kinematics
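As an illustration of the kinetic indicators discussed above, a few Art.Kinema-style parameters can be computed directly from a speed trace; the indicator choices and the idle threshold below are assumptions for the sketch, not the study's exact definitions:

```python
import numpy as np

def kinematic_parameters(speed_mps, dt=1.0):
    """A few driving-cycle indicators from a fixed-rate speed trace:
    mean speed, share of (near-)idle samples, mean positive acceleration.
    The 0.1 m/s idle threshold is an assumption."""
    v = np.asarray(speed_mps, dtype=float)
    a = np.diff(v) / dt          # finite-difference acceleration
    pos = a[a > 0]
    return {
        "mean_speed": float(v.mean()),
        "idle_share": float(np.mean(v < 0.1)),
        "mean_pos_accel": float(pos.mean()) if pos.size else 0.0,
    }
```

Indicators like these would form the dependent variables that the Factor Analysis and Generalized Linear Model relate to the contextual factors.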
Procedia PDF Downloads 67
823 Self-Inflating Soft Tissue Expander Outcome for Alveolar Ridge Augmentation: A Randomized Controlled Clinical and Histological Study
Authors: Alaa T. Ali, Nevine H. Kheir El Din, Ehab S. Abdelhamid, Ahmed E. Amr
Abstract:
Objective: Severe alveolar bone resorption is usually associated with a deficient amount of soft tissue. Soft tissue expansion is introduced to provide an adequate amount of soft tissue over the grafted area. This study aimed to assess the efficacy of sub-periosteal self-inflating osmotic tissue expanders used as preparatory surgery before horizontal alveolar ridge augmentation using autogenous onlay block bone grafts. Methods: A prospective randomized controlled clinical trial was performed. Sixteen partially edentulous patients requiring horizontal bone augmentation in the anterior maxilla were randomly assigned to horizontal ridge augmentation with autogenous bone block grafts harvested from the mandibular symphysis. For the test group, soft tissue expanders were placed sub-periosteally before horizontal ridge augmentation. Impressions were taken before and after soft tissue expansion (STE), and the cast models were optically scanned and superimposed for volumetric analysis. Horizontal ridge augmentation was carried out after STE completion. For the control group, a periosteal releasing incision was performed during the bone augmentation procedures. Implants were placed in both groups at re-entry surgery after a six-month period, and a core biopsy was taken. Histomorphometric assessments of newly formed bone surface area, mature collagen area fraction, osteoblast count, and blood vessel count were performed. The change in alveolar ridge width was evaluated with a bone caliper and CBCT. Results: The soft tissue expander successfully provided a surplus amount of soft tissue in 5 out of 8 patients in the test group. Complications during the expansion period included perforation through the oral mucosa in two patients and infection in one patient. The mean soft tissue volume gain was 393.9 ± 322 mm³ after 6 months. The mean horizontal bone gains for the test and control groups were 3.14 mm and 3.69 mm, respectively.
Conclusion: STE with a sub-periosteal approach is an applicable method to gain additional soft tissue and to reduce bone block graft exposure and wound dehiscence.
Keywords: soft tissue expander, ridge augmentation, block graft, symphysis bone block
Procedia PDF Downloads 126
822 Nursing Students' Opinions about Theoretical Lessons and Clinical Area: A Survey in a Nursing Department
Authors: Ergin Toros, Manar Aslan
Abstract:
This study was planned as a descriptive study in order to learn the opinions of students studying in a nursing undergraduate program about their theoretical/practical lessons and departments. Education in undergraduate nursing programs has great importance because it provides the knowledge and skills that prepare student nurses for the clinic. In order to provide quality nursing services in the future, the quality of nursing education should be measured, and the opinions of student nurses about their education should be taken. The research population was composed of students educated in a university with 1-4 years of theoretical and clinical education (N=550), and the sample was composed of 460 students who accepted to take part in the study, reaching 83.6% of the target population. Data were collected through a survey developed by the researchers. The survey consists of 48 questions about sociodemographic characteristics (9 questions), theoretical courses (9 questions), laboratory applications (7 questions), clinical education (14 questions), and services provided by the faculty (9 questions). It was determined that 83.3% of the nursing students found the nursing profession to be suitable for them, 53% of them selected nursing because of easy job opportunities, and 48.9% of them stayed in a state dormitory. Regarding the theoretical courses, 84.6% of the students agreed with the statement 'The course schedule is prepared before the course and published on the university web page,' while 28.7% of them disagreed with the statement 'Feedback is given to students about the assignments they prepare.' It was determined that 41.5% of the students agreed that 'The time allocated to laboratory applications is sufficient.' Students also said that the physical conditions in the laboratory (41.5%) and the materials used (44.6%) are insufficient, and that 'The number of students in the group is not appropriate for laboratory applications' (45.2%).
71.3% of the students think that the nurses in the clinics view the students as a tool to reduce the workload, 40.7% of them reported that nurses in the clinical area did not help with the purposes of the course, and 39.6% of them said that nurses' communication with students is not good. 37.8% of students stated that nurses did not provide orientation to students, and 37.2% of them think that nurses are not role models for students. 53.7% of the students stated that the incentive and support for the student exchange program were insufficient; 48% of the students think that career planning services, 47.2% that security services, and 45.4% that the time advisors spend with students are not enough. It was determined that nursing students are most disturbed by the approach of the nurses in the clinical area within the undergraduate education program. Clinical education, considered an integral part of nursing education, is important and affects student satisfaction.
Keywords: nursing education, student, clinical area, opinion
Procedia PDF Downloads 176
821 Analysis of Structural Modeling on Digital English Learning Strategy Use
Authors: Gyoomi Kim, Jiyoung Bae
Abstract:
The purpose of this study was to propose a framework that verifies the structural relationships among students' use of digital English learning strategies (DELS), affective domains, and their individual variables. The study developed a hypothetical model based on previous studies on language learning strategy use as well as digital language learning. The participants were 720 Korean high school students and 430 university students. The instrument was a self-response questionnaire that contained 70 question items based on Oxford's SILL (Strategy Inventory for Language Learning) as well as previous studies on language learning strategies in digital learning environments, in order to measure DELS and affective domains. The collected data were analyzed through structural equation modeling (SEM). This study used quantitative data analysis procedures: exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). Firstly, the EFA was conducted in order to verify the hypothetical model; factor analysis was conducted first to identify the underlying relationships between the measured variables of DELS and the affective domain. The hypothetical model was established with six indicators of learning strategies (memory, cognitive, compensation, metacognitive, affective, and social strategies) under the latent variable of the use of DELS. In addition, the model included four indicators (self-confidence, interests, self-regulation, and attitude toward digital learning) under the latent variable of learners' affective domain. Secondly, the CFA was used to determine the suitability of the data and research models, so all data from the present study were used to assess model fit.
Lastly, the model also included individual learner factors as covariates; the five constructs selected were learners' gender, level of English proficiency, duration of English learning, period of using digital devices, and previous experience of digital English learning. The results of the SEM analysis propose a theoretical model that shows the structural relationships between Korean students' use of DELS and their affective domains. The results of this study therefore help ESL/EFL teachers understand how learners use and develop appropriate learning strategies in digital learning contexts. Pedagogical implications and suggestions for further study will also be presented.
Keywords: Digital English Learning Strategy, DELS, individual variables, learners' affective domains, Structural Equation Modeling, SEM
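The EFA extraction step can be sketched as an eigendecomposition of the correlation matrix; this is a bare-bones illustration (principal-component extraction only, with no rotation and no CFA/SEM fitting, and the data are not the study's questionnaire responses):

```python
import numpy as np

def extract_loadings(data, n_factors):
    """Principal-component extraction step of an EFA: loadings are the
    leading eigenvectors of the correlation matrix, each scaled by the
    square root of its eigenvalue."""
    corr = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    vals, vecs = np.linalg.eigh(corr)
    order = np.argsort(vals)[::-1][:n_factors]
    # guard against tiny negative eigenvalues from round-off
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```

In a full analysis, the retained loadings would be rotated and the resulting factor structure carried into the CFA/SEM stage.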
Procedia PDF Downloads 125
820 Inhibition Theory: The Development of Subjective Happiness and Life Satisfaction after Experiencing Severe, Traumatic Life Events (Paraplegia)
Authors: Tanja Ecken, Laura Fricke, Anika Steger, Maren M. Michaelsen, Tobias Esch
Abstract:
Studies and applied experience show that severe and traumatic accidents not only require physical rehabilitation and recovery but also necessitate psychological adaptation and reorganization in response to the changed living conditions. Neurobiological models underpinning the experience of happiness and satisfaction postulate that life shocks can potentially enhance the experience of happiness and life satisfaction, i.e., posttraumatic growth (PTG). The present study aims to provide an in-depth understanding of the underlying psychological processes of PTG and to outline its consequences for subjective happiness and life satisfaction. To explore this, Esch's (2022) ABC Model was used as guidance for the development of a questionnaire assessing changes in happiness and life satisfaction, and for a schematic model postulating the development of PTG in the context of paraplegia. A two-stage qualitative interview procedure explored participants' experiences of paraplegia. Specifically, narrative, semi-structured interviews (N=28) focused on the time before and after the accident, the availability of supportive resources, and potential changes in the perception of happiness and life satisfaction. Qualitative analysis (Grounded Theory) indicated that an initial phase of reorganization was followed by a gradual psychological adaptation to novel, albeit reduced, opportunities in life. Participants reportedly experienced a 'compelled' slowing down and elements of mindfulness, subsequently instilling a sense of gratitude and joy in relation to life's presumed trivialities. Despite physical limitations and difficulties, participants reported an enhanced ability to relate to themselves and others and a reduction of perceived everyday nuisances. In conclusion, PTG can be experienced in response to severe, traumatic life events and has the potential to enrich the lives of affected persons in numerous, unexpected, and yet challenging ways.
PTG appears to be a spectrum comprising an interplay of internal and external resources underpinned by neurobiological processes. Participants experienced PTG irrespective of age, gender, marital status, income, or level of education.
Keywords: inhibition theory, posttraumatic growth, trauma, stress, life satisfaction, subjective happiness, traumatic life events, paraplegia
Procedia PDF Downloads 86
819 Berberine Ameliorates Glucocorticoid-Induced Hyperglycemia: An In-Vitro and In-Vivo Study
Authors: Mrinal Gupta, Mohammad Rumman, Babita Singh, Abbas Ali Mahdi, Shivani Pandey
Abstract:
Introduction: Berberine (BBR), a bioactive compound isolated from Coptidis Rhizoma, possesses diverse pharmacological activities, including anti-bacterial, anti-inflammatory, antitumor, hypolipidemic, and anti-diabetic effects. However, its role as an anti-diabetic agent in animal models of dexamethasone (Dex)-induced diabetes remains unknown. Studies have shown that natural compounds, including aloe, caper, cinnamon, cocoa, green and black tea, and turmeric, can be used for treating type 2 diabetes mellitus (DM). Compared to conventional drugs, natural compounds have fewer side effects and are easily available. Herein, we studied the anti-diabetic effects of BBR in a mouse model of Dex-induced diabetes. Methods: The HepG2 cell line was used for glucose release and glycogen synthesis studies. Cell proliferation was measured by the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay. For animal studies, mice were treated with Dex (2 mg/kg, i.m.) for 30 days, and the effect of BBR at doses of 100, 200, and 500 mg/kg (p.o.) was analyzed. Glucose, insulin, and pyruvate tolerance tests were performed to evaluate the development of the diabetic model. An EchoMRI scan was performed to assess fat mass. Further, to elucidate the mechanism of action of BBR, the mRNA expression of genes regulating gluconeogenesis, glucose uptake, and glycolysis was analyzed. Results: In vitro, BBR had no impact on cell viability up to a concentration of 50 μM. Moreover, BBR suppressed hepatic glucose release and improved glucose tolerance in HepG2 cells. In vivo, BBR improved glucose homeostasis in diabetic mice, as evidenced by enhanced glucose clearance, increased glycolysis, elevated glucose uptake, and decreased gluconeogenesis. Further, Dex treatment increased the total fat mass in mice, which was ameliorated by BBR treatment. Conclusion: BBR improves glucose tolerance by increasing glucose clearance, inhibiting hepatic glucose release, and decreasing obesity.
Thus, BBR may become a potential therapeutic agent for treating glucocorticoid-induced diabetes and obesity in the future.
Keywords: glucocorticoid, hyperglycemia, berberine, HepG2 cells, insulin resistance, glucose
Procedia PDF Downloads 64
818 Cost Overruns in Mega Projects: Project Progress Prediction with Probabilistic Methods
Authors: Yasaman Ashrafi, Stephen Kajewski, Annastiina Silvennoinen, Madhav Nepal
Abstract:
Mega projects, whether in the construction, urban development, or energy sectors, are among the key drivers that build the foundation of wealth and modern civilization in regions and nations. Such projects require economic justification and substantial capital investment, often derived from individual and corporate investors as well as governments. Cost overruns and time delays in these mega projects demand a new approach to more accurately predict project costs and establish realistic financial plans. The significance of this paper is that the cost efficiency of megaprojects will improve and cost overruns will decrease. This research will assist project managers (PMs) in making timely and appropriate decisions about both the cost and outcomes of ongoing projects. This research therefore examines the oil and gas industry, where most mega projects apply the classic methods of the Cost Performance Index (CPI) and Schedule Performance Index (SPI) and rely on project data to forecast cost and time. Because these projects always overrun in cost and time, even in the early phase of the project, the probabilistic methods of Monte Carlo Simulation (MCS) and Bayesian adaptive forecasting were used to predict project cost at completion. The current theoretical and mathematical models, which forecast the total expected cost and project completion date during the execution phase of an ongoing project, will be evaluated. The Earned Value Management (EVM) method is unable to predict cost at completion accurately due to the lack of sufficiently detailed project information, especially in the early phase of the project. During the project execution phase, the Bayesian adaptive forecasting method incorporates predictions from earned value management into the actual performance data and revises pre-project cost estimates, making full use of the available information. The outcome of this research is improved accuracy of both cost and final duration predictions.
This research will provide a warning method to identify when current project performance deviates from planned performance and creates an unacceptable gap between preliminary planning and actual performance. This warning method will support project managers in taking corrective actions on time.
Keywords: cost forecasting, earned value management, project control, project management, risk analysis, simulation
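The classic EVM quantities that the probabilistic methods build on can be sketched as follows: CPI = EV/AC, SPI = EV/PV, and the deterministic estimate at completion EAC = BAC/CPI (the input figures in the test are purely illustrative):

```python
def evm_forecast(pv, ev, ac, bac):
    """Classic earned-value indices and the deterministic cost estimate
    at completion that MCS/Bayesian updating aim to improve on.
    pv: planned value, ev: earned value, ac: actual cost,
    bac: budget at completion."""
    cpi = ev / ac            # cost performance index
    spi = ev / pv            # schedule performance index
    eac = bac / cpi          # estimate at completion
    return {"CPI": cpi, "SPI": spi, "EAC": eac}
```

A CPI or SPI below 1.0 is exactly the kind of deviation the proposed warning method would flag.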
Procedia PDF Downloads 406
817 Influence of Deficient Materials on the Reliability of Reinforced Concrete Members
Authors: Sami W. Tabsh
Abstract:
The strength of reinforced concrete depends on the member dimensions and material properties. The properties of the concrete and steel materials are not constant but random variables. The variability of concrete strength is due to batching errors, variations in mixing, cement quality uncertainties, differences in the degree of compaction, and disparities in curing. Similarly, the variability of steel strength is attributed to the manufacturing process, rolling conditions, characteristics of the base material, uncertainties in chemical composition, and the microstructure-property relationships. To account for such uncertainties, codes of practice for reinforced concrete design impose resistance factors to ensure structural reliability over the useful life of the structure. In this investigation, the effects on structural reliability of reductions in concrete and reinforcing steel strengths from the nominal values, beyond those accounted for in the structural design codes, are assessed. The considered limit states are flexure, shear, and axial compression, based on the ACI 318-11 structural concrete building code. Structural safety is measured in terms of a reliability index. Probabilistic resistance and load models are compiled from the available literature. The study showed that there is a wide variation in the reliability index for reinforced concrete members designed for flexure, shear, or axial compression, especially when the live-to-dead load ratio is low. Furthermore, variations in concrete strength have a minor effect on the reliability of beams in flexure, a moderate effect on the reliability of beams in shear, and a severe effect on the reliability of columns in axial compression. On the other hand, changes in steel yield strength have a great effect on the reliability of beams in flexure, a moderate effect on the reliability of beams in shear, and a mild effect on the reliability of columns in axial compression.
Based on the outcome, it can be concluded that the reliability of beams is sensitive to changes in the yield strength of the steel reinforcement, whereas the reliability of columns is sensitive to variations in the concrete strength. Since the embedded target reliability in structural design codes results in lower structural safety in beams than in columns, large reductions in material strengths compromise the structural safety of beams much more than they affect columns.
Keywords: code, flexure, limit states, random variables, reinforced concrete, reliability, reliability index, shear, structural safety
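For readers unfamiliar with the reliability index mentioned above, a minimal Monte Carlo sketch shows how beta = -Phi⁻¹(Pf) relates the failure probability to the index. The resistance and load statistics below are purely illustrative assumptions, not values from the study:

```python
import random
from statistics import NormalDist

random.seed(42)

def reliability_index_mc(mu_r, sd_r, mu_q, sd_q, n=200_000):
    """Monte Carlo estimate of the reliability index beta = -Phi^-1(Pf)
    for the limit state g = R - Q, with normally distributed resistance R
    and load effect Q."""
    failures = sum(
        1 for _ in range(n)
        if random.gauss(mu_r, sd_r) - random.gauss(mu_q, sd_q) < 0
    )
    pf = failures / n
    return -NormalDist().inv_cdf(pf)

def reliability_index_exact(mu_r, sd_r, mu_q, sd_q):
    """Closed form for normal R and Q: beta = (mu_R - mu_Q)/sqrt(sd_R^2 + sd_Q^2)."""
    return (mu_r - mu_q) / (sd_r**2 + sd_q**2) ** 0.5
```

With, for example, mu_R = 180, sd_R = 21.6, mu_Q = 100, sd_Q = 15 (invented numbers), both estimates give beta ≈ 3.0, agreeing to within sampling error.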
Procedia PDF Downloads 430
816 Climate Changes Impact on Artificial Wetlands
Authors: Carla Idely Palencia-Aguilar
Abstract:
Artificial wetlands play an important role at Guasca Municipality in Colombia, not only because they are used for the agroindustry, but also because more than 45 species were found there, some of which are endemic and migratory birds. Remote sensing was used to determine the changes in the water-covered area of the artificial wetlands by means of Aster and Modis images for different time periods. Evapotranspiration was also determined by three methods: the Surface Energy Balance System-Su (SEBS) algorithm, the Surface Energy Balance-Bastiaanssen (SEBAL) algorithm, and Potential Evapotranspiration-FAO. Empirical equations were also developed to relate the Normalized Difference Vegetation Index (NDVI) to net radiation, ambient temperature and rain, with an obtained R2 of 0.83. Groundwater level fluctuations on a daily basis were studied as well. Data from a piezometer placed next to the wetland were fitted to rain changes (with two weather stations located in the proximity of the wetlands) by means of multiple regression and time series analysis; the R2 between calculated and measured values was higher than 0.98. Nearby weather stations provided the inputs for ordinary kriging as well as for the Digital Elevation Model (DEM) developed using PCI software. Standard models (exponential, spherical, circular, gaussian, linear) to describe spatial variation were tested. Ordinary cokriging between the height and rain variables was also tested, to determine whether the accuracy of the interpolation would increase. The results showed no significant differences, given that the spherical function for the rain samples after ordinary kriging yielded a mean of 58.06 and a standard deviation of 18.06.
The cokriging, using a spherical function for the rain variable, a power function for the height variable, and a spherical function for the cross-variable (rain and height), had a mean of 57.58 and a standard deviation of 18.36. Threats of eutrophication were also studied, given the lack of awareness among neighbours and deficient government oversight. Water quality was determined over the years; different parameters were studied to determine the chemical characteristics of the water. In addition, 600 pesticides were screened by gas and liquid chromatography. Results showed that coliforms, nitrogen, phosphorus and prochloraz were the most significant contaminants.
Keywords: DEM, evapotranspiration, geostatistics, NDVI
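The variogram models named above (spherical, exponential, gaussian) have standard textbook forms parameterized by a nugget, partial sill, and range. A minimal sketch, with the usual conventions (the parameter values in the test are invented, not those fitted in the study):

```python
import math

def spherical(h, nugget, sill, rng):
    """Spherical variogram: rises as 1.5(h/a) - 0.5(h/a)^3, flat beyond the range."""
    if h >= rng:
        return nugget + sill
    r = h / rng
    return nugget + sill * (1.5 * r - 0.5 * r**3)

def exponential(h, nugget, sill, rng):
    """Exponential variogram; reaches ~95% of the sill at the practical range."""
    return nugget + sill * (1.0 - math.exp(-3.0 * h / rng))

def gaussian(h, nugget, sill, rng):
    """Gaussian variogram; parabolic near the origin, suited to smooth fields."""
    return nugget + sill * (1.0 - math.exp(-3.0 * (h / rng) ** 2))
```

In ordinary kriging, one of these fitted models supplies the covariances that weight the neighbouring observations at each interpolation point.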
Procedia PDF Downloads 121
815 The Effect of Green Power Trading Mechanism on Interregional Power Generation and Transmission in China
Authors: Yan-Shen Yang, Bai-Chen Xie
Abstract:
Background and significance of the study: Both green power trading schemes and interregional power transmission are effective ways to increase green power absorption and achieve renewable power development goals. China is accelerating the construction of interregional power transmission lines and the green power market. A critical issue arises from the close interaction between these two approaches, which can heavily affect green power quota allocation and renewable power development. Existing studies have not discussed this issue adequately, so it is urgent to clarify their relationship in order to achieve a suitable power market design and more reasonable power grid construction.
Basic methodologies: We develop an equilibrium model of the power market in China to analyze the coupling effect of these two approaches as well as their influence on power generation and interregional transmission in China. Our model considers both the tradable green certificate (TGC) and the green power market, and comprises producers, consumers, and an independent system operator (ISO) minimizing the total system cost. The equilibrium model includes the decision optimization process of each participant. To reformulate the models as a single-level one, we replace the producer, consumer, ISO, and market equilibrium problems with their Karush-Kuhn-Tucker (KKT) conditions; the result is further reformulated as a mixed-integer linear program (MILP) and solved with the Gurobi solver.
Major findings: The results show that: (1) the green power market can significantly promote renewable power absorption, while the TGC market provides a more flexible way to trade green power. (2) Inefficient occupation and unavailability of transmission lines appear simultaneously: the existing interregional transmission lines cannot fully meet the demand for wind and solar PV power trading in some areas, while the situation is the reverse in others.
(3) Synchronous implementation of the green power and TGC trading mechanisms can benefit the development of green power as well as interregional power transmission. (4) Green power transactions exacerbate the unfair distribution of carbon emissions: the carbon Gini coefficient reaches 0.323 under the green power market, indicating high carbon inequality. The eastern coastal region benefits the most due to its huge demand for external power.
Keywords: green power market, tradable green certificate, interregional power transmission, power market equilibrium model
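At its core, the ISO's cost minimization in such an equilibrium model is an economic dispatch problem. As a hedged, single-node illustration (ignoring transmission constraints, TGC prices, and the KKT/MILP reformulation of the study), merit-order dispatch reproduces the LP solution: cheapest generation, including zero-marginal-cost renewables, is used first. All generator names and numbers are invented:

```python
def merit_order_dispatch(demand, gens):
    """Dispatch generators in ascending marginal-cost order until demand is met.
    gens: list of (name, capacity_MW, marginal_cost_per_MWh).
    Returns (dispatch dict in MW, total cost)."""
    dispatch, remaining, cost = {}, demand, 0.0
    for name, cap, mc in sorted(gens, key=lambda g: g[2]):
        q = min(cap, remaining)
        dispatch[name] = q
        cost += q * mc
        remaining -= q
        if remaining <= 0:
            break
    if remaining > 1e-9:
        raise ValueError("demand exceeds total capacity")
    return dispatch, cost
```

For example, with wind (50 MW at 0), coal (100 MW at 30) and gas (80 MW at 60) serving 120 MW, wind is fully used, coal covers the remainder, and gas stays idle.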
Procedia PDF Downloads 149
814 Lock in, Lock Out: A Double Lens Analysis of Local Media Paywall Strategies and User Response
Authors: Mona Solvoll, Ragnhild Kr. Olsen
Abstract:
Background and significance of the study: Newspapers are going through radical changes, with increased competition, eroding readerships and declining advertising resulting in plummeting overall revenues. This has led to a quest for new business models focused on monetizing content. This research paper investigates both how local online newspapers have introduced user payment and how the audience has received these changes. Given the role of local media in keeping their communities informed and those in power accountable, and their potential impact on civic engagement and cultural integration in local communities, the business model innovations of local media deserve far more research interest. Empirically, the findings are interesting for local journalists, local media managers and local advertisers.
Basic methodologies: The study is based on interviews with commercial leaders in 20 Norwegian local newspapers, in addition to national survey data from 1600 respondents among local media users. The interviews were conducted in the second half of 2015, while the survey was conducted in September 2016. Theoretically, the study draws on the business model framework.
Findings: The analysis indicates that paywalls aim more at reducing digital cannibalisation of print revenue than at creating new digital income. The newspapers are mostly concerned with retaining “old” print subscribers and transforming them into digital subscribers. However, this strategy may come at a high price if a defensive print strategy drives away younger digital readers and hampers the recruitment of new audiences, as some previous studies have indicated. Analysis of young readers’ news habits indicates that attracting the younger audience to traditional local news providers is particularly challenging, and that they are more prone than older audiences to seek alternative news sources.
Conclusion: The paywall strategy applied by the local newspapers may be well suited to stabilising print subscription figures and facilitating more tailored and better services for existing customers, but far less suited to attracting new ones. The paywall is a short-sighted strategy, which drives away younger readers and paves the way for substitute offerings, particularly Facebook.
Keywords: business model, newspapers, paywall, user payment
Procedia PDF Downloads 277
813 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth in Patients with Lymph Nodes Metastases
Authors: Ella Tyuryumina, Alexey Neznanov
Abstract:
This paper is devoted to mathematical modelling of the progression and stages of breast cancer. We propose the consolidated mathematical growth model of primary tumor and secondary distant metastases growth in patients with lymph node metastases (CoM-III) as a new research tool. We are interested in: 1) modelling the whole natural history of primary tumor and secondary distant metastases growth in patients with lymph node metastases; 2) developing an adequate and precise CoM-III which reflects the relations between the primary tumor and secondary distant metastases; 3) analyzing the CoM-III scope of application; 4) implementing the model as a software tool. Firstly, the CoM-III includes an exponential tumor growth model as a system of determinate nonlinear and linear equations. Secondly, the mathematical model corresponds to the TNM classification. It allows the calculation of different growth periods of the primary tumor and of secondary distant metastases in patients with lymph node metastases: 1) the ‘non-visible period’ for the primary tumor; 2) the ‘non-visible period’ for secondary distant metastases; 3) the ‘visible period’ for secondary distant metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes its forecast using only current patient data, whereas the others are based on additional statistical data. Thus, the CoM-III model and predictive software: a) detect the different growth periods of the primary tumor and secondary distant metastases in patients with lymph node metastases; b) forecast the period of distant metastases appearance in patients with lymph node metastases; c) have higher average prediction accuracy than other tools; d) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests.
The following are calculated by the CoM-III: the number of doublings for the ‘non-visible’ and ‘visible’ growth periods of secondary distant metastases, and the tumor volume doubling time (days) for the ‘non-visible’ and ‘visible’ growth periods of secondary distant metastases. The CoM-III enables, for the first time, prediction of the whole natural history of primary tumor and secondary distant metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarizing: a) the CoM-III correctly describes primary tumor and secondary distant metastases growth for the IA, IIA, IIB, IIIB (T1-4N1-3M0) stages in patients with lymph node metastases (N1-3); b) it facilitates the understanding of the appearance period and inception of secondary distant metastases.
Keywords: breast cancer, exponential growth model, mathematical model, primary tumor, secondary metastases, survival
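The doubling-count arithmetic underlying such exponential growth models can be sketched as follows. The assumed single-cell volume of 10⁻⁶ mm³ is a common order-of-magnitude illustration, not a parameter of the CoM-III:

```python
import math

CELL_VOLUME_MM3 = 1e-6  # assumed volume of a single tumour cell (illustrative)

def sphere_volume(diameter_mm):
    """Volume of a spherical tumour of the given diameter, in mm^3."""
    return math.pi * diameter_mm**3 / 6

def doublings(diameter_mm):
    """Number of volume doublings from one cell to the given tumour diameter."""
    return math.log2(sphere_volume(diameter_mm) / CELL_VOLUME_MM3)

def time_to_size(diameter_mm, doubling_time_days):
    """Growth time (days) to reach a diameter, under constant doubling time."""
    return doublings(diameter_mm) * doubling_time_days
```

Under this exponential picture, a 10 mm tumour corresponds to roughly 29 doublings, most of which occur in the ‘non-visible period’ before clinical detection.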
Procedia PDF Downloads 302
812 Analysis and Design of Exo-Skeleton System Based on Multibody Dynamics
Authors: Jatin Gupta, Bishakh Bhattacharya
Abstract:
With the aging process, many people start suffering from weak limbs, resulting in mobility disorders and loss of sensory and motor function of the limbs. Wearable robotic devices are viable solutions to help people suffering from these issues by augmenting their strength. These robotic devices, popularly known as exoskeletons, aid the user by providing external power and controlling the dynamics so as to achieve the desired motion. The present work studies a simplified dynamic model of the human gait. A four-link open-chain kinematic model is developed to describe the dynamics of the single support phase (SSP) of the human gait cycle. The dynamic model is developed by integrating mathematical models of the motion of inverted and triple pendulums. The stance leg is modeled as an inverted pendulum having a single degree of freedom and the swing leg as a triple pendulum having three degrees of freedom, viz. the thigh, knee, and ankle joints. The kinematic model is formulated using the forward kinematics approach. The Lagrangian approach is used to formulate the governing dynamic equations of the model. For this system of nonlinear differential equations, a numerical method is employed to obtain the system response. The reference trajectory is generated using the human body simulator LifeMOD. For optimal mechanical design and controller design of an exoskeleton system, it is imperative to study the parameter sensitivity of the system. Six different parameters, viz. the thigh, shank, and foot masses and lengths, were varied from 85% to 115% of their original values for the present work. It is observed that the hip joint of the swing leg is the most sensitive and the ankle joint of the swing leg the least sensitive. Changing link lengths causes more deviation in the system response than changing link masses. Also, shank length and thigh mass are the most sensitive parameters.
Finally, the present study gives insight into the different factors that should be considered while designing a lower extremity exoskeleton.
Keywords: lower limb exoskeleton, multibody dynamics, energy based formulation, optimal design
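The single-DOF inverted pendulum used for the stance leg has the equation of motion θ̈ = (g/l)·sin θ, measuring θ from the upright vertical and neglecting damping. A minimal numerical integration sketch, with an illustrative leg length and time step (not the study's parameter values):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def inverted_pendulum_step(theta, omega, length, dt):
    """One semi-implicit Euler step for theta_dd = (g/l)*sin(theta),
    the stance-leg inverted pendulum about the upright position."""
    omega += (G / length) * math.sin(theta) * dt
    theta += omega * dt
    return theta, omega

def simulate(theta0, omega0, length=0.9, dt=1e-3, t_end=0.5):
    """Integrate the pendulum from (theta0, omega0) over t_end seconds."""
    theta, omega = theta0, omega0
    for _ in range(int(t_end / dt)):
        theta, omega = inverted_pendulum_step(theta, omega, length, dt)
    return theta, omega
```

Starting slightly off vertical, the angle grows monotonically, illustrating why the upright equilibrium is unstable and why gait control must continually intervene.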
Procedia PDF Downloads 202
811 Effect of Green Synthesized Metal Nanoparticles on Gene Expression in an In-Vitro Model of Non-alcoholic Steatohepatitis
Authors: Nendouvhada Livhuwani Portia, Nicole Sibuyi, Kwazikwakhe Gabuza, Adewale Fadaka
Abstract:
Metabolic dysfunction-associated steatotic liver disease (MASLD) is a chronic condition characterized by excessive fat accumulation in the liver, distinct from conditions caused by alcohol, viral hepatitis, or medications. MASLD is often linked with metabolic syndrome, including obesity, diabetes, hyperlipidemia, and hypertriglyceridemia. The disease can progress to metabolic dysfunction-associated steatohepatitis (MASH), marked by liver inflammation and scarring, potentially leading to cirrhosis. However, only 43-44% of patients with steatosis develop MASH, and 7-30% of those with MASH progress to cirrhosis. The exact mechanisms underlying MASLD and its progression remain unclear, and there are currently no specific therapeutic strategies for MASLD/MASH. While anti-obesity and anti-diabetic medications can slow progression, they do not fully treat or reverse the disease. As an alternative, green-synthesized metal nanoparticles (MNPs) are emerging as potential treatments for liver diseases due to their anti-diabetic, anti-inflammatory, and anti-obesity properties with minimal side effects. MNPs such as gold nanoparticles (AuNPs) and silver nanoparticles (AgNPs) have been shown to improve metabolic processes by lowering blood glucose, body fat, and inflammation. The study aimed to explore the effects of green-synthesized MNPs on gene expression in an in vitro model of MASH using C3A/HepG2 liver cells. The MASH model was created by exposing these cells to free fatty acids (FFAs) followed by lipopolysaccharide (LPS) to induce inflammation. Cell viability was assessed with the Water-Soluble Tetrazolium (WST)-1 assay, and lipid accumulation was measured using the Oil Red O (ORO) assay. Additionally, mitochondrial membrane potential was assessed by the tetramethyl rhodamine, methyl ester (TMRE) assay, and inflammation was measured with an Enzyme-Linked Immunosorbent Assay (ELISA).
The study synthesized AuNPs from Carpobrotus edulis fruit (CeF) and avocado seed (AvoSE) extracts and AgNPs from Salvia africana-lutea (SAL) using optimized conditions. The MNPs were characterized by UV-Vis spectrophotometry and dynamic light scattering (DLS). The nanoparticles were tested at various concentrations for their impact on the C3A/HepG2-induced MASH model. Among the MNPs tested, the AvoSE-AuNPs showed the most promise: they reduced cell proliferation and intracellular lipid content more effectively than the CeF-AuNPs and SAL-AgNPs. Molecular analysis using real-time polymerase chain reaction revealed that the AvoSE-AuNPs could potentially reverse MASH effects by reducing the expression of key pro-inflammatory and metabolic genes, including tumor necrosis factor-alpha (TNF-α), Fas cell surface death receptor (FAS), peroxisome proliferator-activated receptor (PPAR)-α, PPAR-γ, and sterol regulatory element-binding protein (SREBP)-1. Further research is needed to confirm the molecular mechanisms behind the effects of these MNPs and to identify the specific phytochemicals responsible for their synthesis and bioactivities.
Keywords: gold nanoparticles, green nanotechnology, metal nanoparticles, obesity
Procedia PDF Downloads 28
810 Estimation of Hydrogen Production from PWR Spent Fuel Due to Alpha Radiolysis
Authors: Sivakumar Kottapalli, Abdesselam Abdelouas, Christoph Hartnack
Abstract:
Spent nuclear fuel exposes the surrounding water to a mixed field of ionizing radiation. This radiation field is generally dominated by gamma rays and a limited flux of fast neutrons; the fuel cladding effectively attenuates beta and alpha particle radiation. A small fraction of spent nuclear fuel exhibits some degree of cladding penetration due to pitting corrosion and mechanical failure. Breaches in the fuel cladding allow the exposure of small volumes of water in the cask to alpha and beta ionizing radiation. The safety of the transport of radioactive material is assured by the package complying with the IAEA Requirements for the Safe Transport of Radioactive Material SSR-6. It is of high interest to avoid the generation of hydrogen inside the cavity, which may lead to an explosive mixture. The risk of hydrogen production, along with other radiolysis gases, should be analyzed for a typical spent fuel for safety reasons. This work aims to perform a realistic study of the production of hydrogen by radiolysis assuming the most penalizing initial conditions. It consists in the calculation of the radionuclide inventory of a pellet, taking into account the burnup and decays. Westinghouse 17X17 PWR fuel has been chosen, and data have been analyzed for different sets of enrichment, burnup, cycles of irradiation and storage conditions. The inventory is calculated as the entry point for the simulation studies of hydrogen production with the radiolysis kinetic models of MAKSIMA-CHEMIST. Dose rates decrease strongly within ~45 μm from the fuel surface towards the solution (water) in the case of alpha radiation, while the decrease is slower for beta and even slower for gamma radiation. Calculations are carried out to obtain spectra as a function of time, and the radiation dose rate profiles are taken as input data for the iterative calculations. The hydrogen yield has been found to be around 0.02 mol/L.
Calculations have then been performed for a realistic scenario considering a capsule containing the spent fuel rod, and the hydrogen yield has been evaluated for this configuration. Experiments are in progress to validate the hydrogen production rate using a cyclotron at >5 MeV (ARRONAX, Nantes).
Keywords: radiolysis, spent fuel, hydrogen, cyclotron
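The link between absorbed dose and radiolytic hydrogen can be sketched with the standard G-value formalism (yield in molecules per 100 eV absorbed). The G(H2) value and dose below are illustrative assumptions in the typical range for alpha radiolysis of water, not the MAKSIMA-CHEMIST kinetics used in the study:

```python
AVOGADRO = 6.022e23       # molecules per mole
EV_PER_JOULE = 6.242e18   # 1 J = 6.242e18 eV; 1 Gy = 1 J/kg

def h2_concentration(dose_gy, g_h2, density_kg_per_l=1.0):
    """Molar H2 concentration (mol/L) in water after an absorbed dose,
    given a radiolytic yield g_h2 in molecules per 100 eV."""
    energy_ev_per_l = dose_gy * density_kg_per_l * EV_PER_JOULE
    molecules_per_l = (g_h2 / 100.0) * energy_ev_per_l
    return molecules_per_l / AVOGADRO
```

For example, an assumed G(H2) of ~1.3 molecules/100 eV and an integrated dose of 10⁵ Gy give ~0.013 mol/L, the same order of magnitude as the ~0.02 mol/L reported above (the real calculation must also track recombination and back-reactions, which the G-value shortcut ignores).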
Procedia PDF Downloads 521
809 Low Energy Technology for Leachate Valorisation
Authors: Jesús M. Martín, Francisco Corona, Dolores Hidalgo
Abstract:
Landfills present long-term threats to soil, air, groundwater and surface water due to the formation of greenhouse gases (methane and carbon dioxide) and leachate from decomposing garbage. The composition of leachate differs from site to site and also within a landfill, and it changes over time (from weeks to years), since the landfilled waste is biologically highly active. Mainly, the composition of the leachate depends on factors such as the characteristics of the waste, the moisture content, climatic conditions, the degree of compaction and the age of the landfill. Therefore, leachate composition cannot be generalized, and traditional treatment models should be adapted in each case. Although leachate composition is highly variable, what different leachates have in common are hazardous constituents and their potential eco-toxicological effects on human health and on terrestrial ecosystems. Since each leachate has a distinct composition, each landfill or dumping site represents a different type of risk to its environment. Nevertheless, leachates always exhibit high organic concentration, conductivity, heavy metals and ammonia nitrogen. Leachate can affect the current and future quality of water bodies through uncontrolled infiltration. Therefore, the control and treatment of leachate is one of the biggest issues in the design and management of urban solid waste treatment plants and landfills. This work presents a treatment model that will be carried out "in situ" using a cost-effective novel technology that combines solar evaporation/condensation with forward osmosis. The plant is powered by renewable energies (solar energy, biomass and residual heat), which minimizes the carbon footprint of the process. The final effluent quality is very high, allowing reuse (preferred) or discharge into watercourses. In the particular case of this work, the final effluents will be reused for cleaning and gardening purposes.
A minor semi-solid residual stream is also generated in the process. Due to its special composition (rich in metals and inorganic elements), this stream will be valorized in ceramic industries to improve the characteristics of the final products.
Keywords: forward osmosis, landfills, leachate valorization, solar evaporation
Procedia PDF Downloads 204
808 Bionaut™: A Breakthrough Robotic Microdevice to Treat Non-Communicating Hydrocephalus in Both Adult and Pediatric Patients
Authors: Suehyun Cho, Darrell Harrington, Florent Cros, Olin Palmer, John Caputo, Michael Kardosh, Eran Oren, William Loudon, Alex Kiselyov, Michael Shpigelmacher
Abstract:
Bionaut Labs, LLC is developing a minimally invasive robotic microdevice designed to treat non-communicating hydrocephalus in both adult and pediatric patients. The device utilizes biocompatible microsurgical particles (Bionaut™) that are specifically designed to safely and reliably perform accurate fenestration(s) in the 3rd ventricle, aqueduct of Sylvius, and/or trapped intraventricular cysts of the brain in order to re-establish normal cerebrospinal fluid flow dynamics and thereby balance and/or normalize intra/intercompartmental pressure. The Bionaut™ is navigated to the target via CSF or brain tissue in a minimally invasive fashion with precise control using real-time imaging. Upon reaching the pre-defined anatomical target, the external driver directs the specific microsurgical action defined to achieve the surgical goal. Notable features of the proposed protocol are: i) Bionaut™ access to the intraventricular target follows a clinically validated endoscopy trajectory which may not be feasible via ‘traditional’ rigid endoscopy; ii) the treatment is microsurgical, and there are no foreign materials left behind post-procedure; iii) the Bionaut™ is an untethered device that is navigated through the subarachnoid and intraventricular compartments of the brain, following pre-designated non-linear trajectories as determined by the safest anatomical and physiological path; iv) the overall protocol involves minimally invasive delivery and post-operational retrieval of the surgical Bionaut™. The approach is expected to be suitable for treating pediatric patients 0-12 months old as well as adult patients with obstructive hydrocephalus who fail traditional shunts or are eligible for endoscopy.
Current progress, including platform optimization, Bionaut™ control, real-time imaging, and in vivo safety studies of the Bionauts™ in large animals, specifically in the spine and brain of ovine models, will be discussed.
Keywords: Bionaut™, cerebrospinal fluid, CSF, fenestration, hydrocephalus, micro-robot, microsurgery
Procedia PDF Downloads 172
807 A Local Tensor Clustering Algorithm to Annotate Uncharacterized Genes with Many Biological Networks
Authors: Paul Shize Li, Frank Alber
Abstract:
A fundamental task of clinical genomics is to unravel the functions of genes and their associations with disorders. Although experimental biology has made efforts to discover and elucidate the molecular mechanisms of individual genes over the past decades, about 40% of human genes still have unknown functions, not to mention the diseases they may be related to. For those biologists who are interested in a particular gene with unknown functions, a powerful computational method tailored for inferring the functions and disease relevance of uncharacterized genes is strongly needed. Studies have shown that genes strongly linked to each other in multiple biological networks are more likely to have similar functions. This indicates that the densely connected subgraphs in multiple biological networks are useful for the functional and phenotypic annotation of uncharacterized genes. Therefore, in this work, we have developed an integrative network approach to identify frequent local clusters, defined as those densely connected subgraphs that frequently occur in multiple biological networks and contain the query gene that has few or no disease or function annotations. This is a local clustering algorithm that models multiple biological networks sharing the same gene set as a three-dimensional matrix, the so-called tensor, and employs a tensor-based optimization method to efficiently find the frequent local clusters. Specifically, massive public gene expression data sets that comprehensively cover dynamic, physiological, and environmental conditions are used to generate hundreds of gene co-expression networks. By integrating these gene co-expression networks, for a given uncharacterized gene that is of a biologist's interest, the proposed method can be applied to identify the frequent local clusters that contain this uncharacterized gene. Finally, those frequent local clusters are used for the function and disease annotation of this uncharacterized gene.
This local tensor clustering algorithm outperformed a competing tensor-based algorithm in both module discovery and running time. We also demonstrated the use of the proposed method on real data from hundreds of gene co-expression networks and showed that it can comprehensively characterize the query gene. Therefore, this study provides a new tool for annotating uncharacterized genes and has great potential to assist clinical genomic diagnostics.
Keywords: local tensor clustering, query gene, gene co-expression network, gene annotation
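The core intuition, that a gene supported by many networks is a better annotation candidate, can be sketched with a toy frequency count. This is a crude proxy for the tensor optimization described above, and the dictionary-of-sets network encoding is an illustrative choice, not the paper's data structure:

```python
from collections import Counter

def frequent_cocluster_candidates(networks, query, min_networks):
    """networks: list of adjacency dicts, gene -> set of neighbor genes.
    Return the genes adjacent to `query` in at least `min_networks` of the
    networks, i.e. the most consistently co-clustered partners."""
    counts = Counter()
    for adj in networks:
        for gene in adj.get(query, set()):
            counts[gene] += 1
    return {g for g, c in counts.items() if c >= min_networks}
```

The full method additionally requires the candidates to be densely connected to each other (a local cluster), which is what the tensor formulation captures; this sketch only scores direct co-membership with the query gene.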
Procedia PDF Downloads 170
806 How Childhood Trauma Changes the Recovery Models
Authors: John Michael Weber
Abstract:
The following research results span six months and 175 people addicted to some form of substance, from alcohol to heroin. One question was asked, and the answers were amazing and consistent. The following work presents the detailed results of this writer's answer to his own question and the 175 answers that followed. A constant pattern took shape throughout the bio-psycho-social assessments: these addicts had "first memories," the memories were vivid and took place between the ages of three and six years old, and, to a person, those first memories were traumatic. This writer's personal search into his childhood was not to find an excuse for the way he became, but to explain the reason for becoming an addict. To treat addiction, these memories that have caused post-traumatic stress disorder (PTSD) must be recognized as the catalyst that sparked a predisposition. Cognitive behavioral therapy (CBT), integrated with treatment specifically focused on PTSD, gives the addict a better chance at recovery sans relapse. This paper seeks to give the findings on the first memories of the addicts assessed and to provide the best treatment plan for such an addict, considering the childhood trauma in congruence with treatment of the substance use disorder (SUD). The question posed concerned what their first life memory was. It is the hope of this author that the knowledge that trauma is one of the main catalysts for addiction will allow therapists to provide better treatment and reduce relapse after abstinence from drugs and alcohol. This research led this author to believe that if treatment of childhood trauma is not a priority, the twelve steps of Alcoholics Anonymous, specifically steps 4 and 5, will not be thoroughly addressed and the odds of relapse increase. With this knowledge, parents can be educated on childhood trauma and the effect it has on their children.
Parents could be mindful of the fact that the things they perceive as traumatic do not match what a child, in the developmental years, absorbs as traumatic. It is this author's belief that what has become the status quo in treatment facilities has not been working for a long time. It is for that reason this author believes things need to change. Relapse has been woven into the fabric of standard operating procedure, and that, in this author's view, is not necessary. Childhood trauma is not being addressed early in recovery, and that creates an environment of inevitable relapse. This paper will explore how to break away from the status quo and rethink the current "evidence-based treatments." To begin breaking away from the status quo, this ends the abstract, with hopes that an interest has been piqued to read on.
Keywords: childhood, trauma, treatment, addiction, change
Procedia PDF Downloads 80
805 Developing Allometric Equations for More Accurate Aboveground Biomass and Carbon Estimation in Secondary Evergreen Forests, Thailand
Authors: Titinan Pothong, Prasit Wangpakapattanawong, Stephen Elliott
Abstract:
Shifting cultivation is an indigenous agricultural practice among upland people and has long been one of the major land-use systems in Southeast Asia. As a result, fallows and secondary forests have come to cover a large part of the region. However, they are increasingly being replaced by monocultures, such as corn cultivation. This is believed to be a main driver of deforestation and forest degradation, and one of the reasons behind the recurring winter smog crisis in Thailand and around Southeast Asia. Accurate biomass estimation of trees is important to quantify valuable carbon stocks and changes to these stocks in case of land use change. However, presently, Thailand lacks proper tools and optimal equations to quantify its carbon stocks, especially for secondary evergreen forests, including fallow areas after shifting cultivation and smaller trees with a diameter at breast height (DBH) of less than 5 cm. Developing new allometric equations to estimate biomass is urgently needed to accurately estimate and manage carbon storage in tropical secondary forests. This study established new equations using a destructive method at three study sites: approximately 50-year-old secondary forest, 4-year-old fallow, and 7-year-old fallow. Tree biomass was collected by harvesting 136 individual trees (including coppiced trees) from 23 species, with a DBH ranging from 1 to 31 cm. Oven-dried samples were sent for carbon analysis. Wood density was calculated from disk samples and samples collected with an increment borer from 79 species, including 35 species currently missing from the Global Wood Densities database. Several models were developed, showing that aboveground biomass (AGB) was strongly related to DBH, height (H), and wood density (WD). Including WD in the model was found to improve the accuracy of the AGB estimation. 
This study provides insights for reforestation management and can be used to prepare baseline data on Thailand’s carbon stocks for REDD+ and other carbon trading schemes, which may provide monetary incentives to stop illegal logging and deforestation for monoculture.
Keywords: aboveground biomass, allometric equation, carbon stock, secondary forest
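The kind of model the abstract describes, AGB as a power-law function of DBH and WD, is conventionally fitted as an ordinary least-squares regression on log-transformed data. A minimal sketch follows; the data are synthetic placeholders generated for illustration, not the study's measurements, and the "true" exponents are arbitrary.

```python
import numpy as np

# Synthetic destructive-sampling data (NOT the study's measurements):
# DBH in cm, wood density (WD) in g/cm^3, aboveground biomass (AGB) in kg.
rng = np.random.default_rng(0)
dbh = rng.uniform(1, 31, 50)
wd = rng.uniform(0.3, 0.9, 50)
# Assume a power-law relation AGB = 0.1 * DBH^2.4 * WD^0.9 with lognormal noise.
agb = 0.1 * dbh**2.4 * wd**0.9 * np.exp(rng.normal(0, 0.1, 50))

# Allometric equations are fitted on the log scale, where the power law
# becomes linear: ln(AGB) = b0 + b1*ln(DBH) + b2*ln(WD).
X = np.column_stack([np.ones_like(dbh), np.log(dbh), np.log(wd)])
coef, *_ = np.linalg.lstsq(X, np.log(agb), rcond=None)
print(coef)  # estimates close to [ln(0.1), 2.4, 0.9]
```

Back-transforming predictions with exp() then requires a bias-correction factor in practice, which is one reason published allometric equations report their residual standard error.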
Procedia PDF Downloads 285
804 Extracting Plowing Forces for Aluminum 6061-T6 Using a Small Number of Drilling Experiments
Authors: Ilige S. Hage, Charbel Y. Seif
Abstract:
Forces measured during cutting operations include, beyond those generated by chip formation, parasitic forces known as edge forces. A fraction of these measured forces arises from the tertiary cutting zone, such as flank or edge forces. Most machining models assume an ideally sharp tool; edge forces represent the portion of the measured forces associated with deviations of the tool from that ideal sharp geometry. Flank forces are challenging to isolate. The most common method involves plotting the force at a constant cutting speed against uncut chip thickness and then extrapolating to zero feed; the resulting positive intercept on the vertical axis is identified as the edge or plowing force. The aim of this research is to identify the effect of tool rake angle and cutting speed on flank forces and to develop a model for predicting plowing forces as a function of tool rake angle and cutting speed. Edge forces were identified from a limited number of drilling experiments using a 10 mm twist drill, with lip edge cutting forces collected from 2.5 mm pre-cored samples. Cutting lip forces were measured with feed rates varying from 0.04 to 0.64 mm/rev and spindle speeds ranging from 796 to 9868 rpm, at a sampling rate of 200 Hz. By using real-time force measurements as the drill enters the workpiece, this study provides an economical method for analyzing the effect of tool geometry and cutting conditions on the generated cutting forces, reducing the number of required experimental setups. As a result, an empirical model predicting parasitic edge forces was developed as a function of the cutting velocity, tool rake angle, and clearance angle along the lip of the tool, demonstrating strong agreement with edge forces reported in the literature for Aluminum 6061-T6. The model achieved an R² value of 0.92 and a mean square error of 4%, validating the accuracy of the proposed methodology. The presented methodology leverages variations in machining parameters. 
This approach contrasts with traditional machining experiments, where the turning process typically serves as the basis for force measurements and each experimental setup is characterized by a single cutting velocity, tool rake angle, and clearance angle.
Keywords: drilling, plowing, edge forces, cutting force, torque
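The zero-feed extrapolation described in the abstract amounts to fitting a straight line of force against feed at a fixed cutting speed and reading the intercept as the plowing (edge) force. A minimal sketch, using made-up force values rather than the study's data:

```python
import numpy as np

# Hypothetical thrust-force measurements at one cutting speed
# (illustrative values, NOT the study's data): feed in mm/rev, force in N.
feed = np.array([0.04, 0.08, 0.16, 0.32, 0.64])
force = np.array([52.0, 61.0, 79.0, 115.0, 187.0])

# Classic edge-force extraction: fit F = k*feed + F_edge and read the
# positive intercept at zero feed as the plowing force.
k, f_edge = np.polyfit(feed, force, 1)
print(round(f_edge, 1))  # 43.0 N for this synthetic data
```

In a real experiment the fit would be repeated at each cutting speed and rake angle, giving the family of intercepts from which an empirical edge-force model can be regressed.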
Procedia PDF Downloads 3
803 Effects of Intergenerational Social Mobility on General Health, Oral Health and Physical Function among Older Adults in England
Authors: Alejandra Letelier, Anja Heilmann, Richard G. Watt, Stephen Jivraj, Georgios Tsakos
Abstract:
Background: Socioeconomic position (SEP) influences adult health. People who experienced material disadvantages in childhood or adulthood tend to have higher adult disease levels than their peers from more advantaged backgrounds. Even so, life is a dynamic process and contains a series of transitions that could lead people through different socioeconomic paths. Research on social mobility takes this into account by adopting a trajectory approach, thereby providing a long-term view of the effect of SEP on health. Aim: This research examines the effects of intergenerational social mobility on adult general health, oral health and functioning in a population aged 50 and over in England. Methods: This study is based on the secondary analysis of data from the English Longitudinal Study of Ageing (ELSA). Using cross-sectional data, nine social trajectories were created based on parental and adult occupational socio-economic position. Regression models were used to estimate the associations between social trajectories and the following outcomes: adult self-rated health, self-rated oral health, oral health related quality of life, total tooth loss and grip strength; while controlling for socio-economic background and health related behaviours. Results: Associations with adult SEP were generally stronger than with childhood SEP, suggesting a stronger influence of proximal rather than distal SEP on health and oral health. Compared to the stable high group, being in the low SEP groups in childhood and adulthood was associated with poorer health and oral health for all examined outcome measures. For adult self-rated health and edentulousness, graded associations with social mobility trajectories were observed. Conclusion: Intergenerational social mobility was associated with self-rated health and total tooth loss. 
Compared to the stable high group, only those who remained in a low SEP group over time reported worse self-rated oral health and oral health related quality of life, and had lower grip strength measurements. Potential limitations in relation to data quality will be discussed.
Keywords: social determinants of oral health, social mobility, socioeconomic position and oral health, older adults oral health
Procedia PDF Downloads 276
802 Preventive Impact of Regional Analgesia on Chronic Neuropathic Pain After General Surgery
Authors: Beloulou Mohamed Lamine, Fedili Benamar, Meliani Walid, Chaid Dalila, Lamara Abdelhak
Abstract:
Introduction: Post-surgical chronic pain (PSCP) is a pathological condition with a rather complex etiopathogenesis that extensively involves sensitization processes and neuronal damage. The neuropathic component of these pains is almost always present, with variable expression depending on the type of surgery. Objective: To assess the presumed beneficial effect of Regional Anesthesia-Analgesia Techniques (RAAT) on the development of post-surgical chronic neuropathic pain (PSCNP) in various surgical procedures. Patients and Methods: A comparative study involving 510 patients distributed across five surgical models (mastectomy, thoracotomy, hernioplasty, cholecystectomy, and major abdominal-pelvic surgery) and randomized into two groups: Group A (240) receiving conventional postoperative analgesia and Group B (270) receiving balanced analgesia, including the implementation of a Regional Anesthesia-Analgesia Technique (RAAT). These patients were longitudinally followed over a 6-month period, with post-surgical chronic neuropathic pain (PSCNP) defined by a Neuropathic Pain Score DN2 ≥ 3. Univariate and multivariable analyses were performed to identify associations between the development of PSCNP and certain predictive factors, including the presumed preventive impact (protective effect) of RAAT. Results: At the 6th month post-surgery, 419 patients were analyzed (Group A = 196 and Group B = 223). The incidence of PSCNP was 32.2% (n=135). Among these patients with chronic pain, the prevalence of neuropathic pain was 37.8% (95% CI: [29.6; 46.5]), with n=51/135. It was significantly lower in Group B than in Group A, with respective percentages of 31.4% vs. 48.8% (p-value = 0.035). The most significant differences were observed in breast and thoracopulmonary surgeries. 
In a multiple regression analysis, two predictors of PSCNP were identified: the presence of preoperative pain at the surgical site as a risk factor (OR: 3.198; 95% CI [1.326; 7.714]) and RAAT as a protective factor (OR: 0.408; 95% CI [0.173; 0.961]). Conclusion: The neuropathic component of PSCNP can be observed in different types of surgeries. Regional analgesia included in a multimodal approach to postoperative pain management has proven to be effective for acute pain and seems to have a preventive impact on the development of PSCNP and its neuropathic nature, particularly in surgeries that are more prone to chronicization.
Keywords: post-surgical chronic pain, post-surgical chronic neuropathic pain, regional anesthesia-analgesia techniques, neuropathic pain score DN2, preventive impact
Procedia PDF Downloads 78
801 Heritage and the Sustainable Development Goals: Successful Practices and Lessons Learnt from the UK's Global Challenges Research Fund and Newton Research Portfolios
Authors: Francesca Giliberto
Abstract:
Heritage, and culture in general, play a central role in addressing the complexity and broad variety of global development challenges, ranging from environmental degradation and refugee and humanitarian crises to extreme poverty, food insecurity, persisting inequalities, and unsustainable urbanisation, to name just a few examples. Nevertheless, the potential of harnessing heritage to address global challenges has remained largely under-represented and underestimated in the most recent international development agenda adopted by the United Nations in 2015 (the 2030 Agenda). Among the 17 sustainable development goals (SDGs) and 169 associated targets established, only target 11.4 explicitly mentions heritage, stating that efforts should be strengthened “to protect and safeguard the world’s cultural and natural heritage in order to make our cities safe, resilient, and sustainable”. However, this global target continues to reflect a rather limited approach to heritage for development. This paper will provide a critical reflection on the contribution that using (tangible and intangible) heritage in international research can make to tackling global challenges and supporting the achievement of all the SDGs. It will present key findings and insights from the heritage strand of PRAXIS, a research project at the University of Leeds which focuses on Arts and Humanities research across 300+ projects funded through the Global Challenges Research Fund and Newton Fund. In particular, this paper will shed light on successful practices and lessons learned from 87 research projects funded through the Global Challenges Research Fund and Newton Fund portfolios in 49 countries eligible for Official Development Assistance (ODA) between 2014 and 2021. Research data were collected through a desk assessment of project data available on the UKRI Gateway to Research, online surveys, and qualitative interviews with research principal investigators and partners. 
The findings of this research provide evidence of how heritage and heritage research can foster innovative, interdisciplinary, inclusive, and transformative sustainable development and the achievement of the SDGs in ODA countries and beyond. The paper also highlights the challenges and research gaps that still need to be overcome to rethink existing approaches and transform our development models to be more integrated, human-centred, and sustainable.
Keywords: global challenges, heritage, international research, sustainable development
Procedia PDF Downloads 74
800 A Computational Investigation of Potential Drugs for Cholesterol Regulation to Treat Alzheimer’s Disease
Authors: Marina Passero, Tianhua Zhai, Zuyi (Jacky) Huang
Abstract:
Alzheimer’s disease has become a major public health issue, as indicated by the increasing number of Americans living with the disease. After decades of extensive research into Alzheimer’s disease, only seven drugs have been approved by the Food and Drug Administration (FDA) to treat it. Five of these drugs were designed to treat the dementia symptoms, and only two (i.e., Aducanumab and Lecanemab) target the progression of Alzheimer’s disease, especially the accumulation of amyloid-beta plaques. However, the accelerated approvals of both Aducanumab and Lecanemab drew controversy, especially over the safety and side effects of these two drugs. There is still an urgent need for further drug discovery targeting the biological processes involved in the progression of Alzheimer’s disease. Excessive cholesterol has been found to accumulate in the brains of those with Alzheimer’s disease. Cholesterol is synthesized both in the periphery and in the brain, but the majority of biosynthesis in the adult brain takes place in astrocytes, from which cholesterol is transported to the neurons via ApoE. The blood-brain barrier separates cholesterol metabolism in the brain from that in the rest of the body. Various proteins contribute to the metabolism of cholesterol in the brain and offer potential targets for Alzheimer’s treatment. In the astrocytes, SREBP cleavage-activating protein (SCAP) binds to Sterol Regulatory Element-binding Protein 2 (SREBP2) in order to transport the complex from the endoplasmic reticulum to the Golgi apparatus. Cholesterol is secreted out of the astrocytes by the ATP-Binding Cassette A1 (ABCA1) transporter. Lipoprotein receptors such as triggering receptor expressed on myeloid cells 2 (TREM2) internalize cholesterol into the microglia, while lipoprotein receptors such as low-density lipoprotein receptor-related protein 1 (LRP1) internalize cholesterol into the neuron. 
Cytochrome P450 Family 46 Subfamily A Member 1 (CYP46A1) converts excess cholesterol to 24S-hydroxycholesterol (24S-OHC). Cholesterol has been shown to have a direct effect on the production of amyloid-beta and tau proteins. The addition of cholesterol to the brain promotes the activity of beta-site amyloid precursor protein cleaving enzyme 1 (BACE1), secretase, and amyloid precursor protein (APP), which all aid in amyloid-beta production. The reduction of cholesterol esters in the brain has been found to reduce phosphorylated tau levels in mice. In this work, a computational pipeline was developed to identify the protein targets involved in cholesterol regulation in the brain and then to identify chemical compounds as inhibitors of a selected protein target. Since extensive evidence shows a strong correlation between brain cholesterol regulation and Alzheimer’s disease, a detailed literature review of genes and pathways related to brain cholesterol synthesis and regulation was first conducted. An interaction network was then built for those genes so that the top gene targets could be identified. The involvement of these genes in Alzheimer’s disease progression was discussed, followed by an investigation of existing clinical trials for those targets. A ligand-protein docking program was then developed to screen 1.5 million chemical compounds against the selected protein target, and a machine learning program was developed to evaluate and predict the binding interaction between chemical compounds and the protein target. The results from this work pave the way for further drug discovery to regulate brain cholesterol to combat Alzheimer’s disease.
Keywords: Alzheimer’s disease, drug discovery, ligand-protein docking, gene-network analysis, cholesterol regulation
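The gene-prioritisation step of such a pipeline, ranking candidate targets by their connectivity in an interaction network, can be sketched in a few lines. The edge list below is a toy example assembled from gene names mentioned in the abstract; it is not the study's actual network, and the resulting ranking is purely illustrative.

```python
# Toy gene-interaction network (edges are illustrative, NOT the study's data).
edges = [
    ("SCAP", "SREBP2"), ("SREBP2", "ABCA1"), ("ABCA1", "APOE"),
    ("APOE", "TREM2"), ("APOE", "LRP1"), ("CYP46A1", "APOE"),
]

# Count each gene's interactions (degree centrality on an undirected graph).
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

# Rank candidate targets by connectivity; highly connected genes are
# natural candidates for the "top gene targets" mentioned above.
top = sorted(degree, key=degree.get, reverse=True)
print(top[0])  # APOE has the most interactions in this toy network
```

Real pipelines typically build such networks from curated interaction databases and use richer centrality measures, but the ranking principle is the same.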
Procedia PDF Downloads 76