Search results for: distillation curve
165 Coffee Consumption Has No Acute Effects on Glucose Metabolism in Healthy Men: A Randomized Crossover Clinical Trial
Authors: Caio E. G. Reis, Sara Wassell, Adriana L. Porto, Angélica A. Amato, Leslie J. C. Bluck, Teresa H. M. da Costa
Abstract:
Background: Multiple epidemiologic studies have consistently reported an association between increased coffee consumption and a lower risk of Type 2 Diabetes Mellitus. However, the mechanisms behind this finding have not been fully elucidated. Objective: We investigated the effect of coffee (caffeinated and decaffeinated) on glucose effectiveness and insulin sensitivity using the stable-isotope minimal model protocol with oral glucose administration in healthy men. Design: Fifteen healthy men underwent a five-arm, randomized, crossover, single-blind (researchers) clinical trial. They consumed decaffeinated coffee, caffeinated coffee (with and without sugar), or a water control (with and without sugar), followed 1 hour later by an oral glucose tolerance test (75 g of available carbohydrate) with intravenous labeled dosing interpreted by the two-compartment minimal model (225 minutes). One-way ANOVA with Bonferroni adjustment was used to compare the effects of the tested beverages on glucose metabolism parameters. Results: Decaffeinated coffee resulted in 29% and 85% higher insulin sensitivity compared with caffeinated coffee and water, respectively, and caffeinated coffee showed 15% and 60% higher glucose effectiveness compared with decaffeinated coffee and water, respectively. However, these differences were not significant (p > 0.10). In the overall analysis (0-225 min), there were no significant differences in glucose effectiveness, insulin sensitivity, or glucose and insulin area under the curve between the groups. The beneficial effects of coffee do not seem to act in the short term (hours) on glucose metabolism parameters, particularly insulin sensitivity indices. The benefits of coffee consumption occur over the long term (years), as has been shown in the reduction of Type 2 Diabetes Mellitus risk in epidemiological studies. The clinical relevance of the present findings is that there is no need for healthy people to avoid coffee as their drink of choice.
Conclusions: The findings of this study demonstrate that the consumption of caffeinated and decaffeinated coffee, with or without sugar, has no acute effects on glucose metabolism in healthy men. Further research, including long-term interventional studies, is needed to fully elucidate the mechanisms behind the effects of coffee on reducing the risk of Type 2 Diabetes Mellitus.
Keywords: coffee, diabetes mellitus type 2, glucose, insulin
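The glucose and insulin area-under-the-curve comparisons above rest on numerical integration of the concentration-time curve. A minimal sketch of the standard trapezoidal calculation (the sample values below are hypothetical, not trial data):

```python
def auc_trapezoid(times, concentrations):
    """Area under a concentration-time curve by the trapezoidal rule."""
    total = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        total += dt * (concentrations[i] + concentrations[i - 1]) / 2.0
    return total

# Hypothetical plasma glucose (mmol/L) sampled over the 225-min window
times = [0, 30, 60, 90, 120, 180, 225]
glucose = [5.0, 7.8, 8.5, 7.2, 6.4, 5.6, 5.2]
glucose_auc = auc_trapezoid(times, glucose)
```

The same routine applies unchanged to the insulin curve; only the sampled concentrations differ.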
Procedia PDF Downloads 436
164 Enhanced CNN for Rice Leaf Disease Classification in Mobile Applications
Authors: Kayne Uriel K. Rodrigo, Jerriane Hillary Heart S. Marcial, Samuel C. Brillo
Abstract:
Rice leaf diseases significantly impact yield in rice-dependent countries, affecting their agricultural sectors. As part of precision agriculture, early and accurate detection of these diseases is crucial for effective mitigation practices and for minimizing crop losses. Hence, this study proposes an enhancement to the Convolutional Neural Network (CNN), a widely used method for rice leaf disease image classification, by incorporating MobileViTV2, a recent architecture that combines CNN and Vision Transformer models while maintaining fewer parameters, making it suitable for broader deployment on edge devices. Our methodology utilizes a publicly available rice disease image dataset from Kaggle, which was validated by a university structural biologist following the guidelines provided by the Philippine Rice Research Institute (PhilRice). Modifications to the dataset include renaming certain disease categories and augmenting the rice leaf images through rotation, scaling, and flipping. The enhanced dataset was then used to train the MobileViTV2 model using the timm library. The results of our approach are as follows: the model achieved notable performance, with 98% accuracy in both training and validation, 6% training and validation loss, an area under the Receiver Operating Characteristic (ROC) curve ranging from 95% to 100% for each label, and an F1 score of 97%. These metrics demonstrate a significant improvement over a conventional CNN-based approach, which, in a previous 2022 study, achieved only 78% accuracy using 5 convolutional layers and 2 dense layers. Thus, it can be concluded that MobileViTV2, with its fewer parameters, outperforms traditional CNN models, particularly when applied to rice leaf disease identification.
For future work, we recommend extending this model to include datasets validated by international rice experts and broadening the scope to accommodate biotic factors such as rice pest classification, as well as abiotic stressors such as climate, soil quality, and geographic information, which could improve the accuracy of disease prediction.
Keywords: convolutional neural network, MobileViTV2, rice leaf disease, precision agriculture, image classification, vision transformer
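The accuracy and F1 figures reported above follow directly from per-class confusion counts. A minimal sketch of the underlying metric arithmetic (the counts below are hypothetical, not the study's):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 from one class's confusion counts:
    tp = true positives, fp = false positives, fn = false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def accuracy(correct, total):
    """Overall accuracy: correctly classified images over all images."""
    return correct / total

# Hypothetical counts for one disease label
p, r, f = precision_recall_f1(tp=90, fp=10, fn=10)
```

A multi-class F1 such as the study's 97% would be the (macro or weighted) average of per-label F1 values computed this way.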
163 Predicting Wealth Status of Households Using Ensemble Machine Learning Algorithms
Authors: Habtamu Ayenew Asegie
Abstract:
Wealth, as opposed to income or consumption, implies a more stable and permanent status. Due to natural and human-made difficulties, household economies can be diminished and their well-being can fall into trouble. Hence, governments and humanitarian agencies devote considerable resources to poverty and malnutrition reduction efforts. One key factor in the effectiveness of such efforts is the accuracy with which low-income or poor populations can be identified. This study therefore aims to predict a household's wealth status using ensemble machine learning (ML) algorithms. Design science research methodology (DSRM) is employed, and four ML algorithms, Random Forest (RF), Adaptive Boosting (AdaBoost), Light Gradient Boosting Machine (LightGBM), and Extreme Gradient Boosting (XGBoost), were used to train models. The Ethiopian Demographic and Health Survey (EDHS) dataset was accessed for this purpose from the Central Statistical Agency (CSA) database. Various data pre-processing techniques were employed, and model training was conducted using scikit-learn Python library functions. Model evaluation used metrics such as accuracy, precision, recall, F1-score, area under the receiver operating characteristic curve (AUC-ROC), and subjective evaluation by domain experts. An optimal subset of hyper-parameters for each algorithm was selected through grid search for the best prediction. The RF model performed better than the rest of the algorithms, achieving an accuracy of 96.06%, and is better suited as a solution model for our purpose. Following RF, the LightGBM, XGBoost, and AdaBoost algorithms achieved accuracies of 91.53%, 88.44%, and 58.55%, respectively.
The findings suggest that features such as 'Age of household head', 'Total children ever born' in a family, 'Main roof material' of the house, 'Region' of residence, whether a household uses 'Electricity' or not, and 'Type of toilet facility' are determinant factors that should be a focal point for economic policymakers. The determinant risk factors, extracted rules, and designed artifact scored 82.28% in the domain experts' evaluation. Overall, the study shows that ML techniques are effective in predicting the wealth status of households.
Keywords: ensemble machine learning, households wealth status, predictive model, wealth status prediction
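The AUC-ROC metric used above can be computed directly from its probabilistic definition, via the Mann-Whitney statistic: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal sketch (labels and scores hypothetical, not the EDHS data):

```python
def roc_auc(labels, scores):
    """AUC-ROC via the Mann-Whitney U statistic.
    labels: 1 for the positive class (e.g. 'poor'), 0 otherwise.
    scores: the model's predicted probability for the positive class."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count as half a win
    return wins / (len(pos) * len(neg))
```

This O(n²) form is only for illustration; library implementations rank the scores instead.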
162 Leaf Epidermal Micromorphology as Identification Features in Accessions of Sesamum indicum L. Collected from Northern Nigeria
Authors: S. D. Abdul, F. B. J. Sawa, D. Z. Andrawus, G. Dan'ilu
Abstract:
Fresh leaves of twelve accessions of S. indicum were studied to examine their stomatal features, trichomes, epidermal cell shapes, and anticlinal cell-wall patterns, which may be used for delimitation of the varieties. The twelve accessions studied have amphistomatic leaves, i.e., stomata on both surfaces. Four stomatal complex types were observed: diacytic, anisocytic, tetracytic, and anomocytic. The anisocytic type was the most common, occurring on both surfaces of all varieties and at 100% frequency in the varieties lale-duk, ex-sudan, and ex-gombe 6. One-way ANOVA revealed no significant difference between the stomatal densities of ex-gombe 6, ex-sudan, adawa-wula, adawa-ting, ex-gombe 4, and ex-gombe 2. Accession adawa-ting (improved) had the smallest stomatal size (26.39 µm) with the highest stomatal density (79.08 per mm²), while variety adawa-wula possessed the largest stomatal size (74.31 µm) with the lowest stomatal density (29.60 per mm²); the exception was variety adawa-ting, whose stomata are larger (64.03 µm) but occur at a higher density (71.54 per mm²). Wavy, curved, or undulate anticlinal wall patterns with irregular and/or isodiametric epidermal cell shapes were observed. The accessions exhibited a high degree of heterogeneity in their trichome features. Ten types of trichomes were observed: unicellular, glandular peltate, capitate glandular, long unbranched uniseriate, short unbranched uniseriate, scale, multicellular, multiseriate capitate glandular, branched uniseriate, and stellate trichomes. The most frequent trichome types were short unbranched uniseriate, followed by long unbranched uniseriate (72.73% and 72.5%, respectively). The least frequent was multiseriate capitate glandular (11.5%). The high variation in trichome types and density, coupled with the stomatal complex types, suggests that these varieties of S. indicum probably have the capacity to conserve water.
Furthermore, the leaf micromorphological features varied from one accession to another and were thus found to be a good diagnostic and additional tool for the identification, as well as the nomenclature, of the accessions of S. indicum.
Keywords: Sesamum indicum, stomata, trichomes, epidermal cells, taxonomy
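Stomatal density figures like those reported above are counts normalized by the microscope field area. The related stomatal index is included here as an additional standard measure, not one reported in the abstract; all numbers in the sketch are illustrative:

```python
def stomatal_density(stomata_count, field_area_mm2):
    """Stomata per mm2 of leaf surface in one microscope field."""
    return stomata_count / field_area_mm2

def stomatal_index(stomata_count, epidermal_cell_count):
    """Stomatal index (%) = stomata / (stomata + ordinary epidermal cells) * 100.
    A standard micromorphology measure; assumed here, not taken from the abstract."""
    return 100.0 * stomata_count / (stomata_count + epidermal_cell_count)
```

Averaging the density over several fields per surface gives the per-accession values that the ANOVA would compare.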
161 Assessment of Incidence and Predictors of Mortality Among HIV Positive Children on ART in Public Hospitals of Harer Town Who Were Enrolled From 2011 to 2021
Authors: Getahun Nigusie Demise
Abstract:
Background: Antiretroviral treatment reduces HIV-related morbidity and prolongs the survival of patients; however, there is a lack of up-to-date information concerning the long-term effect of treatment on the survival of HIV-positive children, especially in the study area. Objective: The aim of this study was to assess the incidence and predictors of mortality among HIV-positive children on antiretroviral therapy (ART) in public hospitals of Harer town who were enrolled from 2011 to 2021. Methodology: An institution-based retrospective cohort study was conducted among 429 HIV-positive children enrolled in the ART clinic from January 1st, 2011 to December 30th, 2021. Data were collected from medical cards using a data extraction form. Descriptive analyses were used to summarize the results, and a life table was used to estimate survival probability at specific points in time after the introduction of ART. The Kaplan-Meier survival curve, together with the log-rank test, was used to compare survival between different categories of covariates, and a multivariate Cox proportional hazards regression model was used to estimate adjusted hazard ratios. Variables with p-values ≤ 0.25 in the bivariable analysis were candidates for the multivariable analysis. Finally, variables with p-values < 0.05 were considered significant. Results: The study participants were followed for a total of 2549.6 child-years (30,596 child-months), with an overall mortality rate of 1.5 (95% CI: 1.1, 2.04) per 100 child-years. Their median survival time was 112 months (95% CI: 101-117). There were 38 children with unknown outcomes, 39 deaths, and 55 children transferred out to different facilities. Overall survival at 6, 12, 24, and 48 months was 98%, 96%, 95%, and 94%, respectively.
Being in WHO clinical stage four (AHR=4.55, 95% CI: 1.36, 15.24), having anemia (AHR=2.56, 95% CI: 1.11, 5.93), a low baseline absolute CD4 count (AHR=2.95, 95% CI: 1.22, 7.12), stunting (AHR=4.1, 95% CI: 1.11, 15.42), wasting (AHR=4.93, 95% CI: 1.31, 18.76), poor adherence to treatment (AHR=3.37, 95% CI: 1.25, 9.11), TB infection at enrollment (AHR=3.26, 95% CI: 1.25, 8.49), and no history of regimen change (AHR=7.1, 95% CI: 2.74, 18.24) were independent predictors of death. Conclusion: More than half of the deaths occurred within 2 years. Prevalent tuberculosis, anemia, wasting and stunting nutritional status, socioeconomic factors, and baseline opportunistic infection were independent predictors of death. Increased early screening and management of these predictors are required.
Keywords: human immunodeficiency virus-positive children, anti-retroviral therapy, survival, treatment, Ethiopia
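The Kaplan-Meier survival estimates quoted above come from the standard product-limit calculation: at each death time, survival is multiplied by the fraction of those still at risk who survive. A minimal sketch with hypothetical follow-up data (not the study cohort):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimates.
    times: follow-up time for each subject.
    events: 1 = death observed, 0 = censored (transfer out, unknown outcome).
    Returns (time, S(t)) pairs at each observed death time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        removed = sum(1 for tt, e in data if tt == t)  # deaths + censored at t
        if deaths:
            survival *= (at_risk - deaths) / at_risk
            curve.append((t, survival))
        at_risk -= removed
        i += removed
    return curve
```

Reading S(t) off this curve at 6, 12, 24, and 48 months is how the reported survival percentages are obtained.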
160 Towards Sustainable Concrete: Maturity Method to Evaluate the Effect of Curing Conditions on the Strength Development in Concrete Structures under Kuwait Environmental Conditions
Authors: F. Al-Fahad, J. Chakkamalayath, A. Al-Aibani
Abstract:
Concrete strength determined by conventional methods under controlled laboratory conditions does not accurately represent the actual strength developed under site curing conditions. This difference in measured strength is greater in the extreme environment of Kuwait, which is characterized by a hot marine climate with normal summer temperatures exceeding 50°C, accompanied by dry wind in desert areas and salt-laden wind in marine and onshore areas. Therefore, test methods are required to measure the in-place properties of concrete for quality assurance and for the development of durable concrete structures. The maturity method, which defines the strength of a given concrete mix as a function of its age and temperature history, is an approach for quality control in the production of sustainable and durable concrete structures. The unique harsh environmental conditions in Kuwait make it impractical to adopt experience and empirical equations developed from maturity methods in other countries. Concrete curing, especially at early ages, plays an important role in developing and improving the strength of the structure. This paper investigates the use of the maturity method to assess the effectiveness of three different curing methods on the compressive and flexural strength development of one high-strength (60 MPa) concrete mix produced with silica fume. The maturity approach was used to accurately predict concrete compressive and flexural strength at later ages under different curing conditions. Maturity curves were developed for compressive and flexural strength for a commonly used concrete mix in Kuwait, cured under three different conditions: water curing, an external spray coating, and the use of an internal curing compound during concrete mixing. It was observed that the maturity curve developed for the same mix depends on the curing conditions.
It can be used to predict the concrete strength under different exposure and curing conditions. This study showed that the external spray curing method cannot be recommended, as it failed to help the concrete reach accepted strength values, especially flexural strength. Using an internal curing compound led to accepted strength levels when compared with water curing. Utilization of the developed maturity curves will help contractors and engineers determine the in-place concrete strength at any time and under different curing conditions. This will help in deciding the appropriate time to remove formwork. The resulting reduction in construction time and cost has positive impacts on sustainable construction.
Keywords: curing, durability, maturity, strength
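Maturity curves like those above relate strength to a maturity index computed from the temperature history. The Nurse-Saul temperature-time factor (as in ASTM C1074) is the most common form, though the abstract does not state which maturity function was used; the datum temperature and readings below are illustrative assumptions:

```python
def nurse_saul_maturity(temps_c, interval_hours, datum_c=0.0):
    """Nurse-Saul maturity index M = sum((T - T0) * dt), in degC-hours.
    temps_c: concrete temperatures recorded at a fixed logging interval.
    datum_c: datum temperature T0 below which no strength gain is counted
    (an assumed value; C1074 suggests determining it experimentally)."""
    return sum(max(t - datum_c, 0.0) * interval_hours for t in temps_c)
```

Plotting measured strength against M for each curing regime gives the per-condition maturity curves; strength at a later age is then read off the curve from the logged site temperatures.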
159 Descriptive Epidemiology of Diphtheria Outbreak Data, Taraba State, Nigeria, August-November 2023
Authors: Folajimi Oladimeji Shorunke
Abstract:
Background: As of October 9, 2023, diphtheria had been noted to be re-emerging in four African countries: Algeria, Guinea, Niger, and Nigeria. A total of 14,587 cases with a case fatality rate of 4.1% had been reported across these regions, with Nigeria alone responsible for over 90% of the cases. In Taraba State, Nigeria, the index case of diphtheria was reported in epidemic week 34, on August 24, 2023, with 75 confirmed cases found within 3 months of the index case and a case fatality rate of 1.3%. This study described the distribution, trend, and common symptoms found during the outbreak. Methods: The Taraba State diphtheria outbreak line list on the Surveillance Outbreak Response Management & Analysis System (SORMAS) for all 16 local government areas (LGAs) was analyzed using descriptive statistics (graphs, charts, and maps) for the period from 24th August to 25th November 2023. Primary data were collected using case investigation forms, and variables such as age, gender, date of disease onset, LGA of residence, and symptoms exhibited were recorded. Naso-pharyngeal and oro-pharyngeal samples were also collected for laboratory confirmation. The most common diphtheria symptoms during the outbreak were also highlighted. Results: A total of 75 diphtheria cases were diagnosed in 10 of the 16 LGAs in Taraba State between 24th August and 25th November 2023; 72% of the cases were female, with the 0-9 years age range having the highest proportion, 34 (45.3%), and the number of positive diagnoses decreasing with age. The northern part of the state had the highest proportion of cases, 68 (90.7%), with Ardo-Kola LGA having the highest, 28 (29%). The remaining 9.2% of cases were shared between the middle belt and the southern part of the state. The epi-curve took the characteristic shape of a propagated infection, with peaks at the 37th, 39th, and 45th epidemic weeks.
The most common symptoms found in cases were fever, 71 (94.7%); pharyngitis, 65 (86.7%); tonsillitis, 60 (80%); and laryngitis, 53 (71%). Conclusions: The number of confirmed cases of diphtheria in Taraba State, Nigeria between 24th August and 25th November 2023 was 75. Cases were more common among females than males and mostly affected children aged 0-9 years, with the northern part of the state most affected. The most common symptoms exhibited by cases were fever, pharyngitis, tonsillitis, and laryngitis.
Keywords: diphtheria outbreak, taraba nigeria, descriptive epidemiology, trend
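An epi-curve like the one described is a tally of case onsets per epidemic week, and the "peaks" of a propagated outbreak are its local maxima. A minimal sketch (the onset weeks below are hypothetical, not the Taraba line list):

```python
from collections import Counter

def epi_curve(onset_weeks):
    """Tally confirmed cases by epidemic week of symptom onset."""
    counts = Counter(onset_weeks)
    return [(week, counts[week]) for week in sorted(counts)]

def peak_weeks(curve):
    """Weeks whose counts exceed both neighbours: the peaks of a propagated curve."""
    peaks = []
    for i, (week, n) in enumerate(curve):
        left = curve[i - 1][1] if i > 0 else 0
        right = curve[i + 1][1] if i < len(curve) - 1 else 0
        if n > left and n > right:
            peaks.append(week)
    return peaks
```

In practice the tallies come from the SORMAS line list's date-of-onset field converted to epidemic weeks.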
158 QSAR Study on Diverse Compounds for Effects on Thermal Stability of a Monoclonal Antibody
Authors: Olubukayo-Opeyemi Oyetayo, Oscar Mendez-Lucio, Andreas Bender, Hans Kiefer
Abstract:
The thermal melting curve of a protein provides information on its conformational stability and can provide cues about its aggregation behavior. Naturally occurring osmolytes have been shown to improve the thermal stability of most proteins in a concentration-dependent manner. They are therefore commonly employed as additives in therapeutic protein purification and formulation. A number of intertwined and seemingly conflicting mechanisms have been put forward to explain the observed stabilizing effects, the most prominent being the preferential exclusion mechanism. We attempted to probe and summarize molecular mechanisms for the thermal stabilization of a monoclonal antibody (mAb) by developing quantitative structure-activity relationships using a rationally selected library of 120 osmolyte-like compounds from the polyhydric alcohol, amino acid, and methylamine classes. Thermal stabilization potencies were experimentally determined by thermal shift assays based on differential scanning fluorimetry. The cross-validated QSAR model was developed by partial least squares regression using descriptors generated with the Molecular Operating Environment software. Careful evaluation of the results using the variable importance in projection (VIP) parameter and regression coefficients guided the selection of the descriptors most relevant to mAb thermal stability. For the mAb studied, at pH 7, the thermal stabilization effects of the tested compounds correlated positively with their fractional polar surface area and inversely with their fractional hydrophobic surface area. We cannot claim that the observed trends are universal for osmolyte-protein interactions because of protein-specific effects; however, this approach should guide the quick selection of (de)stabilizing compounds for a protein from a chemical library.
Further work with a larger variety of proteins and at different pH values would help derive a solid explanation of the nature of favorable osmolyte-protein interactions for improved thermal stability. This approach may be beneficial in the design of novel protein stabilizers with optimal property values, especially when the influence of solution conditions, such as the pH and buffer species, and the protein properties are factored in.
Keywords: thermal stability, monoclonal antibodies, quantitative structure-activity relationships, osmolytes
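Before fitting a full PLS model, positive and inverse trends like those reported (polar vs. hydrophobic fractional surface area against the measured melting-temperature shift) can be screened one descriptor at a time with a Pearson correlation. A minimal sketch, not the authors' PLS pipeline, with hypothetical values:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between a descriptor and a response."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical fractional polar surface areas vs. thermal shifts (degC)
fpsa = [0.20, 0.35, 0.50, 0.65]
delta_tm = [-0.5, 0.4, 1.1, 1.8]
r = pearson_r(fpsa, delta_tm)
```

A strongly positive r here would mirror the positive FPSA trend in the abstract; PLS then handles the correlated-descriptor case that simple correlations cannot.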
157 Cover Layer Evaluation in Soil Organic Matter of Mixing and Compressed Unsaturated
Authors: Nayara Torres B. Acioli, José Fernando T. Jucá
Abstract:
The uncontrolled emission of gases from urban waste landfills located near urban areas is a social and environmental problem common in Brazilian cities. Several environmental impacts at the local and global scale may be generated by atmospheric contamination from the biogas resulting from the decomposition of urban solid materials. In Brazil, small cities, with populations under 50,000 inhabitants, account for roughly 90% of all municipalities according to the 2011 IBGE census, and most landfill cover layers there are composed of pure clayey soil. Covers built with pure soil may retain up to 60% of the methane; the remaining 40% may disperse into the atmosphere. In view of these figures, the oxidative cover layer deserves study, with the aim of reducing the percentage released to the atmosphere by converting methane into carbon dioxide, which is almost 20 times less polluting than methane. This paper presents the results of studies on the characteristics of the soil used for the oxidative cover layer of the experimental landfill of Solid Urban Residues (SUR) built in Muribeca-PE, Brazil, supported by the Group of Solid Residues (GSR) at the Federal University of Pernambuco. The studies comprised laboratory suction experiments (determining the characteristic curve), granulometry, and permeability tests; in soil with saturation above 85%, small increments of water produced dramatic drops in air permeability, following the existing Brazilian standard for this procedure. Suction was studied, as in the other tests, by dividing the 60 cm oxidative cover layer into an upper half (0.1 m to 0.3 m) and a lower half (0.4 m to 0.6 m). The paper also presents the consequences of the lixiviation of fine materials over the 5 years since completion of the landfill, which increased its permeability.
Concerning humidity, moisture is mostly retained in the upper half, which comprises the mixture, with a difference on the order of 8 percent between the upper and lower halves and the lowest suction near the surface. These results reveal the efficiency of the oxidative cover layer in retaining rainwater; it also has a lower cost compared with other types of cover, making it more widely available as an alternative solution for the appropriate disposal of residues.
Keywords: oxidative coverage layer, permeability, suction, saturation
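Determining the characteristic curve, as mentioned above, is usually followed by fitting a water-retention model to the suction data; the van Genuchten form is a common choice, though the abstract does not say which model was used. A sketch under that assumption, with purely illustrative parameters (not fitted to the Muribeca soil):

```python
def van_genuchten_theta(suction_kpa, alpha=0.05, n=1.8,
                        theta_r=0.05, theta_s=0.45):
    """Volumetric water content from the van Genuchten retention model:
    theta = theta_r + (theta_s - theta_r) * [1 + (alpha*psi)^n]^(-(1 - 1/n)).
    alpha (1/kPa), n, theta_r, theta_s here are illustrative assumptions."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * suction_kpa) ** n) ** (-m)  # effective saturation
    return theta_r + (theta_s - theta_r) * se
```

The fitted curve links the measured suction in each half-layer to its water content, which is how the roughly 8-percent moisture difference between halves can be interpreted.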
156 Agronomic Test to Determine the Efficiency of Hydrothermally Treated Alkaline Igneous Rocks and Their Potassium Fertilizing Capacity
Authors: Aaron Herve Mbwe Mbissik, Lotfi Khiari, Otmane Raji, Abdellatif Elghali, Abdelkarim Lajili, Muhammad Ouabid, Martin Jemo, Jean-Louis Bodinier
Abstract:
Potassium (K) is an essential macronutrient for plant growth, helping to regulate several physiological and metabolic processes. Evaporite-related potash salts, mainly sylvite (K chloride, KCl), are the principal source of K for the fertilizer industry. However, given the high potash-supply risk associated with considerable price fluctuations and an uneven geographic distribution that disadvantages most agriculture-based developing countries, the development of alternative sources of fertilizer K is imperative to maintain adequate crop yields, reduce yield gaps, and ensure food security. Alkaline igneous rocks containing significant amounts of K-rich silicate minerals such as K-feldspar are increasingly seen as the best available alternative. However, these rocks may require hydrothermal treatment to enhance the release of potassium. In this study, we evaluated the fertilizing capacity of raw and hydrothermally treated K-bearing silicate rocks from different areas of Morocco. The effectiveness of the rock powders was tested in a greenhouse experiment using ryegrass (Lolium multiflorum), comparing them to a control (no K added) and to a conventional fertilizer (muriate of potash, MOP or KCl). The trial was conducted in a randomized complete block design with three replications, and plants were grown on K-depleted soils for three growing cycles. To achieve our objective, in addition to analyzing the MOP response curve and the different biomasses, we examined three coefficients: K uptake, apparent K recovery (AKR), and relative K efficiency (RKE). The results showed that, based on the optimum economic rate of MOP (230 kg K ha⁻¹) and the optimum yield (44,000 kg K ha⁻¹), the efficiency of the K silicate rocks was as high as that of MOP.
Although the plants took up only half of the K supplied by the powdered rock, the hydrothermally treated material was found to be satisfactory, with biomass reaching the optimum economic limit up to the second crop cycle. In comparison, the AKR of MOP (98.6%) and its RKE in the first cycle were higher than those of our materials (39% and 38%, respectively). The mixture of raw and hydrothermally treated materials could therefore be an appropriate solution for long-term agronomic use based on the obtained results.
Keywords: K-uptake, AKR, RKE, K-bearing silicate rock, MOP
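The two efficiency coefficients can be sketched from their common agronomic definitions; the abstract does not give the exact formulas used, so the forms below are assumptions, and the numbers in the test are illustrative:

```python
def apparent_k_recovery(uptake_fertilized, uptake_control, k_applied):
    """AKR (%): share of applied K recovered in the plant, net of the
    unfertilized control. All quantities in the same units (e.g. kg K/ha)."""
    return 100.0 * (uptake_fertilized - uptake_control) / k_applied

def relative_k_efficiency(uptake_test, uptake_control, uptake_mop):
    """RKE (%): K-uptake gain from a test source (rock powder) relative to
    the gain from the MOP reference at the same K rate."""
    return 100.0 * (uptake_test - uptake_control) / (uptake_mop - uptake_control)
```

With these definitions, an RKE near 100% would mean the rock powder supplies K as effectively as MOP at the tested rate.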
155 Population Pharmacokinetics of Levofloxacin and Moxifloxacin, and the Probability of Target Attainment in Ethiopian Patients with Multi-Drug Resistant Tuberculosis
Authors: Temesgen Sidamo, Prakruti S. Rao, Eleni Akllilu, Workineh Shibeshi, Yumi Park, Yong-Soon Cho, Jae-Gook Shin, Scott K. Heysell, Stellah G. Mpagama, Ephrem Engidawork
Abstract:
The fluoroquinolones (FQs) are used off-label for the treatment of multidrug-resistant tuberculosis (MDR-TB), and for evaluation in shortening the duration of drug-susceptible TB in recently prioritized regimens. Within the class, levofloxacin (LFX) and moxifloxacin (MXF) play a substantial role in ensuring success in treatment outcomes. However, sub-therapeutic plasma concentrations of either LFX or MXF may drive unfavorable treatment outcomes. To the best of our knowledge, the pharmacokinetics of LFX and MXF in Ethiopian patients with MDR-TB have not yet been investigated. Therefore, the aim of this study was to develop a population pharmacokinetic (PopPK) model of levofloxacin (LFX) and moxifloxacin (MXF) and assess the percent probability of target attainment (PTA) as defined by the ratio of the area under the plasma concentration-time curve over 24-h (AUC0-24) and the in vitro minimum inhibitory concentration (MIC) (AUC0-24/MIC) in Ethiopian MDR-TB patients. Steady-state plasma was collected from 39 MDR-TB patients enrolled in the programmatic treatment course and the drug concentrations were determined using optimized liquid chromatography-tandem mass spectrometry. In addition, the in vitro MIC of the patients' pretreatment clinical isolates was determined. PopPK and simulations were run at various doses, and PK parameters were estimated. The effect of covariates on the PK parameters and the PTA for maximum mycobacterial kill and resistance prevention was also investigated. LFX and MXF both fit in a one-compartment model with adjustments. The apparent volume of distribution (V) and clearance (CL) of LFX were influenced by serum creatinine (Scr), whereas the absorption constant (Ka) and V of MXF were influenced by Scr and BMI, respectively. 
The PTA for LFX maximal mycobacterial kill at the critical MIC of 0.5 mg/L was 29%, 62%, and 95% with the simulated 750 mg, 1000 mg, and 1500 mg doses, respectively, whereas the PTA for resistance prevention at 1500 mg was only 4.8%, with none of the lower doses achieving this target. At the critical MIC of 0.25 mg/L, there was no difference in the PTA (94.4%) for maximum bacterial kill among the simulated doses of MXF (600 mg, 800 mg, and 1000 mg), but the PTA for resistance prevention improved proportionately with dose. Standard LFX and MXF doses may not provide adequate drug exposure. LFX PopPK is more predictable for maximum mycobacterial kill, whereas MXF's resistance prevention target increases with dose. Scr and BMI are likely to be important covariates in dose optimization or therapeutic drug monitoring (TDM) studies in Ethiopian patients.
Keywords: population PK, PTA, moxifloxacin, levofloxacin, MDR-TB patients, Ethiopia
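A PTA computation of the kind described can be sketched as a Monte Carlo simulation over a simplified model. Everything below is an illustrative assumption, not the study's fitted PopPK model: complete oral bioavailability, log-normal between-subject variability in clearance, and the AUC0-24/MIC target value and parameter numbers are all hypothetical:

```python
import random
from math import exp

def auc_0_24(dose_mg, cl_l_per_h):
    """Steady-state AUC0-24 (mg*h/L) for once-daily oral dosing.
    Over a dosing interval at steady state, AUC = F*Dose/CL; F assumed 1."""
    return dose_mg / cl_l_per_h

def prob_target_attainment(dose_mg, cl_median, cl_omega, mic, target_ratio,
                           n_subjects=10000, seed=42):
    """Fraction of simulated subjects achieving AUC0-24/MIC >= target_ratio.
    Clearance is drawn log-normally: CL = median * exp(N(0, omega))."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_subjects):
        cl = cl_median * exp(rng.gauss(0.0, cl_omega))
        if auc_0_24(dose_mg, cl) / mic >= target_ratio:
            hits += 1
    return hits / n_subjects
```

With the same random seed, raising the dose can only add subjects to the attainment set, which reproduces the dose-proportional PTA behaviour described above.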
154 Understanding the Relationship between Community and the Preservation of Cultural Landscapes - Focusing on Organically Evolved Landscapes
Authors: Adhithy Menon E., Biju C. A.
Abstract:
The concept of preserving heritage monuments was first introduced to the public in the 1960s. In the 1990s, the concept of cultural landscapes gained importance, emphasizing the role of culture and heritage in the context of the landscape. This paper is primarily concerned with the second category of cultural landscapes, organically evolved landscapes, as they represent a complex network of tangible and intangible elements and the environment, and the connections they share with the communities in which they are situated. The United Nations Educational, Scientific, and Cultural Organization (UNESCO) has identified 39 cultural sites as being in danger, including the Iranian city of Bam and the historic city of Zabid in Yemen. To ensure their protection in the future, it is necessary to conduct a detailed analysis of the factors contributing to this degradation. An analysis of selected cultural landscapes from around the world is conducted to determine which parameters cause their degradation. The paper pursues the following objectives: to understand cultural landscapes and their importance for development; to examine the various criteria for identifying cultural landscapes, their classifications, and the agencies that focus on their protection; to identify and analyze the parameters contributing to the deterioration of cultural landscapes based on literature and case studies (the cultural landscapes of Sintra, Rio de Janeiro, and Varanasi); and, as a final step, to develop strategies to enhance deteriorating cultural landscapes based on these parameters.
The major findings of the study concern the impact of community on the derived parameters: integrity (natural factors, natural disasters, demolition of structures, deterioration of materials); authenticity (living elements, sense of place, building techniques, religious context, artistic expression); public participation (revenue, dependence on the locale); awareness (demolition of structures, resource management); disaster management; environmental impact; and maintenance of the cultural landscape (linkages with other sites, dependence on the locale, revenue, resource management). The parameters of authenticity, public participation, awareness, and maintenance are directly related to the community in which the cultural landscape is located. Therefore, by focusing on the community and addressing the identified parameters, the deterioration curve of cultural landscapes can be altered.
Keywords: community, cultural landscapes, heritage, organically evolved, public participation
Procedia PDF Downloads 87
153 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback
Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu
Abstract:
With the rapid development of computer technology, the design of computers and keyboards moves towards a trend of slimness. This change in mobile input devices directly influences users’ behavior. Although multi-touch applications allow entering text through a virtual keyboard, the performance, feedback, and comfort of the technology are inferior to those of a traditional keyboard, and while manufacturers have launched mobile touch keyboards and projection keyboards, their performance has not been satisfactory. Therefore, this study examined the design factors of slim pressure-sensitive keyboards. The factors were evaluated with an objective evaluation (accuracy and speed) and a subjective evaluation (operability, recognition, feedback, and difficulty) depending on the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and actuation force (35±10 g, 60±10 g, and 85±10 g) of the keys. Moreover, MANOVA and Taguchi methods (using signal-to-noise ratios) were applied to find the optimal level of each design factor. The research participants were divided into two groups by typing speed (30 words/minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design, and a representative model of the research samples was established for input task testing. The findings of this study showed that participants with low typing speed primarily relied on vision to recognize the keys, while those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, a combination of keyboard design factors that might result in higher performance and satisfaction was identified (L-shaped, 3 mm, and 60±10 g) as the optimal combination. The learning curve was analyzed and compared with that of a traditional standard keyboard to investigate the influence of user experience on keyboard operation. 
The research results indicated that the optimal combination still provided input performance inferior to that of a standard keyboard. The results could serve as a reference for the development of related products in industry and can be applied broadly to touch devices and input interfaces with which people interact.
Keywords: input performance, mobile device, slim keyboard, tactile feedback
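The Taguchi signal-to-noise analysis used above can be sketched in a few lines. The accuracy scores and factor-level labels below are hypothetical, serving only to show how the larger-is-better S/N ratio ranks design combinations:

```python
import math

def sn_larger_is_better(values):
    """Taguchi signal-to-noise ratio (larger-is-better):
    S/N = -10 * log10(mean(1 / y^2))."""
    return -10 * math.log10(sum(1 / y**2 for y in values) / len(values))

# Hypothetical typing-accuracy scores (%) from repeated trials of two designs
accuracy = {
    "L-shaped, 3mm, 60g": [92.0, 95.0, 93.5],
    "circle, flat, 35g":  [81.0, 78.5, 80.0],
}

for level, ys in accuracy.items():
    print(f"{level}: S/N = {sn_larger_is_better(ys):.2f} dB")
```

The design combination with the highest S/N ratio is the most robust under this criterion; in a full Taguchi study, the ratio is computed per factor level across the fractional-factorial runs.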
Procedia PDF Downloads 299
152 Deformation Characteristics of Fire Damaged and Rehabilitated Normal Strength Concrete Beams
Authors: Yeo Kyeong Lee, Hae Won Min, Ji Yeon Kang, Hee Sun Kim, Yeong Soo Shin
Abstract:
Fire incidents have steadily increased over recent years according to the National Emergency Management Agency of South Korea. Even though most fire incidents involving property damage have occurred in buildings, rehabilitation has often not been properly carried out with consideration of structural safety. Therefore, this study aims at evaluating rehabilitation effects on fire-damaged normal strength concrete beams through experiments and finite element analyses. For the experiments, reinforced concrete beams were fabricated with a design concrete strength of 21 MPa. Two cover thicknesses, 40 mm and 50 mm, were used. After curing, the fabricated beams were heated for 1 hour or 2 hours according to the ISO-834 standard time-temperature curve. Rehabilitation was done by removing the damaged part of the cover thickness and filling polymeric mortar into the removed part. Both fire-damaged beams and rehabilitated beams were tested under four-point loading to observe structural behaviors and the rehabilitation effect. To verify the experiments, finite element (FE) models for structural analysis were generated using the commercial software ABAQUS 6.10-3. For the rehabilitated beam models, integrated temperature-structural analyses were performed in advance to obtain the geometries of the fire-damaged beams. In addition to the fire-damaged beam models, the rehabilitated part was added with the material properties of polymeric mortar. Three-dimensional continuum brick elements were used for both the temperature and structural analyses. The same loading and boundary conditions as in the experiments were applied to the rehabilitated beam models, and non-linear geometrical analyses were performed. Test results showed that the maximum loads of the rehabilitated beams were 8~10% higher than those of the non-rehabilitated beams and even 1~6% higher than those of the non-fire-damaged beam. 
The stiffness of the rehabilitated beams was also larger than that of the non-rehabilitated beams but smaller than that of the non-fire-damaged beams. In addition, the structural behaviors predicted by the analyses showed a good rehabilitation effect, and the predicted load-deflection curves were similar to the experimental results. In this study, both the experimental and analytical results demonstrated a good rehabilitation effect on the fire-damaged normal strength concrete beams. Furthermore, the proposed analytical method can be used to accurately predict the structural behaviors of rehabilitated and fire-damaged concrete beams without time- and cost-consuming experiments.
Keywords: fire, normal strength concrete, rehabilitation, reinforced concrete beam
Procedia PDF Downloads 508
151 Sensitivity Analysis of the Heat Exchanger Design in Net Power Oxy-Combustion Cycle for Carbon Capture
Authors: Hirbod Varasteh, Hamidreza Gohari Darabkhani
Abstract:
Global warming and its impact on climate change is one of the main challenges of the current century. Global warming is mainly due to the emission of greenhouse gases (GHG), and carbon dioxide (CO2) is known to be the major contributor to the GHG emission profile. While the energy sector is the primary source of CO2 emissions, Carbon Capture and Storage (CCS) is believed to be the solution for controlling them. Oxyfuel combustion (oxy-combustion) is one of the major technologies for capturing CO2 from power plants. For gas turbines, several oxy-combustion power cycles (oxyturbine cycles) have been investigated by means of thermodynamic analysis. The NetPower cycle is one of the leading oxyturbine power cycles, with almost full carbon capture capability from a natural gas fired power plant. In this manuscript, a sensitivity analysis of the heat exchanger design in the NetPower cycle is completed by means of process modelling. The heat capacity variation and supercritical CO2 with gaseous admixtures are considered in a multi-zone analysis with the Aspen Plus software. It is found that the heat exchanger design plays a major role in increasing the efficiency of the NetPower cycle. A pinch-point analysis is carried out to extract the composite and grand composite curves for the heat exchanger. In this paper, the relationship between cycle efficiency and the minimum approach temperature (∆Tmin) of the heat exchanger is also evaluated. An increase in ∆Tmin causes a decrease in the temperature of the recycled flue gases (RFG) and an overall decrease in the required power for the recycled gas compressor. The main challenge in the design of heat exchangers in power plants is the tradeoff between capital and operational costs: to achieve a lower ∆Tmin, a larger heat exchanger is required, which means a higher capital cost but better heat recovery and a lower operational cost. 
∆Tmin is therefore selected at the minimum point of the combined capital and operational cost curves. This study provides an insight into the NetPower oxy-combustion cycle’s performance analysis and operational conditions based on its heat exchanger design.
Keywords: carbon capture and storage, oxy-combustion, NetPower cycle, oxyturbine cycles, zero emission, heat exchanger design, supercritical carbon dioxide, oxy-fuel power plant, pinch point analysis
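The pinch-point idea described above can be illustrated with a minimal sketch: given hot and cold composite curves as temperature-duty profiles, the minimum approach temperature is the smallest vertical gap between them. All stream data below are hypothetical, not the NetPower cycle's actual curves:

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation (assumes xs sorted ascending)."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside range")

# Illustrative composite curves: temperature (deg C) vs cumulative duty (MW)
hot_H,  hot_T  = [0, 50, 100], [150, 300, 700]   # hot composite (heat source)
cold_H, cold_T = [0, 50, 100], [80, 280, 600]    # cold composite (heat sink)

# Scan the common duty range for the smallest hot-cold approach (the pinch)
duties = [i * 0.5 for i in range(201)]           # 0 .. 100 MW
gaps = [(interp(h, hot_H, hot_T) - interp(h, cold_H, cold_T), h) for h in duties]
dT_min, pinch_duty = min(gaps)
print(f"Minimum approach dTmin = {dT_min:.1f} C at duty {pinch_duty:.1f} MW")
```

In the cost tradeoff discussed above, this ∆Tmin would then be compared against the value minimizing total (capital plus operational) cost.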
Procedia PDF Downloads 204
150 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists for the first time to gain a profound understanding of the deepest biological functions. Solving biological problems may require High-Performance Computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even simulating an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data. 
It illustrates the indispensability of HPC in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. The article also indicates solutions to optimization problems and the benefits of Big Data for computational biology, and surveys the current state of the art and future generations of HPC computing with Big Data.
Keywords: high performance, big data, parallel computation, molecular data, computational biology
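The quadratic cost of the all-to-all comparisons mentioned above can be made concrete with a toy sketch; the sequences and the Hamming-distance metric are illustrative stand-ins for real alignment scoring:

```python
from itertools import combinations

def hamming(a, b):
    """Number of mismatched positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

# Toy DNA fragments; an all-to-all comparison over n sequences performs
# n*(n-1)/2 pairwise comparisons -- quadratic growth that quickly demands
# HPC resources at genomic scale.
seqs = ["ACGTACGT", "ACGTTCGT", "TCGTACGA", "ACGAACGT"]
pairs = list(combinations(range(len(seqs)), 2))
assert len(pairs) == len(seqs) * (len(seqs) - 1) // 2

dists = {(i, j): hamming(seqs[i], seqs[j]) for i, j in pairs}
print(dists)
```

At n = 4 this is only 6 comparisons; at n = 10⁶ sequences it is roughly 5 × 10¹¹, which is where the parallel decomposition the abstract alludes to becomes unavoidable.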
Procedia PDF Downloads 363
149 Estimation of Ribb Dam Catchment Sediment Yield and Reservoir Effective Life Using Soil and Water Assessment Tool Model and Empirical Methods
Authors: Getalem E. Haylia
Abstract:
The Ribb dam is one of the irrigation projects in the Upper Blue Nile basin, Ethiopia, intended to irrigate the Fogera plain. Reservoir sedimentation is a major problem because it reduces the useful reservoir capacity through the accumulation of sediments coming from the watershed. Estimates of sediment yield are needed for studies of reservoir sedimentation and for planning soil and water conservation measures. The objective of this study was to simulate the Ribb dam catchment sediment yield using the SWAT model and to estimate the Ribb reservoir's effective life according to trap efficiency methods. The Ribb dam catchment is found in the north-western Ethiopian highlands and belongs to the upper Blue Nile and Lake Tana basins. The Soil and Water Assessment Tool (SWAT) was selected to simulate flow and sediment yield in the Ribb dam catchment. Model sensitivity, calibration, and validation analyses at the Ambo Bahir site were performed with Sequential Uncertainty Fitting (SUFI-2). The flow data at this site were obtained by transforming the Lower Ribb gauge station (2002-2013) flow data using the Area Ratio Method. The sediment load was derived from the sediment concentration yield curve of the Ambo site. Streamflow results showed that the Nash-Sutcliffe efficiency coefficient (NSE) was 0.81 and the coefficient of determination (R²) was 0.86 in the calibration period (2004-2010), and 0.74 and 0.77, respectively, in the validation period (2011-2013). Using the same periods, the NSE and R² for the sediment load calibration were 0.85 and 0.79, and for the validation 0.83 and 0.78, respectively. The simulated average daily flow rate and sediment yield generated from the Ribb dam watershed were 3.38 m³/s and 1772.96 tons/km²/yr, respectively. The effective life of the Ribb reservoir was estimated using the empirical methods of Brune (1953), Churchill (1948), and Brown (1958) and found to be 30, 38, and 29 years, respectively. 
To conclude, massive sediment loads come from the steep-slope agricultural areas, and approximately 98-100% of the incoming annual sediment load is trapped by the Ribb reservoir. In the Ribb catchment as well as the reservoir, systematic and thorough consideration of technical, social, environmental, and catchment management practices should be made to lengthen the useful life of the Ribb reservoir.
Keywords: catchment, reservoir effective life, reservoir sedimentation, Ribb, sediment yield, SWAT model
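The trap-efficiency reasoning behind the effective-life estimates above can be sketched as follows. The sketch assumes a commonly quoted curve fit to Brune's medium curve, TE = (C/I) / (0.012 + 1.02·C/I) with C/I the capacity-inflow ratio in years; the capacity, inflow, and sediment figures below are hypothetical, not the Ribb design values:

```python
def brune_te(ci_ratio):
    """Trap efficiency (fraction) from an assumed curve fit to
    Brune's medium curve; ci_ratio = reservoir capacity / annual inflow."""
    return ci_ratio / (0.012 + 1.02 * ci_ratio)

def effective_life(capacity_mcm, inflow_mcm, sediment_mcm_per_yr, dead_fraction=0.2):
    """Years until storage drops to dead_fraction of the original capacity,
    re-evaluating trap efficiency each year as the reservoir fills."""
    cap, years = capacity_mcm, 0
    while cap > dead_fraction * capacity_mcm and years < 1000:
        cap -= brune_te(cap / inflow_mcm) * sediment_mcm_per_yr
        years += 1
    return years

# Hypothetical reservoir: 234 MCM capacity, 400 MCM/yr inflow, 5 MCM/yr sediment
life = effective_life(capacity_mcm=234.0, inflow_mcm=400.0, sediment_mcm_per_yr=5.0)
print(f"Estimated effective life: {life} years")
```

Because trap efficiency stays above ~90% until the reservoir is nearly full of sediment, almost all incoming load is retained, consistent with the 98-100% trapping noted above.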
Procedia PDF Downloads 187
148 Acute Effects of Exogenous Hormone Treatments on Postprandial Acylation Stimulating Protein Levels in Ovariectomized Rats After a Fat Load
Authors: Bashair Al Riyami
Abstract:
Background: Acylation stimulating protein (ASP) is a small basic protein that was isolated based on its function as a potent lipogenic factor. The role of ASP in lipid metabolism has been described in numerous studies, and several association studies suggest that ASP may play a prominent role in female fat metabolism and distribution. Progesterone is established as a female lipogenic hormone; however, the mechanisms by which progesterone exerts its effects are not fully understood. Aim: Since ASP is an established potent lipogenic factor with a known mechanism of action, in this study we aim to investigate the acute effects of different hormone treatments on ASP levels in vivo after a fat load. Methods: This is a longitudinal study including 24 female Wistar rats that were randomly divided into 4 groups, including controls (n=6). The rats were ovariectomized, and fourteen days later the fasting rats were injected subcutaneously with a single dose of one of the hormone treatments (progesterone, estrogen, or testosterone). An hour later, olive oil was administered by oral gavage, and plasma blood samples were collected at several time points after oil administration for ASP and triglyceride (TG) measurements. The area under the curve (TG-AUC) was calculated to represent TG clearance. Results: RM-ANCOVA and post hoc analysis showed that only the progesterone-treated group had a significant postprandial ASP increase at two hours compared to basal levels and to the controls (439.8±62.4 vs 253.45±59.03 ug/ml), P=0.04. Interestingly, increased postprandial ASP levels correlated negatively with the corresponding TG levels and TG-AUC across the postprandial period, most apparent in the progesterone- and testosterone-treated groups, which behaved in opposite manners. ASP levels were 3-fold higher in the progesterone-treated group compared to the testosterone-treated group, whereas TG-AUC was significantly lower in the progesterone-treated group than in the testosterone-treated group. 
Conclusion: These findings suggest that progesterone treatment enhances ASP production and TG clearance simultaneously. The strong association of postprandial ASP levels and TG clearance in the progesterone-treated group supports the notion of a stimulatory role for progesterone in ASP-mediated TG clearance. This is the first functional study to demonstrate a cause-effect relationship between hormone treatment and ASP levels in vivo. These findings are promising and may contribute to further understanding the mechanism by which progesterone functions as a female lipogenic hormone through enhancing ASP production and plasma levels.
Keywords: ASP, lipids, sex hormones, Wistar rats
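The TG-AUC used above as a clearance index is typically computed with the trapezoidal rule over the sampling times. A minimal sketch with hypothetical triglyceride readings:

```python
def auc_trapezoid(times, values):
    """Area under the curve by the trapezoidal rule."""
    return sum((t1 - t0) * (v0 + v1) / 2
               for t0, t1, v0, v1 in zip(times, times[1:], values, values[1:]))

# Hypothetical postprandial triglyceride readings (mmol/L) at sampling times (min)
t  = [0, 30, 60, 120, 180, 240]
tg = [1.0, 1.6, 2.1, 1.8, 1.4, 1.1]
print(f"TG-AUC = {auc_trapezoid(t, tg):.1f} mmol*min/L")
```

A lower TG-AUC for the same fat load indicates faster plasma clearance, which is how the progesterone and testosterone groups were contrasted above.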
Procedia PDF Downloads 342
147 Statistical Design of Central Point for Evaluate the Combination of PH and Cinnamon Essential Oil on the Antioxidant Activity Using the ABTS Technique
Authors: H. Minor-Pérez, A. M. Mota-Silva, S. Ortiz-Barrios
Abstract:
Substances of vegetable origin with antioxidant capacity have high potential for application in the preservation of some foods; they can, for example, prevent or reduce the oxidation of lipids. However, a food is a complex system with a wide variety of components which can reduce or eliminate this antioxidant capacity. The antioxidant activity can be determined with the ABTS technique. The radical ABTS+ is generated from 2,2´-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS). This radical is a bluish-green compound, stable and with an absorption spectrum in the UV-visible range. The addition of antioxidants causes discoloration, which can be reported as a percentage of inhibition of the cation radical ABTS+. The objective of this study was to evaluate the effect of the combination of pH and essential oil of cinnamon (EOC) on the inhibition of the radical ABTS+, using a central-point statistical design (Design Expert) to obtain mathematical models that describe this phenomenon. Seventeen treatments were evaluated, with combinations of pH 5, 6, and 7 (citrate-phosphate buffer) and essential oil of cinnamon concentration (C) of 0 µg/mL, 100 µg/mL, and 200 µg/mL. The samples were analyzed using the ABTS technique. The reagent was dissolved in 80% methanol to standardize the absorbance to 0.7 ± 0.1 at 754 nm. The samples were then mixed with the standardized ABTS reagent, and the absorbance of each treatment was read at 754 nm after 1 min and 7 min. A standard curve with vitamin C was used, and the values were reported as inhibition (%) of the radical ABTS+. The statistical analysis shows that the experimental results fitted a quadratic model at both 1 min and 7 min. This model describes the independent influence of the investigated factors, pH and cinnamon essential oil concentration (µg/mL), the effect of the pH*C interaction, and the quadratic terms pH² and C². 
The model obtained at 1 min was Y = 10.33684 − 3.98118·pH + 1.17031·C + 0.62745·pH² − 3.26675×10⁻³·C² − 0.013112·pH·C, where Y is the response variable; the coefficient of determination was 0.9949. At 7 min the model obtained was Y = −10.89710 + 1.52341·pH + 1.32892·C + 0.47953·pH² − 3.56605×10⁻ ·C² − 0.034687·pH·C, with a coefficient of determination of 0.9970. This means that less than 1% of the total variation is not explained by the developed models. At 100 µg/mL of EOC, inhibition percentages of 80%, 84%, and 97% were obtained for pH values of 5, 6, and 7, respectively, while at 200 µg/mL the inhibition (%) was very similar across treatments, close to 97% at all three pH values. In conclusion, the pH does not have a significant effect on the antioxidant capacity, while the concentration of EOC is decisive for the antioxidant capacity. The authors acknowledge the funding provided by CONACYT for project 131998.
Keywords: antioxidant activity, ABTS technique, essential oil of cinnamon, mathematical models
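The fitted 1-min response surface can be evaluated directly. The sketch below plugs the reported 1-min coefficients into the quadratic model; the predictions are those of the fitted surface, not the raw experimental readings quoted in the text:

```python
def inhibition_1min(ph, c):
    """Quadratic response surface at 1 min (coefficients from the text);
    returns predicted % inhibition of the ABTS+ radical, C in ug/mL."""
    return (10.33684 - 3.98118 * ph + 1.17031 * c
            + 0.62745 * ph**2 - 3.26675e-3 * c**2 - 0.013112 * ph * c)

for ph in (5, 6, 7):
    print(f"pH {ph}, 100 ug/mL EOC -> {inhibition_1min(ph, 100):.1f}% predicted inhibition")
```

As the model's interaction and quadratic terms suggest, the predicted inhibition rises modestly with pH at fixed EOC, while the C terms dominate the response.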
Procedia PDF Downloads 401
146 Modeling the Effects of Leachate-Impacted Groundwater on the Water Quality of a Large Tidal River
Authors: Emery Coppola Jr., Marwan Sadat, Il Kim, Diane Trube, Richard Kurisko
Abstract:
Contamination sites like landfills often pose significant risks to receptors like surface water bodies. Surface water bodies are often a source of recreation, including fishing and swimming, which not only enhances their value but also serves as a direct exposure pathway to humans, increasing their need for protection from water quality degradation. In this paper, a case study presents the potential effects of leachate-impacted groundwater from a large closed sanitary landfill on the surface water quality of the nearby Raritan River, situated in New Jersey. The study, performed over a two-year period, included in-depth field evaluation of both the groundwater and surface water systems and was supplemented by computer modeling. The analysis required delineation of a representative average daily groundwater discharge from the Landfill shoreline into the large, highly tidal Raritan River, with a corresponding estimate of the daily mass loading of potential contaminants of concern. The average daily groundwater discharge into the river was estimated from a high-resolution water level study and a 24-hour constant-rate aquifer pumping test. The significant tidal effects induced on groundwater levels during the aquifer pumping test were filtered out using an advanced algorithm, from which aquifer parameter values were estimated using conventional curve-matching techniques. The estimated hydraulic conductivity values obtained from individual observation wells closely agree with tidally-derived values for the same wells. Numerous models were developed and used to simulate groundwater contaminant transport and surface water quality impacts. MODFLOW with MT3DMS was used to simulate the transport of potential contaminants of concern from the down-gradient edge of the Landfill to the Raritan River shoreline. A surface water dispersion model based upon a bathymetric and flow study of the river was used to simulate contaminant concentrations over space within the river. 
The modeling results helped demonstrate that, because of natural attenuation, the Landfill does not have a measurable impact on the river, which was confirmed by an extensive surface water quality study.
Keywords: groundwater flow and contaminant transport modeling, groundwater/surface water interaction, landfill leachate, surface water quality modeling
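The tidal filtering step mentioned above can be illustrated with a much simpler stand-in than the paper's advanced algorithm: a centered moving average spanning roughly one tidal cycle strongly suppresses a semidiurnal oscillation superimposed on a pumping drawdown trend. The synthetic water levels below are illustrative only:

```python
import math

def moving_average(series, window):
    """Centered moving average with an odd window; trims the edges."""
    half = window // 2
    return [sum(series[i - half:i + half + 1]) / window
            for i in range(half, len(series) - half)]

# Synthetic hourly water levels: linear drawdown trend plus a semidiurnal
# tide (M2 period ~12.42 h). A toy stand-in for the study's filter.
hours = range(72)
levels = [10.0 - 0.02 * t + 0.3 * math.sin(2 * math.pi * t / 12.42)
          for t in hours]

# Averaging over ~one tidal cycle (13-hour window) suppresses the oscillation,
# leaving the drawdown signal used for curve matching.
filtered = moving_average(levels, 13)
```

After filtering, the residual tidal amplitude drops from 0.3 m to a few centimeters, leaving a record suitable for conventional type-curve matching.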
Procedia PDF Downloads 261
145 Demarcating Wetting States in Pressure-Driven Flows by Poiseuille Number
Authors: Anvesh Gaddam, Amit Agrawal, Suhas Joshi, Mark Thompson
Abstract:
An increase in the surface area to volume ratio with a decrease in characteristic length scale leads to a rapid increase in the pressure drop across a microchannel. Texturing the microchannel surfaces reduces the effective surface area, thereby decreasing the pressure drop. Surface texturing introduces two wetting states: a metastable Cassie-Baxter state and a stable Wenzel state. Predicting the wetting transition in textured microchannels is essential for identifying the optimal parameters leading to maximum drag reduction. Optical methods allow visualization only in confined areas; therefore, obtaining whole-field information on the wetting transition is challenging. In this work, we propose a non-invasive method to capture wetting transitions in textured microchannels under flow conditions. To this end, we tracked the behavior of the Poiseuille number Po = f·Re (with f the friction factor and Re the Reynolds number) for a range of flow rates (5 < Re < 50), and different wetting states were qualitatively demarcated by observing the inflection points in the f·Re curve. Microchannels with both longitudinal and transverse ribs were fabricated with a fixed gas fraction (δ, the ratio of shear-free area to total area) and at different confinement ratios (ε, the ratio of rib height to channel height). The measured pressure drop values for all flow rates across the textured microchannels were converted into Poiseuille numbers. The transient behavior of the pressure drop across the textured microchannels revealed the collapse of the liquid-gas interface into the gas cavities. Three wetting states were observed at ε = 0.65 for both longitudinal and transverse ribs, whereas an early transition occurred at Re ~ 35 for longitudinal ribs at ε = 0.5, due to spontaneous flooding of the gas cavities as the liquid-gas interface ruptured at the inlet. In addition, the pressure drop in the Wenzel state was found to be less than in the Cassie-Baxter state. 
Three-dimensional numerical simulations confirmed the initiation of the completely wetted Wenzel state in the textured microchannels. Furthermore, laser confocal microscopy was employed to identify the location of the liquid-gas interface in the Cassie-Baxter state. In conclusion, the present method can overcome the limitations posed by existing techniques and conveniently capture the wetting transition in textured microchannels.
Keywords: drag reduction, Poiseuille number, textured surfaces, wetting transition
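Converting a measured pressure drop into the Poiseuille number is a short calculation. The sketch below uses hypothetical water-flow numbers and the Darcy friction-factor convention, and sanity-checks the conversion against the smooth parallel-plate laminar limit, where Po = f·Re = 96:

```python
def poiseuille_number(dp, length, d_h, rho, mu, u):
    """Po = f*Re from a measured pressure drop (Darcy friction factor)."""
    f = dp * d_h / (length * 0.5 * rho * u**2)
    re = rho * u * d_h / mu
    return f * re

# Sanity check with plane Poiseuille flow between smooth parallel plates:
# dp = 12*mu*u*L/h^2, hydraulic diameter d_h = 2h, so Po should be 96.
rho, mu = 998.0, 1.0e-3          # water at ~20 C
h, length, u = 200e-6, 0.05, 0.1 # gap (m), channel length (m), mean velocity (m/s)
dp = 12 * mu * u * length / h**2
print(round(poiseuille_number(dp, length, 2 * h, rho, mu, u), 6))  # ~96
```

In a textured channel, Po falling below this smooth-wall baseline indicates shear-free (Cassie-Baxter) regions, and inflections in the Po-Re curve mark the wetting transitions discussed above.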
Procedia PDF Downloads 161
144 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation
Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber
Abstract:
Series arc faults appear frequently and unpredictably in low voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection devices such as AFCIs (arc fault circuit interrupters) have been used successfully in electrical networks to prevent damage and catastrophic incidents like fires. However, these devices do not allow series arc faults to be located on the line while it is operating. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V / 50 Hz home network. The method is validated through simulations in MATLAB. The fault location method uses the electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on an analysis of the V-I characteristics of the arc and consists basically of two antiparallel diodes and DC voltage sources. In the first step, the arc fault model is inserted at several different positions along the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generated at a different distance. In the second step, a fault map trace is created using signature coefficients obtained from Kirchhoff's equations, which allow a virtual decoupling of the line's mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated from the discrete Fourier transforms of the currents and voltages together with the fault distance value; these parameters are then substituted into Kirchhoff's equations. In the third step, the same procedure used to calculate the signature coefficients is employed, but this time considering hypothetical fault distances at which the fault could appear; in this step the fault distance is unknown. 
Iterating Kirchhoff's equations over stepped variations of the hypothetical fault distance yields a curve with a linear trend. Finally, the fault distance is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing currents registered in simulation with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load, and 11 different arc fault positions are considered for the fault map trace generation. By carrying out the complete simulation, the performance of the method and the perspectives of the work are presented.
Keywords: indoor power line, fault location, fault map trace, series arc fault
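The final localization step, intersecting two linear trends, can be sketched with a least-squares line fit. The signature-coefficient values below are hypothetical placeholders for the quantities produced in steps 2 and 3:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical signature coefficients vs assumed fault distance (m):
xs = [10, 20, 30, 40]
step2 = [5.0, 4.0, 3.0, 2.0]   # trend from the measured-current map trace
step3 = [1.0, 2.0, 3.0, 4.0]   # trend from the hypothetical-distance sweep

a1, b1 = fit_line(xs, step2)
a2, b2 = fit_line(xs, step3)
distance = (b2 - b1) / (a1 - a2)  # intersection of the two linear trends
print(f"Estimated fault distance: {distance:.1f} m")
```

With these toy trends the lines cross at 30 m; in the paper's method the same intersection of the step-2 and step-3 curves yields the series arc fault location.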
Procedia PDF Downloads 137
143 Modeling the Acquisition of Expertise in a Sequential Decision-Making Task
Authors: Cristóbal Moënne-Loccoz, Rodrigo C. Vergara, Vladimir López, Domingo Mery, Diego Cosmelli
Abstract:
Our daily interaction with computational interfaces is full of situations in which we go from inexperienced users to experts through self-motivated exploration of the same task. In many of these interactions, we must learn to find our way through a sequence of decisions and actions before obtaining the desired result. For instance, when drawing cash from an ATM, choices are presented in a step-by-step fashion so that a specific sequence of actions must be performed in order to produce the expected outcome. But, as they become experts in the use of such interfaces, do users adopt specific search and learning strategies? And if so, can we use this information to follow the process of expertise development and, eventually, predict future actions? This would be a critical step towards building truly adaptive interfaces that can facilitate interaction at different moments of the learning curve. Furthermore, it could provide a window into potential mechanisms underlying decision-making behavior in real-world scenarios. Here we tackle this question using a simple game interface that instantiates a 4-level binary decision tree (BDT) sequential decision-making task. Participants have to explore the interface and discover an underlying concept-icon mapping in order to complete the game. We develop a Hidden Markov Model (HMM)-based approach whereby a set of stereotyped, hierarchically related search behaviors act as hidden states. Using this model, we are able to track the decision-making process as participants explore, learn, and develop expertise in the use of the interface. Our results show that partitioning the problem space into such stereotyped strategies is sufficient to capture a host of exploratory and learning behaviors. Moreover, using the modular architecture of stereotyped strategies as a mixture of experts, we are able to query the experts about the user's most probable future actions. 
We show that, for those participants who learn the task, it becomes possible to predict their next decision above chance approximately halfway through the game. Our long-term goal is, on the basis of a better understanding of real-world decision-making processes, to inform the construction of interfaces that can establish dynamic conversations with their users in order to facilitate the development of expertise.
Keywords: behavioral modeling, expertise acquisition, hidden Markov models, sequential decision-making
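The HMM prediction machinery described above can be sketched with a minimal forward algorithm. The two hidden states, observation coding, and all probabilities below are toy assumptions, not the paper's fitted model:

```python
def forward_step(alpha, A, B, obs):
    """One normalized step of the HMM forward algorithm."""
    new = [B[j][obs] * sum(alpha[i] * A[i][j] for i in range(len(alpha)))
           for j in range(len(alpha))]
    z = sum(new)
    return [x / z for x in new]

# Toy 2-state model: hidden states {0: "explore", 1: "exploit"} search
# strategies; observations {0: wrong branch, 1: correct branch}.
A = [[0.7, 0.3],   # explorers tend to keep exploring...
     [0.1, 0.9]]   # ...while experts stay in the exploitation regime
B = [[0.5, 0.5],   # exploring: correct choices at chance level
     [0.1, 0.9]]   # exploiting: mostly correct choices

alpha = [0.5, 0.5]                  # uniform prior over strategies
for o in [0, 1, 1, 1]:              # an observed decision sequence
    alpha = forward_step(alpha, A, B, o)

# Predictive probability that the NEXT decision is correct
p_next = sum(alpha[i] * A[i][j] * B[j][1]
             for i in range(2) for j in range(2))
print(f"P(next decision correct) = {p_next:.2f}")
```

After a run of correct choices, the belief shifts towards the "exploit" state and the predicted probability of a correct next decision rises well above chance, mirroring the above-chance prediction reported halfway through the game.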
Procedia PDF Downloads 252
142 Preliminary Seismic Vulnerability Assessment of Existing Historic Masonry Building in Pristina, Kosovo
Authors: Florim Grajcevci, Flamur Grajcevci, Fatos Tahiri, Hamdi Kurteshi
Abstract:
The territory of Kosova lies in one of the most seismic-prone regions in Europe. Earthquakes are therefore not rare in Kosova, and when they have occurred, the consequences have been rather destructive. The importance of assessing the seismic resistance of existing masonry structures has drawn strong and growing interest in recent years. Engineering assessments, including those of vulnerability, building loss, and risk, are also of particular interest. This is because this rapidly developing field is related to the great impact of earthquakes on socioeconomic life in seismic-prone areas, as Kosova and Prishtina are. Such a study for the city of Prishtina may serve as a real basis for possible interventions in historic buildings, such as museums, mosques, and old residential buildings, in order to adequately strengthen and/or repair them and reduce the seismic risk to within acceptable limits. The vulnerability assessment procedure for building structures concentrates on the structural system, capacity, layout shape, and response parameters. These parameters describe the expected performance of these very important existing buildings, in terms of vulnerability and overall behavior, during earthquake excitations. The structural systems of existing historical buildings in Prishtina, Kosovo, are predominantly unreinforced brick or stone masonry with very high risk potential under the earthquakes expected in the region. Therefore, statistical analysis based on the observed damage-deformation, cracks, deflections, and critical building elements would provide more reliable and accurate results for regional assessments. An analytical technique was used to develop a preliminary evaluation methodology for assessing the seismic vulnerability of the respective structures. One of the main objectives is also to identify the buildings that are highly vulnerable to damage caused by inadequate seismic performance-response. 
Hence, the damage scores obtained from the derived vulnerability functions will be used to categorize the evaluated buildings as "stable", "intermediate", or "unstable". The vulnerability functions are generated from the basic damage-inducing parameters, namely the number of stories (S), lateral stiffness (LS), capacity curve of the total building structure (CCBS), interstory drift (IS), and overhang ratio (OR).
Keywords: vulnerability, ductility, seismic microzone, energy efficiency
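A parameter-weighted damage score of the kind described above can be sketched as follows. The weights and category thresholds here are illustrative assumptions for demonstration only; the paper's actual vulnerability functions are not given in the abstract.

```python
def damage_score(s, ls, ccbs, isd, orr):
    """Combine the five damage-inducing parameters (S, LS, CCBS, IS, OR)
    into a 0-100 score. Each input is assumed pre-normalized to [0, 1],
    where 1 is the most damage-prone value of that parameter.
    The weights below are hypothetical, not taken from the study."""
    weights = {"S": 0.15, "LS": 0.25, "CCBS": 0.30, "IS": 0.20, "OR": 0.10}
    values = {"S": s, "LS": ls, "CCBS": ccbs, "IS": isd, "OR": orr}
    return 100.0 * sum(weights[k] * values[k] for k in weights)

def categorize(score):
    """Map a damage score to the three categories used in the paper.
    The cut-off values (35 and 70) are assumed for illustration."""
    if score < 35.0:
        return "stable"
    if score < 70.0:
        return "intermediate"
    return "unstable"
```

In practice the weights would be calibrated against observed damage data, as the abstract's statistical analysis suggests.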
Procedia PDF Downloads 407
141 Comparative Study of Equivalent Linear and Non-Linear Ground Response Analysis for Rapar District of Kutch, India
Authors: Kulin Dave, Kapil Mohan
Abstract:
Earthquakes are considered the most destructive rapid-onset disasters human beings are exposed to. The losses they bring are sufficient reason for careful consideration in the design of structures and facilities. Seismic hazard analysis is one tool that can be used for earthquake-resistant design, and ground response analysis is one of its most crucial and decisive steps. The Rapar district of Kutch, Gujarat falls in Zone 5 of the earthquake zoning map of India and thus has high seismicity, which is why it was selected for analysis. In total, 8 bore-log datasets were studied at different locations in and around Rapar district. The relevant soil engineering properties were analyzed, and empirical correlations were used to calculate the maximum shear modulus (Gmax) and shear wave velocity (Vs) for the soil layers. The soil was modeled using the pressure-dependent Modified Kondner-Zelasko (MKZ) model, with the Seed and Idriss (1970) reference curve used for fitting sand and the Darendeli (2001) curve for clay. Both equivalent linear (EL) and non-linear (NL) ground response analyses were carried out with the Masing hysteretic re/unloading formulation for comparison, using the commercially available DEEPSOIL v. 7.0 software. This study attempts to quantify the ground response in terms of the acceleration time history generated at the top of the soil column, the response spectra at 5% damping, and the Fourier amplitude spectrum. Moreover, the variation with depth of peak ground acceleration (PGA), maximum displacement, maximum strain (in %), maximum stress ratio, and mobilized shear stress is also calculated. From the study, the PGA values estimated in rocky strata are nearly the same as the bedrock motion, and marginal amplification is observed in sandy silts and silty clays by both analyses. The NL analysis gives more conservative maximum-displacement results than the EL analysis.
The maximum strains predicted by the two analyses are very close to each other. Overall, the NL analysis is more efficient and realistic because it follows the actual hyperbolic stress-strain relationship, considers stiffness degradation, and mobilizes the stresses generated due to pore water pressure.
Keywords: DEEPSOIL v 7.0, ground response analysis, pressure-dependent Modified Kondner-Zelasko (MKZ) model, response spectra, shear wave velocity
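The small-strain soil properties mentioned above follow standard relations: Gmax is computed from the shear wave velocity and soil density, and Vs itself is often estimated from SPT blow counts via empirical power-law correlations. A minimal sketch, where the SPT coefficients follow the Imai and Tonouchi (1982) form and are only illustrative (the abstract does not name the correlations actually used):

```python
def gmax_from_vs(vs_mps, density_kgm3):
    """Small-strain shear modulus Gmax = rho * Vs^2, in Pa."""
    return density_kgm3 * vs_mps ** 2

def vs_from_spt(n_value, a=97.0, b=0.314):
    """Empirical Vs (m/s) from an SPT blow count N, Vs = a * N^b.
    The default coefficients are one published pair (Imai & Tonouchi,
    1982, all soils) and are shown here only as an example."""
    return a * n_value ** b
```

For example, a layer with Vs = 200 m/s and density 1800 kg/m3 gives Gmax = 72 MPa, which would then feed the MKZ backbone curve for that layer.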
Procedia PDF Downloads 136
140 Driving Environmental Quality through Fuel Subsidy Reform in Nigeria
Authors: O. E. Akinyemi, P. O. Alege, O. O. Ajayi, L. A. Amaghionyediwe, A. A. Ogundipe
Abstract:
Nigeria, an oil-producing developing country in Africa, is one of many countries that have subsidized the consumption of fossil fuel. Despite the numerous advantages of this policy, ranging from increased energy access, fostering economic and industrial development, and protecting poor households from oil price shocks to political considerations, fuel subsidies have been found to impose economic costs, to be wasteful and inefficient, to create price distortions, to discourage investment in the energy sector, and to contribute to environmental pollution. These negative consequences, coupled with the fact that the policy has not been very successful at achieving some of its stated objectives, led a number of organisations and countries, such as the Group of 7 (G7), the World Bank, the International Monetary Fund (IMF), the International Energy Agency (IEA), and the Organisation for Economic Co-operation and Development (OECD), to call for a global effort towards reforming fossil fuel subsidies. This call became necessary in view of seeking ways to harmonise existing policies that may, by design, hamper current efforts at tackling environmental concerns such as climate change, in addition to driving a green growth strategy and low-carbon development in achieving sustainable development; the energy sector is identified as playing a vital role here. This study thus investigates the prospects of using fuel subsidy reform as a viable tool for driving an economy that de-emphasizes carbon growth in Nigeria. The method used is the Johansen and Engle-Granger two-step co-integration procedure, applied to investigate the existence or otherwise of a long-run equilibrium relationship over the period 1971 to 2011. The theoretical framework is rooted in the Environmental Kuznets Curve (EKC) hypothesis. Developing three scenarios (subsidy payment, no subsidy payment, and effective subsidy), the study found evidence supporting a long-run sustainable equilibrium model.
Also, the estimation results show that the first and second scenarios do not significantly influence the indicator of environmental quality. The implication is that, in reforming fuel subsidy to drive environmental quality in an economy like Nigeria, a strong and effective regulatory framework (the measure that was interacted with fuel subsidy to yield the effective subsidy) is essential.
Keywords: environmental quality, fuel subsidy, green growth, low carbon growth strategy
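The Engle-Granger two-step procedure named above can be sketched in a simplified form: step 1 estimates the long-run relation by OLS, and step 2 runs a Dickey-Fuller-style regression on the residuals, where a clearly negative coefficient suggests mean reversion and hence possible cointegration. This is a minimal NumPy sketch on synthetic data, omitting the deterministic terms, lag augmentation, and critical-value tables a real test requires:

```python
import numpy as np

def engle_granger_two_step(y, x):
    """Simplified two-step check for cointegration between y and x.

    Step 1: long-run OLS  y_t = a + b * x_t + e_t
    Step 2: regress the differenced residuals on their own lag,
            delta_e_t = gamma * e_{t-1} + u_t. A gamma well below
            zero indicates mean-reverting residuals."""
    b, a = np.polyfit(x, y, 1)          # slope, intercept
    resid = y - (a + b * x)
    de = np.diff(resid)                 # delta_e_t
    lag = resid[:-1]                    # e_{t-1}
    gamma = float(np.dot(lag, de) / np.dot(lag, lag))
    return a, b, gamma
```

On a pair of series sharing a common stochastic trend, gamma comes out strongly negative; statistical significance would still have to be judged against Engle-Granger critical values, which this sketch does not compute.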
Procedia PDF Downloads 326
139 H2 Permeation Properties of a Catalytic Membrane Reactor in Methane Steam Reforming Reaction
Authors: M. Amanipour, J. Towfighi, E. Ganji Babakhani, M. Heidari
Abstract:
A cylindrical alumina microfiltration membrane (GMITM Corporation; inside diameter = 9 mm, outside diameter = 13 mm, length = 50 mm) with an average pore size of 0.5 micrometer and a porosity of about 0.35 was used as the support for the membrane reactor. This support was soaked in boehmite sols whose mean particle size was adjusted in the range of 50 to 500 nm by carefully controlling the hydrolysis time, and was calcined at 650 °C for two hours. This process was repeated with different boehmite solutions in order to achieve an intermediate layer with an average pore size of about 50 nm. The resulting substrate was then coated with a thin, dense layer of silica by the counter-current chemical vapour deposition (CVD) method. A boehmite sol with 10 wt.% nickel, prepared by a standard procedure, was used to make the catalytic layer, which was characterized by BET, SEM, and XRD analysis. The catalytic membrane reactor was placed in an experimental setup to evaluate its permeation and hydrogen separation performance for the steam reforming reaction. The setup consisted of a tubular module in which the membrane was fixed, with the reforming reaction occurring on the inner side of the membrane. A methane stream diluted with nitrogen, together with deionized water at a steam-to-carbon (S/C) ratio of 3.0, entered the reactor after it had been heated to 500 °C at a rate of 2 °C/min and the catalytic layer had been reduced in the presence of hydrogen for 2.5 hours. Nitrogen was used as the sweep gas on the outer side of the reactor. Any liquid produced was trapped and separated at the reactor exit by a cold trap, and the product gases were analyzed by an on-line gas chromatograph (Agilent 7890A) to measure the total CH4 conversion and H2 permeation. BET analysis indicated a uniform pore size distribution for the catalyst, with an average pore size of 280 nm and an average surface area of 275 m2 g-1.
Single-component permeation tests were carried out for hydrogen, methane, and carbon dioxide in the temperature range of 500-800 °C, and the results showed almost the same hydrogen permeance and hydrogen selectivity values as the composite membrane without the catalytic layer. The performance of the catalytic membrane was evaluated by applying it as a membrane reactor for the methane steam reforming reaction at a gas hourly space velocity (GHSV) of 10,000 h−1 and 2 bar. CH4 conversion increased from 50% to 85% as the reaction temperature increased from 600 °C to 750 °C, which is well above the equilibrium curve at the reaction conditions, but slightly lower than that of a membrane reactor with a packed nickel catalytic bed, whose surface area is higher than that of the catalytic layer.
Keywords: catalytic membrane, hydrogen, methane steam reforming, permeance
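The permeance and selectivity figures reported from single-gas tests are conventionally defined as molar flux per unit transmembrane pressure difference, with ideal selectivity as the ratio of two single-gas permeances. A minimal sketch of those definitions (the membrane area below uses the inner-diameter dimensions quoted in the abstract; the flow and pressure values in the usage note are illustrative, not measured data):

```python
import math

def permeance(molar_flow_mol_s, area_m2, dp_pa):
    """Permeance in mol m^-2 s^-1 Pa^-1: molar flow through the
    membrane divided by membrane area and pressure difference."""
    return molar_flow_mol_s / (area_m2 * dp_pa)

def ideal_selectivity(perm_h2, perm_other):
    """Ideal (single-gas) H2 selectivity: ratio of permeances."""
    return perm_h2 / perm_other

# Inner membrane area of the cylindrical support quoted above:
# diameter 9 mm, length 50 mm.
AREA = math.pi * 0.009 * 0.05  # about 1.41e-3 m^2
```

For instance, a hypothetical H2 permeance of 2e-6 against a CH4 permeance of 2e-8 would give an ideal H2/CH4 selectivity of 100.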
Procedia PDF Downloads 256
138 Detailed Analysis of Mechanism of Crude Oil and Surfactant Emulsion
Authors: Riddhiman Sherlekar, Umang Paladia, Rachit Desai, Yash Patel
Abstract:
A number of surfactants that exhibit ultra-low interfacial tension and excellent microemulsion phase behavior with crude oils of low to medium gravity are not sufficiently soluble at optimum salinity to produce stable aqueous solutions. Such solutions often show phase separation after a few days at reservoir temperature, which defeats the purpose, since this time is short compared with the residence time of a surfactant flood in a reservoir. The addition of polymer often exacerbates the problem, although the poor stability of the surfactant at high salinity remains the pivotal issue. Surfactants such as SDS and CTAB, with large hydrophobes, produce the lowest IFT but are often not sufficiently water soluble at the desired salinity, so hydrophilic co-solvents and/or co-surfactants are needed to make the surfactant-polymer solution stable at that salinity. This study contrasts the effect of adding a co-solvent on the stability of a surfactant-oil emulsion. The idea is to use a co-surfactant to increase the stability of the emulsion. Stability is enhanced by the creation of a micro-emulsion, which is verified both visually and with a particle size analyzer at varying concentrations of salinity, surfactant, and co-surfactant. A lab experimental method is described in detail to permit readers to reproduce all results. The stability of the oil-water emulsion is observed with respect to time, temperature, salinity of the brine, and concentration of the surfactant. The nonionic surfactant TX-100, when used as a co-surfactant, increases the stability of the oil-water emulsion. The stability of the prepared emulsion is checked by observing the particle size distribution: for a stable emulsion, the peak of the volume% versus particle size curve lies at particle sizes of 5-50 nm, while for an unstable emulsion larger particles are observed.
UV-visible spectroscopy is also used to identify the fraction of the oil that plays an important role in the formation of micelles in the stable emulsion. This is important because the study will help decide the applicability of surfactant-based EOR methods for a reservoir containing a specific type of crude. The use of a nonionic surfactant as a co-surfactant would also increase the efficiency of surfactant EOR. With the decline in oil discoveries over the last decades, it is believed that EOR technologies will play a key role in meeting energy demand in the years to come. Taking this into consideration, the work focuses on optimizing secondary recovery (water flooding) with the help of surfactants and/or co-surfactants by creating the desired conditions in the reservoir.
Keywords: co-surfactant, enhanced oil recovery, micro-emulsion, surfactant flooding
Procedia PDF Downloads 252
137 Groundwater Potential Mapping using Frequency Ratio and Shannon’s Entropy Models in Lesser Himalaya Zone, Nepal
Authors: Yagya Murti Aryal, Bipin Adhikari, Pradeep Gyawali
Abstract:
The Lesser Himalaya zone of Nepal consists of thrust and fold belts, which play an important role in the sustainable management of groundwater in the Himalayan regions. The study area is located in the Dolakha and Ramechhap Districts of Bagmati Province, Nepal. Geologically, these districts are situated in the Lesser Himalayas and partly encompass the Higher Himalayan rock sequence, which includes low-grade to high-grade metamorphic rocks. Following the Gorkha Earthquake in 2015, numerous springs dried up, and many others are currently being depleted owing to distortion of the natural groundwater flow. The primary objective of this study is to identify potential groundwater areas and determine suitable sites for artificial groundwater recharge. Two distinct statistical approaches were used to develop the models: the frequency ratio (FR) and Shannon entropy (SE) methods. The study utilized both primary and secondary datasets and incorporated significant role and controlling factors derived from fieldwork and literature review. Field data collection involved a spring inventory, soil analysis, lithology assessment, and a hydro-geomorphology study. Additionally, slope, aspect, drainage density, and lineament density were extracted from a digital elevation model (DEM) using GIS and transformed into thematic layers. For training and validation, the 114 springs were divided in a 70/30 ratio, with an equal number of non-spring pixels. After weights were assigned to each class on the basis of the two proposed models, a groundwater potential map was generated in GIS, classifying the area into five levels: very low, low, moderate, high, and very high. The models reveal that over 41% of the area falls into the low and very low potential categories, while only 30% of the area shows a high probability of groundwater potential. Model performance was assessed using the area under the curve (AUC).
The success-rate AUC values for the FR and SE methods were 78.73% and 77.09%, respectively, and the prediction-rate AUC values were 76.31% and 74.08%. The results indicate that the FR model has greater predictive capability than the SE model in this case study.
Keywords: groundwater potential mapping, frequency ratio, Shannon's entropy, Lesser Himalaya Zone, sustainable groundwater management
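The frequency ratio weighting used above has a standard definition: for each class of a thematic layer, FR is the percentage of spring occurrences in the class divided by the percentage of the study area occupied by the class, with FR > 1 marking classes more favourable for groundwater than average. A minimal sketch of that ratio (the counts in the usage example are invented for illustration, not the study's data):

```python
def frequency_ratio(springs_in_class, total_springs,
                    pixels_in_class, total_pixels):
    """FR for one class of a thematic layer:
    (share of springs falling in the class) / (share of area in the class).
    FR > 1 indicates above-average groundwater favourability."""
    spring_share = springs_in_class / total_springs
    area_share = pixels_in_class / total_pixels
    return spring_share / area_share
```

For example, a slope class holding 20 of 80 training springs on only one-eighth of the area gives FR = 0.25 / 0.125 = 2.0; summing the FR values of the classes a pixel falls into across all layers yields its groundwater potential index.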
Procedia PDF Downloads 81
136 The Quantitative Analysis of the Influence of the Superficial Abrasion on the Lifetime of the Frog Rail
Authors: Dong Jiang
Abstract:
The turnout is essential railway equipment and, on account of increasingly serious frog rail failures, one of the most heavily demanded items of railway infrastructure. In cooperation with the German company DB Systemtechnik AG, our research team focuses on the quantitative analysis of frog rails in order to predict their lifetimes; suggestions for timely and effective maintenance are also made to improve the economy of the frog rails. The lifetime of the frog rail depends strongly on the internal damage of the running surface, up to the point at which breakages occur. On the basis of the Hertzian theory of contact mechanics, the dynamic loads on the running surface are calculated in the form of the contact pressures on the running surface and the equivalent tensile stress inside it. Following material mechanics, the strength of the frog rail is determined quantitatively in the form of a stress-cycle (S-N) curve. Under the interaction between the dynamic loads and the strength, the internal damage of the running surface is calculated by means of the linear damage hypothesis of Miner's rule. The emergence of the first breakage on the running surface is defined as the failure criterion, at which the damage degree equals 1.0. From the microscopic perspective, the running surface of the frog rail is divided into numerous segments for detailed analysis; the internal damage of a segment grows slowly at first and then disproportionately quickly until the breakage emerges. From the macroscopic perspective, the internal damage of the running surface develops almost linearly over the lifetime, so the lifetime of the frog rail can be predicted simply from the slope of this linear growth. However, the superficial abrasion plays an essential role in the internal damage results from both perspectives.
The influence of the superficial abrasion on the lifetime is described in the form of the abrasion rate, which has two contradictory effects. On the one hand, an insufficient abrasion rate concentrates the damage accumulation at the same position below the running surface, accelerating rail failure. On the other hand, an excessive abrasion rate hastens the disappearance of the head-hardened surface of the frog rail, resulting in untimely breakage at the surface. Thus, the relationship between abrasion rate and lifetime divides into an initial phase of increasing lifetime and a subsequent phase of more rapidly decreasing lifetime as the abrasion rate continues to grow. By balancing these two effects, the critical abrasion rate that yields the optimal lifetime is discussed.
Keywords: breakage, critical abrasion rate, frog rail, internal damage, optimal lifetime
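The damage accumulation step named above, Miner's rule combined with an S-N curve, can be sketched as follows. The Basquin-type S-N coefficients here are illustrative placeholders, not the frog rail material data from the study:

```python
def miner_damage(cycle_counts, cycles_to_failure):
    """Palmgren-Miner linear damage sum D = sum(n_i / N_i).
    Failure (first breakage) is predicted when D reaches 1.0."""
    return sum(n / big_n for n, big_n in zip(cycle_counts, cycles_to_failure))

def sn_cycles_to_failure(stress_amplitude, c=1e12, m=3.0):
    """Basquin-type S-N curve N = C * S^(-m), giving the number of
    cycles to failure at a stress amplitude S. C and m are assumed
    values for illustration only."""
    return c * stress_amplitude ** (-m)
```

For example, 100 cycles at a stress level that allows 1000 cycles plus 200 cycles at a level allowing 400 cycles accumulate D = 0.1 + 0.5 = 0.6, i.e. 60% of the damage budget before the failure criterion D = 1.0 is reached; the near-linear macroscopic damage growth described above is what makes extrapolating this sum to a lifetime straightforward.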
Procedia PDF Downloads 225