Search results for: modified predictive routing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3569

839 Understanding the Qualitative Nature of Product Reviews by Integrating Text Processing Algorithm and Usability Feature Extraction

Authors: Cherry Yieng Siang Ling, Joong Hee Lee, Myung Hwan Yun

Abstract:

Usability has become a basic requirement from the consumer's perspective; a product that fails to meet it is simply abandoned. Identifying usability issues from the quantitative and qualitative data collected during usability testing and evaluation aids the product design process, yet the lack of research on analysis methodologies for qualitative text data in the usability field limits the potential of these data for more useful applications. The rapid development of data analysis fields such as natural language processing, which enables computers to understand human language, and machine learning, which provides predictive models and clustering tools, now makes the analysis of such qualitative text data feasible. Therefore, this research studies the applicability of text processing algorithms to qualitative text data collected from usability activities. It uses datasets from an LG neckband headset usability experiment, consisting of headset survey text data, subject data, and product physical data. The analysis procedure, built around the text-processing algorithm, includes training the comments into a vector space, labeling them with the subject and product physical feature data, and clustering to validate the resulting comment-vector clusters. The results identify 'volume and music control button' as the usability feature that matches best with the clusters of comment vectors: the centroid comments of one cluster emphasized button positions, while those of the other emphasized button interface issues. When the volume and music control buttons were designed separately, participants experienced less confusion, and the comments mentioned only the buttons' positions. When the two functions shared a single button, participants experienced interface issues such as unclear operating methods and confusion between the buttons' functions. The relevance of the cluster centroid comments to the extracted feature demonstrates the capability of text processing algorithms for analyzing qualitative text data from usability testing and evaluation.
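The comment-clustering pipeline described above can be sketched in a few lines. This is an illustrative toy, not the study's implementation: the bag-of-words embedding, the two-cluster k-means, and the sample comments are all assumptions made for demonstration.

```python
import numpy as np

# Hypothetical headset comments standing in for the survey text data.
comments = [
    "button position near collar",
    "button position too low",
    "confusing button functions",
    "button functions mixed up",
]

# Embed each comment as a bag-of-words count vector over the vocabulary.
vocab = sorted({w for c in comments for w in c.split()})
X = np.array([[c.split().count(w) for w in vocab] for c in comments], float)

# Plain 2-means clustering: seed the centroids with two comments, then
# alternate assignment and centroid updates until the labels stabilise.
centroids = X[[0, 2]].copy()
for _ in range(10):
    labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    centroids = np.array([X[labels == k].mean(0) for k in (0, 1)])

# The comment nearest each centroid plays the role of the
# "centroid comment" discussed in the abstract.
for k in (0, 1):
    idx = np.where(labels == k)[0]
    d = ((X[idx] - centroids[k]) ** 2).sum(-1)
    print("cluster", k, "centroid comment:", comments[idx[np.argmin(d)]])
```

With these toy comments, the position-related comments and the function-confusion comments separate into the two clusters, mirroring the split the abstract reports.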

Keywords: usability, qualitative data, text-processing algorithm, natural language processing

Procedia PDF Downloads 260
838 Government Size and Economic Growth: Testing the Non-Linear Hypothesis for Nigeria

Authors: R. Santos Alimi

Abstract:

Using time-series techniques, this study empirically tested the validity of the existing theory which stipulates that there is a nonlinear relationship between government size and economic growth, such that government spending is growth-enhancing at low levels but growth-retarding at high levels, with the optimal size occurring somewhere in between. The study employed three estimation equations. First, for the size of government, two measures were considered: (i) the share of total expenditures in gross domestic product, and (ii) the share of recurrent expenditures in gross domestic product. Second, the study adopted real GDP (without the government expenditure component) as a variant measure of economic growth, in addition to real total GDP, when estimating the optimal level of government expenditure. The study is based on annual Nigerian country-level data for the period 1970 to 2012. Estimation results show that the inverted U-shaped curve exists for both measures of government size, with estimated optimum shares of 19.81% and 10.98%, respectively. With the adoption of real GDP (without the government expenditure component), the optimum government size was found to be 12.58% of GDP, whereas the actual share of government spending on average (2000-2012) was about 13.4%. This study adds to the literature confirming that an optimal government size exists not only for developed economies but also for a developing economy like Nigeria. Thus, a public intervention threshold level that fosters economic growth is a reality; beyond this point, economic growth should be left in the hands of the private sector. This finding has significant implications for the appraisal of government spending and budgetary policy design.
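The inverted-U test described above amounts to fitting a quadratic of growth on government size and locating the vertex. A minimal sketch on synthetic data (all numbers invented; only the method mirrors the study):

```python
import numpy as np

# Synthetic illustration of the inverted-U (Armey-curve) test: regress
# growth on government size and its square, then locate the turning
# point -b1 / (2*b2). Data are made up for illustration only.
rng = np.random.default_rng(0)
size = rng.uniform(5, 35, 200)            # govt expenditure share of GDP (%)
growth = 4 - 0.01 * (size - 20) ** 2 + rng.normal(0, 0.2, 200)

# OLS fit of growth = b0 + b1*size + b2*size^2
A = np.column_stack([np.ones_like(size), size, size ** 2])
b0, b1, b2 = np.linalg.lstsq(A, growth, rcond=None)[0]

optimum = -b1 / (2 * b2)                  # vertex of the fitted parabola
print(f"estimated optimal government size: {optimum:.1f}% of GDP")
```

A negative coefficient on the squared term confirms the inverted U; the vertex is the estimated optimum share.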

Keywords: public expenditure, economic growth, optimum level, fully modified OLS

Procedia PDF Downloads 398
837 Intentionality and Context in the Paradox of Reward and Punishment in the Meccan Surahs

Authors: Asmaa Fathy Mohamed Desoky

Abstract:

This research infers intentionality and context from the verses of the Meccan surahs that contain the paradox of reward and punishment, applied to the duality of disbelief and faith. The Holy Quran is the most important sacred linguistic reference in the Arabic language because it embodies all the rules of the language in addition to its linguistic miracle. The Quranic text is an intentional text of the first order, sent down to convey something to the recipient (Muhammad first, who then communicates it to Muslims) and to influence and convince him, which opens the door to much Ijtihad driven by the desire to reach the will of Allah and His intention behind His words. Intentionality is one of the most important deliberative terms, but it must be modified to suit Quranic discourse, especially since intentionality is related to intention; as noted above, it would otherwise turn the reader or recipient into a predictor of the unseen, which does not correspond to the Quranic discourse. Hence, this research identifies a set of dualities to be studied in order to clarify their meaning according to the opinions of previous interpreters, in accordance with the sanctity of the Quranic discourse and its intentional relation to the dualities of reward and punishment, such as the duality of disbelief and faith. This duality combines opposites and paradox on one level: it may be an external paradox between action and reaction, an internal paradox in matters related to faith, or a situational paradox in a specific event or fact. It should be noted that the intention of the Quranic text is fully realized in form and content, in whole and in part. The research includes a presentation of applied models of the issues of intention and context that appear in the verses of the paradox of reward and punishment in the Meccan surahs of the Quran.

Keywords: intentionality, context, the paradox, reward, punishment, Meccan surahs

Procedia PDF Downloads 47
836 Enhancement of 2,4-Dichlorophenoxyacetic Acid Solubility via Solid Dispersion Technique

Authors: Tamer M. Shehata, Heba S. Elsewedy, Mashel Al Dosary, Alaa Elshehry, Mohamed A. Khedr, Maged E. Mohamed

Abstract:

Objective: 2,4-Dichlorophenoxyacetic acid (2,4-D) is a well-known herbicide widely used as a weed killer. Recently, 2,4-D was rediscovered as a new anti-inflammatory agent through in silico as well as in vivo experiments. However, the poor solubility of 2,4-D could present problems during pharmaceutical development, in addition to lowering bioavailability. Solid dispersion (SD) refers to a group of solid products consisting of at least two different components, usually a hydrophobic drug and a hydrophilic matrix, and is a well-known technique for enhancing drug solubility. Therefore, selecting SD as a tool for enhancing 2,4-D solubility could be of great interest to the formulator. Method: In our project, several polymers (such as PEG, HPMC, and citric acid) were investigated, along with the effect of drug-polymer ratio on solubility. Drug-polymer interaction was evaluated by both Fourier Transform Infrared spectroscopy (FTIR) and Differential Scanning Calorimetry (DSC). Finally, the best preparation was evaluated in vivo using the carrageenan-induced rat hind paw edema model. Results: Citric acid and 2,4-D in a ratio of 0.75:1 modified the dissolution profile of the drug. The FTIR results indicated no significant chemical interaction; however, DSC showed a shift in the drug's melting point. Finally, the carrageenan-induced rat hind paw edema test showed a significant reduction with the drug solid dispersion in comparison to the pure drug, indicating rapid and complete absorption of the drug in solid dispersion form. Conclusion: Solid dispersion technology can be utilized efficiently to enhance the solubility of 2,4-D.

Keywords: solid dispersion, 2,4-D solubility, carrageenan-induced edema

Procedia PDF Downloads 416
835 Land-Use Suitability Analysis for Merauke Agriculture Estates

Authors: Sidharta Sahirman, Ardiansyah, Muhammad Rifan, Edy-Melmambessy

Abstract:

Merauke district in Papua, Indonesia, has a strategic position and natural potential for the development of the agricultural industry. The development of agriculture in this region is being accelerated as part of the Indonesian Government's declaration of Merauke as one of the nation's future food barns. Therefore, a land-use suitability analysis for Merauke needs to be performed so that the mapping of future agriculture-based industries can be done optimally. In this research, a case study was carried out in Semangga sub-district. The objective of this study is to determine the suitability of Merauke land for several food crops. A modified agro-ecological zoning approach is applied to reach this objective: land cover derived from satellite imagery is combined with soil, water, and climate survey results to produce a preliminary zoning. Considering the special characteristics of the Merauke community, the resulting agricultural zoning maps are then combined with socio-economic and cultural information, such as the customary rights of local residents and their local food patterns, to determine the final zoning map for the agricultural industry in Merauke. This paper presents the results of the first year of a two-year research project funded by the Indonesian Government through the MP3EI scheme. It shares the findings of the land cover studies, the distribution of soil physical and chemical parameters, and the suitability analysis of Semangga sub-district for five different food plants.

Keywords: agriculture, agro-ecological, Merauke, zoning

Procedia PDF Downloads 283
834 Unveiling Comorbidities in Irritable Bowel Syndrome: A UK BioBank Study utilizing Supervised Machine Learning

Authors: Uswah Ahmad Khan, Muhammad Moazam Fraz, Humayoon Shafique Satti, Qasim Aziz

Abstract:

Approximately 10-14% of the global population experiences a functional disorder known as irritable bowel syndrome (IBS). The disorder is defined by persistent abdominal pain and an irregular bowel pattern. IBS significantly impairs work productivity and disrupts patients' daily lives and activities. Although IBS is widespread, its underlying pathophysiology is still incompletely understood. This study aims to help characterize the phenotype of IBS patients by differentiating the comorbidities found in IBS patients from those in non-IBS patients using machine learning algorithms. We extracted samples coding for IBS from the UK BioBank cohort and randomly selected patients without a code for IBS to create a total sample size of 18,000. We selected the codes for comorbidities of these cases from 2 years before and after their IBS diagnosis and compared them to the comorbidities in the non-IBS cohort. Machine learning models, including Decision Trees, Gradient Boosting, Support Vector Machine (SVM), AdaBoost, Logistic Regression, and XGBoost, were employed to assess their accuracy in predicting IBS. The most accurate model was then chosen to identify the features associated with IBS: we used XGBoost feature importance as the feature selection method and applied the different models to the top 10% of features, which numbered 50. Gradient Boosting, Logistic Regression, and XGBoost yielded a diagnosis of IBS with optimal accuracies of 71.08%, 71.427%, and 71.53%, respectively. The comorbidities most closely associated with IBS included gut diseases (haemorrhoids, diverticular diseases), atopic conditions (asthma), and psychiatric comorbidities (depressive episodes or disorder, anxiety). This finding emphasizes the need for a comprehensive approach when evaluating the phenotype of IBS, suggesting the possibility of identifying new subsets of IBS rather than relying solely on the conventional classification based on stool type. Additionally, our study demonstrates the potential of machine learning algorithms to predict the development of IBS from comorbidities, which may enhance diagnosis and facilitate better management of modifiable risk factors for IBS. Further research is necessary to confirm our findings and establish cause and effect. Alternative feature selection methods and even larger and more diverse datasets may lead to more accurate classification models. Despite these limitations, our findings highlight the effectiveness of Logistic Regression and XGBoost in predicting IBS diagnosis.
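As a rough illustration of the approach above -- predicting a binary diagnosis from binary comorbidity codes and reading feature importance off the fitted model -- here is a minimal logistic-regression sketch on synthetic data. The study itself used scikit-learn and XGBoost on UK BioBank records; everything below (sample size, feature count, weights) is invented for demonstration.

```python
import numpy as np

# Rows are patients, columns are hypothetical binary comorbidity codes.
rng = np.random.default_rng(1)
n, p = 2000, 5
X = rng.integers(0, 2, (n, p)).astype(float)
true_w = np.array([1.5, 1.0, 0.8, 0.0, 0.0])   # only the first 3 codes matter
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_w - 1.5)))).astype(float)

# Fit logistic regression by plain batch gradient descent.
w, b = np.zeros(p), 0.0
for _ in range(500):
    pred = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (pred - y) / n
    b -= 0.1 * (pred - y).mean()

# Training accuracy, and "importance" read off the fitted weights --
# analogous to the feature-selection step described in the abstract.
acc = (((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == y).mean()
print("accuracy:", round(acc, 3), "top features:", np.argsort(-np.abs(w))[:3])
```

The informative comorbidity columns end up with the largest absolute weights, while the uninformative ones stay near zero.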

Keywords: comorbidities, disease association, irritable bowel syndrome (IBS), predictive analytics

Procedia PDF Downloads 91
833 Psychological Variables Predicting Academic Achievement in Argentinian Students: Scales Development and Recent Findings

Authors: Fernandez liporace, Mercedes Uriel Fabiana

Abstract:

Academic achievement in high school and college students is currently a matter of concern. National and international assessments show high schoolers as low achievers, and local statistics indicate alarming dropout percentages at this educational level. Even so, 80% of those students intend to attend higher education. On the other hand, admission to Public National Universities is free and non-selective, with no examination procedures. Though initial registrations are massive (307,894 students), only 50% of freshmen pass their first-year classes, and 23% achieve a degree. Low performance tends to be a common problem. Hence, freshman adaptation and adjustment, dropout, and low academic achievement arise as topics on the agenda. Besides, the hinge between high school and college must be examined in depth in order to achieve an integrated and successful path from one educational stratum to the other. Our psychological research addresses this situation along two main lines. The first concerns psychometric scales: designing and/or adapting tests and examining their technical properties and theoretical validity (e.g., academic motivation, learning strategies, learning styles, coping, perceived social support, parenting styles and parental consistency, paradoxical personality as correlated with creative skills, psychopathological symptomatology). The second line emphasizes relationships among the variables measured by these scales, formulating and testing predictive models of academic achievement and establishing differences by sex, age, educational level (high school vs. college), and career. Pursuing these goals, several studies were carried out in recent years, reporting findings and producing assessment technology useful for detecting students academically at risk as well as good achievers. Multiple samples totalling more than 3,500 participants (2,500 from college and 1,000 from high school) were analysed, including descriptive, correlational, group-difference, and explicative designs. A brief summary of the most relevant results is presented. Providing information to design specific interventions according to each learner's features and his or her educational environment comes up as a mid-term accomplishment. Furthermore, that information might be helpful for adapting curricula by career, as well as for implementing special didactic strategies differentiated by sex and personal characteristics.

Keywords: academic achievement, higher education, high school, psychological assessment

Procedia PDF Downloads 348
832 Opto-Electronic Properties and Structural Phase Transition of Filled-Tetrahedral NaZnAs

Authors: R. Khenata, T. Djied, R. Ahmed, H. Baltache, S. Bin-Omran, A. Bouhemadou

Abstract:

We predict the structural, phase transition, and opto-electronic properties of the filled-tetrahedral (Nowotny-Juza) compound NaZnAs in this study. Calculations are carried out by employing the full-potential (FP) linearized augmented plane wave (LAPW) plus local orbitals (lo) scheme developed within the framework of density functional theory (DFT). The exchange-correlation energy/potential (EXC/VXC) functional is treated using the Perdew-Burke-Ernzerhof (PBE) parameterization of the generalized gradient approximation (GGA). In addition, the Tran-Blaha (TB) modified Becke-Johnson (mBJ) potential is incorporated to obtain better precision for the optoelectronic properties. Geometry optimization is carried out to obtain reliable results for the total energy as well as the other structural parameters for each phase of the NaZnAs compound. The order of the structural transitions as a function of pressure is found to be: Cu2Sb type → β → α phase. Our calculated electronic energy band structures for all structural phases, at the level of both PBE-GGA and the mBJ potential, show that NaZnAs is a direct (Γ–Γ) band gap semiconductor. However, compared to PBE-GGA, the mBJ potential reproduces higher values of the fundamental band gap. Regarding the optical properties, calculations of the real and imaginary parts of the dielectric function, refractive index, reflectivity coefficient, absorption coefficient, and energy-loss function spectra are performed over photon energies ranging from 0.0 to 30.0 eV, with the incident radiation polarized parallel to both the [100] and [001] crystalline directions.

Keywords: NaZnAs, FP-LAPW+lo, structural properties, phase transition, electronic band-structure, optical properties

Procedia PDF Downloads 408
831 Nanoparticulated (U,Gd)O2 Characterization

Authors: A. Fernandez Zuvich, I. Gana Watkins, H. Zolotucho, H. Troiani, A. Caneiro, M. Prado, A. L. Soldati

Abstract:

The study of actinide nanoparticles (NPs) has attracted the attention of the scientific community, not only because of the lack of information about their ecotoxicological effects but also because the use of NPs could open a new path in the production of nuclear energy. Indeed, it was recently demonstrated that sintered pellets of UO2 NPs exhibit closed porosity with improved fission gas retention and radiation tolerance, improved mechanical properties, and less degradation of thermal conductivity upon use, making them an interesting option for new nuclear fuels. In this work, we used a combination of diffraction and microscopy tools to characterize the morphology, crystalline structure, and composition of UO2 nanoparticles doped with 10 wt% Gd2O3. The particles were synthesized by a modified sol-gel method at low temperatures. X-ray Diffraction (XRD) studies determined the presence of a unique phase with the cubic structure and Fm3m space group, supporting the view that Gd atoms substitute for U atoms in the fluorite structure of UO2. In addition, Field Emission Gun Scanning (FEG-SEM) and Transmission (FEG-TEM) Electron Microscopy images revealed the presence of micrometric agglomerates of nanoparticles, with rounded morphology and an average crystallite size < 50 nm. Energy Dispersive Spectroscopy (EDS) coupled to TEM determined the presence of Gd in all the analyzed crystallites. Besides, FEG-SEM-EDS showed a homogeneous concentration distribution at the micrometer scale, indicating that the small size of the crystallites compensates for the variation in composition by averaging over a large number of crystallites. These combined techniques thus proved essential for determining the details of morphology and composition distribution at the sub-micrometer scale, and they set a standard for developing and analyzing nanoparticulated nuclear fuels.

Keywords: actinide nanoparticles, burnable poison, nuclear fuel, sol-gel

Procedia PDF Downloads 309
830 Strength Properties of Cement Mortar with Dark Glass Waste Powder as a Partial Sand Replacement

Authors: Ng Wei Yan, Lim Jee Hock, Lee Foo Wei, Mo Kim Hung, Yip Chun Chieh

Abstract:

The burgeoning accumulation of glass waste in Malaysia, particularly from the food and beverage industry, has become a prominent environmental concern, with disposal sites reaching saturation. This study introduces a distinct approach to addressing the twin challenges of landfill scarcity and natural resource conservation by repurposing discarded glass bottle waste into a viable construction material. The research presents a comprehensive evaluation of the strength characteristics of cement mortar when dark glass waste powder is used as a partial sand replacement. The experimental investigation probes the density, flow spread diameter, and key strength parameters—including compressive, splitting tensile, and flexural strengths—of the modified cement mortar. Remarkably, results indicate that a full replacement of sand with glass waste powder significantly improves the material's strength attributes. A specific mixture with a cement/sand/water ratio of 1:5:1.24 was found to be optimal, yielding an impressive compressive strength of 7 MPa at the 28-day mark, accompanied by a favourable 200 mm spread diameter in flow table tests. The findings of this study underscore the dual benefits of utilizing glass waste powder in cement mortar: mitigating Malaysia's glass waste dilemma and enhancing the performance of construction materials such as bricks and concrete products. Consequently, the research validates the premise that increasing the incorporation of glass waste as a sand substitute promotes not only environmental sustainability but also material innovation in the construction industry.

Keywords: glass waste, strength properties, cement mortar, environmentally friendly

Procedia PDF Downloads 39
829 Preschool Teachers' Teaching Performance in Relation to Their Technology and 21st Century Skills

Authors: Vida Dones-Jimenez

Abstract:

The main purpose of this study is to determine preschool teachers' technology and 21st-century skills and their relation to teaching performance. The participants were 94 preschool teachers and 59 school administrators from the CDAPS member schools. The data were collected using the 21st Century Skills instrument developed by ISSA (2009), the Technology Skills of Teachers Survey (2013), and the Teacher Performance Evaluation Criteria and Descriptors (200), modified by the researcher to suit the needs of the study and administered personally by her. The surveys were designed to measure the participants' 21st-century skills, technology skills, and teaching performance. The results indicate that the majority of the preschool teachers are college graduates, and most have been in the teaching profession for 0 to 10 years, while the majority of the school administrators are master's degree holders. The preschool teachers are outstanding in their teaching performance as rated by the school administrators; they are skillful in using technology and very skillful in executing 21st-century skills in teaching. It was further determined that there is no significant difference in preschool teachers' 21st-century skills with regard to educational attainment or number of years in teaching, and likewise for their technology skills. Furthermore, the study found a very weak relationship between the technology and 21st-century skills of preschool teachers, a weak relationship between technology skills and teaching performance, and a very weak relationship between 21st-century skills and teaching performance. The study recommends that preschool teachers be encouraged to enroll in master's degree programs and that school administrators support the implementation of newly adopted technologies and support faculty members at various levels of use and experience. It is also recommended that regular reviews of the professional development plan be undertaken to upgrade the 21st-century teaching and learning skills of preschool teachers.

Keywords: preschool teacher, teaching performance, technology, 21st century skills

Procedia PDF Downloads 381
828 Frequent Pattern Mining for Digenic Human Traits

Authors: Atsuko Okazaki, Jurg Ott

Abstract:

Some genetic diseases ('digenic traits') are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare 100,000s of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants; that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with each of the two genotypes originating from different variants that may be located at very different genomic positions. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We are seeking association rules, 'X → Y', with high confidence, P(Y = 2|X), significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are very suitable for detecting disease-associated genotype patterns. We used fpgrowth as the basic FPM algorithm and built a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology.
There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. The single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern, with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls. Thus, the pattern seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimensionality Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. MDR and our algorithm share some properties but differ considerably in other respects. The main difference seems to be that our algorithm focuses on patterns of genotypes, while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
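The core idea above -- enumerate two-variant genotype patterns, score each by its confidence P(case | pattern), and assess the best one by label permutation -- can be sketched as follows. The toy data, the support threshold, and the brute-force enumeration are illustrative assumptions; the study itself used fpgrowth rather than exhaustive search.

```python
import itertools
import random

# Toy cohort: genotypes[i][v] in {0, 1, 2}; variants 0 and 1 carry a
# planted digenic (het + het) pattern in a fraction of cases only.
random.seed(0)
n_cases, n_controls = 100, 100
genotypes, labels = [], []
for i in range(n_cases + n_controls):
    is_case = i < n_cases
    g = [random.choice([0, 1, 2]) for _ in range(4)]
    if is_case and random.random() < 0.3:
        g[0], g[1] = 1, 1            # the planted digenic pattern
    genotypes.append(g)
    labels.append(2 if is_case else 1)   # Y = 2 cases, Y = 1 controls

def best_pattern(genotypes, labels):
    """Highest-confidence two-variant genotype pattern with support >= 10."""
    best = (-1.0, None)
    for v1, v2 in itertools.combinations(range(4), 2):
        for g1, g2 in itertools.product(range(3), repeat=2):
            hits = [l for g, l in zip(genotypes, labels)
                    if g[v1] == g1 and g[v2] == g2]
            if len(hits) >= 10:
                conf = hits.count(2) / len(hits)   # P(case | pattern)
                best = max(best, (conf, ((v1, g1), (v2, g2))))
    return best

conf, pattern = best_pattern(genotypes, labels)

# Permutation test: how often does shuffling the case/control labels
# produce a pattern with confidence at least as high?
exceed = 0
for _ in range(200):
    random.shuffle(labels)
    if best_pattern(genotypes, labels)[0] >= conf:
        exceed += 1
print("best pattern:", pattern, "confidence:", conf, "perm p ~", exceed / 200)
```

The baseline P(Y = 2) here is 0.5, so any pattern with confidence well above 0.5, and a small permutation p-value, is the kind of association rule the abstract describes.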

Keywords: digenic traits, DNA variants, epistasis, statistical genetics

Procedia PDF Downloads 104
827 Modeling Water Inequality and Water Security: The Role of Water Governance

Authors: Pius Babuna, Xiaohua Yang, Roberto Xavier Supe Tulcan, Bian Dehui, Mohammed Takase, Bismarck Yelfogle Guba, Chuanliang Han, Doris Abra Awudi, Meishui Lia

Abstract:

Water inequality, water security, and water governance are fundamental parameters that affect the sustainable use of water resources. Through policy formulation and decision-making, water governance determines both water security and water inequality. Largely, where water inequality exists, water security is undermined through unsustainable water use practices that lead to pollution of water resources, conflicts, hoarding of water, and poor sanitation. Incidentally, the interconnectedness of water governance, water inequality, and water security has not been investigated previously. This study modified the Gini coefficient and used a Logistic Growth of Water Resources (LGWR) model to assess water inequality and water security mathematically, and it discusses the connecting role of water governance. We tested the validity of both models by calculating the actual water inequality and water security of Ghana, and we discuss the implications of water inequality for water security and the overarching role of water governance. The results show that regional water inequality is widespread in some parts of the country: the Volta region showed the highest water inequality (Gini index of 0.58), while the Central region showed the lowest (Gini index of 0.15). Water security is moderately sustainable, and the use of water resources is currently stress-free; it is estimated to remain so until 2132 ± 18, when Ghana will consume half of its current total water resources of 53.2 billion cubic meters. Effectively, water inequality is a threat to water security: it results in poverty and under-development, heightens tensions in water use, and causes instability. With proper water governance, water inequality can be eliminated by formulating and implementing approaches that engender equal allocation and sustainable use of water resources.
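For reference, the standard (unmodified) Gini coefficient that serves as the starting point above can be computed from its mean-absolute-difference form; the regional figures below are invented for illustration, and the paper's modification is not reproduced here.

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative allocation vector.

    Standard sorted-rank form, equivalent to half the relative mean
    absolute difference: 0 = perfect equality, 1 = maximal inequality.
    """
    x = np.sort(np.asarray(x, float))
    n = x.size
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

# Hypothetical per-capita water availability across four regions.
equal = [10, 10, 10, 10]
skewed = [2, 3, 5, 30]
print(round(gini(equal), 2), round(gini(skewed), 2))
```

A perfectly even allocation scores 0, while concentrating most of the water in one region pushes the index toward the high values reported for the most unequal regions.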

Keywords: water inequality, water security, water governance, Gini coefficient, Moran index, water resources management

Procedia PDF Downloads 107
826 Flow and Heat Transfer Analysis of Copper-Water Nanofluid with Temperature Dependent Viscosity past a Riga Plate

Authors: Fahad Abbasi

Abstract:

The flow of electrically conducting nanofluids is of pivotal importance in countless industrial and medical appliances. Fluctuations in the thermophysical properties of such fluids due to variations in temperature have not received due attention in the available literature. The present investigation aims to fill this void by analyzing the flow of a copper-water nanofluid with temperature-dependent viscosity past a Riga plate. Strong wall suction and viscous dissipation have also been taken into account. Numerical solutions for the resulting nonlinear system have been obtained. Results are presented in graphical and tabular formats in order to facilitate the physical analysis. Estimated expressions for the skin friction coefficient and Nusselt number are obtained by performing linear regression on the numerical data for the embedded parameters. Results indicate that the temperature-dependent viscosity alters the velocity as well as the temperature of the nanofluid and is of considerable importance in processes where high accuracy is desired. The addition of copper nanoparticles makes the momentum boundary layer thinner, whereas the viscosity parameter does not affect the boundary layer thickness. Moreover, the regression expressions indicate that the magnitude of the rate of change in the effective skin friction coefficient and Nusselt number with respect to nanoparticle volume fraction is prominent when compared with the rate of change with the variable viscosity parameter and the modified Hartmann number.
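The regression step described above -- fitting a linear surrogate for the Nusselt number in the embedded parameters and comparing the magnitudes of the coefficients -- can be sketched as follows. The response surface and coefficient values are placeholders, not the study's numerical solutions.

```python
import numpy as np

# Fit a linear surrogate Nu ≈ c0 + c1*phi + c2*theta_r + c3*M to
# tabulated "numerical" results. Synthetic data for illustration only.
rng = np.random.default_rng(2)
phi = rng.uniform(0.0, 0.2, 40)      # nanoparticle volume fraction
theta = rng.uniform(0.1, 2.0, 40)    # variable-viscosity parameter
M = rng.uniform(0.0, 5.0, 40)        # modified Hartmann number
Nu = 1.0 + 8.0 * phi + 0.05 * theta + 0.4 * M  # assumed response surface

# Ordinary least squares via the normal-equations solver.
A = np.column_stack([np.ones(40), phi, theta, M])
coeffs = np.linalg.lstsq(A, Nu, rcond=None)[0]
print("fitted coefficients:", np.round(coeffs, 3))
```

In this invented example the coefficient on the volume fraction dominates the one on the viscosity parameter, mirroring the qualitative comparison of rates of change drawn in the abstract.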

Keywords: heat transfer, peristaltic flows, radially varying magnetic field, curved channel

Procedia PDF Downloads 147
825 Challenges of Management of Acute Pancreatitis in Low Resource Setting

Authors: Md. Shakhawat Hossain, Jimma Hossain, Md. Naushad Ali

Abstract:

Acute pancreatitis is a dangerous medical emergency in the practice of gastroenterology. Management of acute pancreatitis needs a multidisciplinary approach, with support starting from the emergency department through to the ICU, so there is a chance of mismanagement at every step, especially in low-resource settings. Other factors, such as the patient's financial condition, education, social customs, transport facilities, and the referral system from the periphery, may also challenge the current management guidelines. The present study is intended to determine the clinico-pathological profile, severity assessment, and challenges of management of acute pancreatitis in a government tertiary care hospital, to picture the real scenario of management in a low-resource setting. A total of 100 patients with acute pancreatitis were studied in this prospective study, held in the Department of Gastroenterology, Rangpur Medical College Hospital, Bangladesh, from July 2017 to July 2018. Regarding severity, 85% of the patients had mild disease, whereas 13 had moderately severe and 2 had severe acute pancreatitis according to the revised Atlanta criteria. The most common etiologies of acute pancreatitis in our study were gallstones (15%) and biliary sludge (15%), whereas 54% of cases were idiopathic. The most common challenges we faced were delay in hospital admission (59%) and delay in hospital diagnosis (20%). Others were non-adherence by patients and their families, lack of investigation facilities, and physicians' poor knowledge of current guidelines. We were able to give early aggressive fluid therapy to only 18% of patients as per the current guideline. Conclusion: Management of acute pancreatitis as per guideline is challenging when optimal facilities are lacking. Therefore, modified guidelines for the assessment and management of acute pancreatitis should be prepared for low-resource settings.

Keywords: acute pancreatitis, challenges of management, severity, prognosis

Procedia PDF Downloads 106
824 Strain Sensing Seams for Monitoring Body Movement

Authors: Sheilla Atieno Odhiambo, Simona Vasile, Alexandra De Raeve, Ann Schwarz

Abstract:

Strain-sensing seams have been developed by integrating conductive sewing threads into different seam designs on a fabric typical of sports clothing, using sewing technology. The aim is a simple, integrated textile strain sensor that can be applied to sports clothing to monitor the movements of the user's upper body during sports. Different types of commercially available sewing threads were used as the bobbin thread in the production of different seam-sensor architectures. These conductive sewing threads were integrated into seams in particular designs using specific seam types. Some of the threads are delicate and need to be laid into the seam with as little friction and tension as possible; thus, they could only be sewn in as the bobbin thread and not the needle thread. Stitch types 304, 406, 506, 601, 602, and 605 were produced. The seams were made on a fabric of 80% polyamide 6.6 and 20% elastane. The seams were cycled (stretch-release-stretch) for five cycles and up to 44 cycles following EN ISO 14704-1:2005 (modified), using a tensile instrument, and the changes in the resistance of the seams over time were recorded using an Agilent U1273A meter. Both experiments were conducted simultaneously on the same seam sample. Sensing functionality, including sensor gauge and reliability, was evaluated on the promising sensor seams. The results show that the sensor seams made from HC Madeira 40 conductive yarn performed better in seam stitches 304 and 602 than the other combinations of stitch type and conductive sewing thread. The sensing seams 304, 406, and 602 will be interconnected to our developed processing and communication unit and further integrated into a sports clothing prototype that can track body posture. This research is done within the framework of the project SmartSeam.

Keywords: conductive sewing thread, sensing seams, smart seam, sewing technology

Procedia PDF Downloads 167
823 Optimization and Energy Management of Hybrid Standalone Energy System

Authors: T. M. Tawfik, M. A. Badr, E. Y. El-Kady, O. E. Abdellatif

Abstract:

Electric power shortage is a serious problem in remote rural communities in Egypt. Over the past few years, the electrification of remote communities, including efficient on-site energy resource utilization, has made great progress. Remote communities are usually fed from diesel generator (DG) networks because they need reliable energy and cheap fresh water. The main objective of this paper is to design an optimal, economic power supply from a hybrid standalone energy system (HSES) as an alternative energy source. It covers the energy requirements of a reverse osmosis desalination unit (DU) located on the National Research Centre farm in Noubarya, Egypt. The proposed system consists of PV panels, wind turbines (WT), batteries, and a DG as backup, supplying a DU load of 105.6 kWh/day with a 6.6 kW peak load operating 16 hours a day. The optimization objective is to select the suitable size of each system component and a control strategy that together provide a reliable, efficient, and cost-effective system, using net present cost (NPC) as the criterion. Harmonizing different energy sources, energy storage, and load requirements is a difficult and challenging task. Thus, the performance of the various available configurations is investigated economically and technically using iHOGA software, which is based on a genetic algorithm (GA). The achieved optimum configuration is further modified by optimizing the energy extracted from renewable sources. Effective minimization of the energy charging the battery ensures that most of the generated energy directly supplies the demand, increasing the utilization of the generated energy.
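The NPC criterion used to rank candidate configurations can be sketched as capital cost plus recurring costs discounted through the capital recovery factor; the cost figures and the 6%/25-year discounting assumptions below are hypothetical, not taken from the study:

```python
def crf(i, n):
    """Capital recovery factor for discount rate i over n years."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def npc(capital, annual_om, annual_fuel, i=0.06, lifetime=25):
    """Net present cost: capital plus discounted recurring costs."""
    return capital + (annual_om + annual_fuel) / crf(i, lifetime)

# Hypothetical candidate configurations: (capital $, O&M $/yr, fuel $/yr)
configs = {
    "PV+WT+battery+DG": npc(95_000, 2_500, 1_200),
    "DG only": npc(20_000, 1_500, 14_000),
}
best = min(configs, key=configs.get)
print(best)
```

With these illustrative numbers the hybrid system's higher capital is outweighed by the diesel-only option's fuel bill over the project lifetime, which is the kind of trade-off the GA search in iHOGA evaluates for every candidate sizing.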

Keywords: energy management, hybrid system, renewable energy, remote area, optimization

Procedia PDF Downloads 184
822 Air Breakdown Voltage Prediction in Post-arcing Conditions for Compact Circuit Breakers

Authors: Jing Nan

Abstract:

The air breakdown voltage in compact circuit breakers is a critical factor in the design and reliability of electrical distribution systems. This voltage determines the threshold at which the air insulation between conductors will fail, or 'break down,' leading to an arc. The phenomenon is highly sensitive to the conditions within the breaker, such as the temperature and the distance between electrodes. Air breakdown voltage models have typically been reliable for predicting failure under standard operational temperatures. In post-arcing conditions, however, where temperatures can soar above 2000 K, these models face challenges due to the complex physics of ionization and electron behaviour at such high-energy states. Building upon the foundational understanding that the breakdown mechanism is initiated by free electrons and propelled by electric fields, which lead to ionization and, potentially, to avalanche or streamer formation, we acknowledge the complexity introduced by high-temperature environments. Given the limitations of existing experimental data, a notable research gap exists in the accurate prediction of breakdown voltage at the elevated temperatures typically observed post-arcing, where temperatures exceed 2000 K. To bridge this knowledge gap, we present a method that integrates gap distance and high-temperature effects into air breakdown voltage assessment. The proposed model is grounded in the physics of ionization, accounting for the dynamic behaviour of free electrons which, under intense electric fields at elevated temperatures, lead to thermal ionization and can reach the threshold for streamer formation given by Meek's criterion. Employing the Saha equation, our model calculates equilibrium electron densities, adapting to the atmospheric pressure and the high-temperature regions indicative of post-arc conditions.
Our model is rigorously validated against established experimental data, demonstrating substantial improvements in predicting air breakdown voltage in the high-temperature regime. This work significantly improves the predictive power for air breakdown voltage under conditions that closely mimic operational stressors in compact circuit breakers. Looking ahead, the proposed method is poised for further exploration in alternative insulating media, such as SF6, enhancing the model's utility for a broader range of insulation technologies and contributing to the future of high-temperature electrical insulation research.
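The Saha-equation step can be sketched as follows; the neutral density, the effective ionization energy (atomic nitrogen, 14.53 eV), and the unit statistical-weight ratio are illustrative assumptions, not values from the paper:

```python
import math

# Physical constants (SI)
K_B = 1.380649e-23   # Boltzmann constant, J/K
M_E = 9.1093837e-31  # electron mass, kg
H = 6.62607015e-34   # Planck constant, J*s
EV = 1.602176634e-19 # joules per electronvolt

def saha_rhs(T, E_ion_eV, g_ratio=1.0):
    """Right-hand side of the Saha equation: n_e * n_i / n_0 (m^-3)."""
    therm = (2.0 * math.pi * M_E * K_B * T / H ** 2) ** 1.5
    return 2.0 * g_ratio * therm * math.exp(-E_ion_eV * EV / (K_B * T))

def electron_density(T, n0, E_ion_eV=14.53):
    """Equilibrium electron density assuming n_e == n_i and neutral density n0."""
    return math.sqrt(saha_rhs(T, E_ion_eV) * n0)

# Electron density rises steeply across the post-arc temperature range
for T in (2000.0, 4000.0, 6000.0):
    print(T, electron_density(T, n0=2.5e25))
```

The steep growth of equilibrium electron density with temperature is what links the post-arc thermal state to a lowered breakdown threshold in the model described above.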

Keywords: air breakdown voltage, high-temperature insulation, compact circuit breakers, electrical discharge, Saha equation

Procedia PDF Downloads 57
821 Governance in the Age of Artificial Intelligence and E-Government

Authors: Mernoosh Abouzari, Shahrokh Sahraei

Abstract:

Electronic government is a way for governments to use new technology to give people convenient access to government information and services, to improve the quality of those services, and to offer broad opportunities to participate in democratic processes and institutions. It makes it possible to deliver government services through information technology around the clock, which increases people's satisfaction and their participation in political and economic activities. The expansion of e-government services, and their movement toward intelligent systems, can re-establish the relationship between government and citizens and among the elements and components of government itself. Electronic government is the result of the use of information and communication technology (ICT); implementing it at the government level brings tremendous changes in the efficiency and effectiveness of government systems and in the way services are provided, which in turn raises public satisfaction broadly. The main tier of e-government services has now become tangible with the presence of artificial intelligence systems; recent advances in artificial intelligence represent a revolution in the use of machines to support predictive decision-making and data classification. With deep learning tools, artificial intelligence can significantly improve the delivery of services to citizens and uplift the work of public service professionals, while also inspiring a new generation of technocrats to enter government.
This smart revolution may set aside some functions of government and change its components; concepts such as governance, policymaking, and democracy will change in the face of artificial intelligence technology, and the top-down position in governance may face serious changes. If governments delay in adopting artificial intelligence, the balance of power will shift: private companies, as pioneers in this field, will monopolize the technology, the world order will come to depend on rich multinational companies, and algorithmic systems will, in effect, become the ruling systems of the world. At present, the revolution in information technology and biotechnology is being driven by engineers, large companies, and scientists who are rarely aware of the political complexities of their decisions and who certainly do not represent anyone. Therefore, if liberalism, nationalism, or any other ideology is to organize the world of 2050, it must not only make sense of artificial intelligence and complex data algorithms but also weave them into a new and meaningful narrative. The changes that artificial intelligence brings to the political and economic order will thus lead to a major shift in how all countries deal with digital globalization. In this paper, while examining the role and performance of e-government, we discuss the efficiency and application of artificial intelligence in e-government and consider the resulting developments in the new world and in the concepts of governance.

Keywords: electronic government, artificial intelligence, information and communication technology, system

Procedia PDF Downloads 71
820 Modeling Flow and Deposition Characteristics of Solid CO2 during Choked Flow of CO2 Pipeline in CCS

Authors: Teng Lin, Li Yuxing, Han Hui, Zhao Pengfei, Zhang Datong

Abstract:

With the development of carbon capture and storage (CCS), flow assurance in CO2 transportation becomes more important, particularly for supercritical CO2 pipelines. A relieving system using a choke valve is applied to control the pressure in a CO2 pipeline. However, the temperature of the fluid can drop rapidly because of Joule-Thomson cooling (JTC), which may cause solid CO2 to form and block the pipe. In this paper, a Computational Fluid Dynamics (CFD) model, using a modified Lagrangian method, the Reynolds Stress Transport model (RSM) for turbulence, and a stochastic tracking model (STM) for particle trajectories, was developed to predict the deposition characteristics of solid carbon dioxide. The model predictions were in good agreement with experimental data published in the literature. It was observed that the particle distribution affected the deposition behavior. In the region of the sudden expansion, the smaller particles, accumulated tightly on the wall, were dominant in pipe blockage. In contrast, the solid CO2 particles deposited near the outlet were usually bigger, and the stacked structure was looser. According to the calculation results, particle movement can be classified into four main types: turbulent motion close to the sudden-expansion structure, balanced motion in the sudden-expansion-to-middle region, inertial motion near the outlet, and escape. Because of these four types of motion, particle deposits accumulate primarily in the sudden-expansion, reattachment, and outlet regions. The Stokes number also affects the deposition ratio, and it is recommended that the Stokes number avoid the range of 3-8.
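The Stokes number governing these deposition regimes is the ratio of the particle relaxation time to a flow time scale; a minimal sketch follows, with all property values hypothetical rather than taken from the simulations:

```python
def stokes_number(rho_p, d_p, u, mu, L):
    """Particle Stokes number: relaxation time over flow time scale L/u."""
    tau_p = rho_p * d_p ** 2 / (18.0 * mu)  # Stokes-drag relaxation time, s
    return tau_p * u / L

# Hypothetical dry-ice particles in CO2 gas downstream of a choke valve
rho_p = 1560.0  # solid CO2 density, kg/m^3
mu = 1.4e-5     # gas dynamic viscosity, Pa*s
u = 50.0        # gas velocity, m/s
L = 0.05        # sudden-expansion characteristic length, m
for d_um in (1.0, 10.0, 50.0):
    st = stokes_number(rho_p, d_um * 1e-6, u, mu, L)
    # Flag particle sizes whose St lands in the 3-8 range the study recommends avoiding
    print(d_um, st, 3.0 <= st <= 8.0)
```

Because St scales with the particle diameter squared, small particles (low St) follow the gas into recirculation zones near the expansion, while large particles (high St) travel ballistically toward the outlet, consistent with the size segregation described above.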

Keywords: carbon capture and storage, carbon dioxide pipeline, gas-particle flow, deposition

Procedia PDF Downloads 346
819 Modelling and Simulating CO2 Electro-Reduction to Formic Acid Using Microfluidic Electrolytic Cells: The Influence of Bi-Sn Catalyst and 1-Ethyl-3-Methyl Imidazolium Tetra-Fluoroborate Electrolyte on Cell Performance

Authors: Akan C. Offong, E. J. Anthony, Vasilije Manovic

Abstract:

A modified steady-state numerical model is developed for the electrochemical reduction of CO2 to formic acid. The numerical model achieves a current density (CD) of ~60 mA/cm2, a faradaic efficiency (FE) of ~98%, and a conversion of ~80% for CO2 electro-reduction to formic acid in a microfluidic cell. The model integrates charge and species transport, mass conservation, and momentum with electrochemistry. Specifically, the influences of a Bi-Sn-based nanoparticle catalyst (on the cathode surface) at different mole fractions and of the 1-ethyl-3-methyl imidazolium tetra-fluoroborate ([EMIM][BF4]) electrolyte on CD, FE, and CO2 conversion to formic acid are studied. The reaction is carried out at a constant electrolyte concentration (85% v/v [EMIM][BF4]). Based on the mass-transfer analysis (concentration contours), a 0.5:0.5 Bi-Sn catalyst mole ratio displays the highest CO2 mole consumption in the cathode gas channel. After validation against experimental data (polarisation curves) from the literature, extensive simulations reveal the performance measures: CD, FE, and CO2 conversion. Increasing the negative cathode potential increases the current densities of both formic acid and H2 formation. However, H2 formation is minimal as a result of insufficient hydrogen ions in the ionic liquid electrolyte. Moreover, the limited hydrogen ions have a negative effect on the formic acid CD. As the CO2 flow rate increases, CD, FE, and CO2 conversion increase.

Keywords: carbon dioxide, electro-chemical reduction, ionic liquids, microfluidics, modelling

Procedia PDF Downloads 124
818 Association between Severe Acidemia before Endotracheal Intubation and the Lower First Attempt Intubation Success Rate

Authors: Keiko Naito, Y. Nakashima, S. Yamauchi, Y. Kunitani, Y. Ishigami, K. Numata, M. Mizobe, Y. Homma, J. Takahashi, T. Inoue, T. Shiga, H. Funakoshi

Abstract:

Background: Severe acidemia, defined as pH < 7.2, is common during endotracheal intubation of critically ill patients in the emergency department (ED). Severe acidemia is widely recognized as a predisposing factor for intubation failure. However, it is unclear whether the acidemic condition itself actually makes endotracheal intubation more difficult. We aimed to evaluate whether the presence of severe acidemia before intubation is associated with a lower first-attempt intubation success rate in the ED. Methods: This is a retrospective observational cohort study in the ED of an urban hospital in Japan. The collected data included patient demographics, such as age, sex, and body mass index; the presence of one or more factors of the modified LEMON criteria for predicting difficult intubation; reasons for intubation; blood gas levels; airway equipment; whether intubation was performed by an emergency physician; and the use of the rapid sequence intubation technique. Patients with any of the following were excluded from the analysis: (1) no blood gas drawn before intubation, (2) cardiopulmonary arrest, and (3) age under 18 years. The primary outcome was the first-attempt intubation success rate in the severe acidemia (SA) group versus the non-severe acidemia (NA) group. Logistic regression analysis was used to compare first-attempt success rates between the two groups. Results: Over 5 years, a total of 486 intubations were performed: 105 in the SA group and 381 in the NA group. Univariate analysis showed that the first-attempt intubation success rate was lower in the SA group than in the NA group (71.4% vs. 83.5%, p < 0.01). Multivariate logistic regression identified severe acidemia as significantly associated with first-attempt intubation failure (OR 1.9, 95% CI 1.03-3.68, p = 0.04). Conclusions: The presence of severe acidemia before endotracheal intubation lowers the first-attempt intubation success rate in the ED.
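The univariate effect can be checked by reconstructing the 2x2 table from the published group sizes and success rates; this is a sketch of the unadjusted odds ratio only, not the multivariate model the authors fitted:

```python
def odds_ratio(fail_exposed, success_exposed, fail_control, success_control):
    """Unadjusted odds ratio for failure, exposed group vs. control group."""
    return (fail_exposed / success_exposed) / (fail_control / success_control)

# Counts reconstructed from the reported rates (105 SA and 381 NA intubations)
sa_success = round(105 * 0.714)  # 75
sa_fail = 105 - sa_success       # 30
na_success = round(381 * 0.835)  # 318
na_fail = 381 - na_success       # 63
print(round(odds_ratio(sa_fail, sa_success, na_fail, na_success), 2))  # ~2.0
```

The unadjusted value (~2.0) sits close to the adjusted OR of 1.9 reported from the multivariate model, suggesting limited confounding by the covariates collected.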

Keywords: acidemia, airway management, endotracheal intubation, first-attempt intubation success rate

Procedia PDF Downloads 228
817 Electrochemical Modification of Boron Doped Carbon Nanowall Electrodes for Biosensing Purposes

Authors: M. Kowalski, M. Brodowski, K. Dziabowska, E. Czaczyk, W. Bialobrzeska, N. Malinowska, S. Zoledowska, R. Bogdanowicz, D. Nidzworski

Abstract:

Boron-doped carbon nanowall (BCNW) electrodes have recently attracted much interest among scientists. BCNWs are good candidates for biosensor purposes, as they possess attractive electrochemical characteristics such as a wide potential window and a low separation between redox peaks. Moreover, in terms of technical parameters, they are mechanically resistant and very tough. The microwave plasma-enhanced chemical vapor deposition (MPECVD) production process allows boron to be incorporated into the structure of the diamond being formed. The effect is the formation of long, flat structures with sharp ends. The potential of these electrodes was examined in the biosensing field. A procedure for the simple modification of carbon electrodes with antibodies was adapted to BCNW for specific antigen recognition. Surface protein D, deriving from the pathogenic bacterium H. influenzae, was chosen as the target analyte. The electrode was first modified with aminobenzoic acid diazonium salt by electrografting (electrochemical reduction); next, anti-protein D antibodies were linked via 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide hydrochloride/N-hydroxysuccinimide (EDC/NHS) chemistry, and free sites were blocked with BSA. Cyclic voltammetry measurements confirmed the proper electrode modification. Electrochemical impedance spectroscopy records indicated protein detection. The sensor was proven to detect protein D at femtogram levels. This work was supported by the National Centre for Research and Development (NCBR) TECHMATSTRATEG 1/347324/12/NCBR/2017.

Keywords: anti-protein D antibodies, boron-doped carbon nanowall, impedance spectroscopy, Haemophilus influenzae

Procedia PDF Downloads 150
816 Social Influences on HIV Services Engagement among Sexual Minorities Experiencing Intersectional Stigma and Discrimination during COVID-19 Pandemic in Uganda

Authors: Simon Mwima, Evans Jennifer Mann, Agnes Nzomene, Edson Chipalo, Eusebius Small, Moses Okumu, Bosco Mukuba

Abstract:

Introduction: In Uganda, sexual minorities experience exacerbated intersectional stigma and discrimination that exposes them to elevated HIV infection risk and impedes access to HIV testing and PrEP, with low treatment adherence. We address the lack of information about sexual minorities living with HIV in Uganda by using a modified social-ecological theory to explore the social influences impacting engagement with HIV services. Findings from focus group discussions (FGDs) involving 31 sexual minorities, ages 18-25, recruited through urban HIV clinics in Kampala, reveal protective and promotive social influences at the individual and interpersonal levels (sexual partners and peers). Inhibitive social influences, by contrast, were found within family, community, societal, and healthcare settings. During the COVID-19 pandemic, these adolescents strategically used promotive social influences to increase their engagement with HIV care services. Interviews were recorded in English, transcribed verbatim, and analyzed using Dedoose. Conclusions: The findings revealed that young people identifying as sexual minorities strategically used promotive social influences and supported each other to improve engagement with HIV care in the context of restrictive laws in Uganda during the COVID-19 pandemic. Future HIV prevention, treatment, and care responses could draw on how peers support each other in navigating heavily criminalized and stigmatized settings to access healthcare services.

Keywords: HIV/AIDS services, intersectional stigma, discrimination, adolescents, sexual minorities, COVID-19 pandemic, Uganda

Procedia PDF Downloads 100
815 Modified Clusterwise Regression for Pavement Management

Authors: Mukesh Khadka, Alexander Paz, Hanns de la Fuente-Mella

Abstract:

Typically, pavement performance models are developed in two steps: (i) pavement segments with similar characteristics are grouped together to form a cluster, and (ii) the corresponding performance models are developed using statistical techniques. A challenge is to select the characteristics that define clusters and the segments associated with them. If inappropriate characteristics are used, clusters may include homogeneous segments with different performance behavior or heterogeneous segments with similar performance behavior. The prediction accuracy of performance models can be improved by grouping the pavement segments into more uniform clusters using both characteristics and a performance measure. This grouping is not always possible due to limited information. It is impractical to include all the potentially significant factors because some of them are unobserved or difficult to measure. The historical performance of pavement segments can be used as a proxy to incorporate the effect of the missing significant factors in the clustering process. The current state of the art proposes Clusterwise Linear Regression (CLR) to determine the pavement clusters and the associated performance models simultaneously. CLR incorporates the effect of significant factors as well as a performance measure. In this study, a mathematical program was formulated for CLR models including multiple explanatory variables. Pavement data collected recently over the entire state of Nevada were used. The International Roughness Index (IRI) was used as the pavement performance measure because it serves as a unified standard that is widely accepted for evaluating pavement performance, especially in terms of riding quality. Results illustrate the advantage of using CLR. Previous studies have used CLR with experimental data; this study uses actual field data collected across a variety of environmental, traffic, design, and construction and maintenance conditions.
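The CLR idea — simultaneously clustering segments and fitting per-cluster regressions — can be sketched with a simple alternating heuristic; the study formulates it as a mathematical program solved jointly, so this greedy version and its synthetic IRI-vs-age data are illustrative only:

```python
import random

def fit(points):
    """OLS fit of y ~ a + b*x for a list of (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    b = sum((x - mx) * (y - my) for x, y in points) / sxx if sxx else 0.0
    return my - b * mx, b

def clr(points, k=2, iters=25, seed=1):
    """Clusterwise linear regression by alternating assignment and refitting."""
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in points]
    models = [(0.0, 0.0)] * k
    for _ in range(iters):
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                models[c] = fit(members)
        # Reassign each segment to the model with the smallest squared residual
        labels = [min(range(k), key=lambda c: (y - models[c][0] - models[c][1] * x) ** 2)
                  for x, y in points]
    return models, labels

# Two synthetic "pavement families": IRI growing fast vs. slowly with age
pts = [(float(x), 3.0 + 0.5 * x) for x in range(10)] + \
      [(float(x), 1.0 + 0.1 * x) for x in range(10)]
models, labels = clr(pts)
print(sorted(round(b, 2) for _, b in models))
```

The payoff over clustering on characteristics alone is that segments end up grouped by how they actually deteriorate, not merely by how they are built.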

Keywords: clusterwise regression, pavement management system, performance model, optimization

Procedia PDF Downloads 230
814 Polymer Nanocoatings With Enhanced Self-Cleaning and Icephobic Properties

Authors: Bartlomiej Przybyszewski, Rafal Kozera, Katarzyna Zolynska, Anna Boczkowska, Daria Pakula

Abstract:

The build-up and accumulation of dirt, ice, and snow on structural elements and vehicles is an unfavorable phenomenon, leading to economic losses and often also posing a threat to people. This problem occurs wherever polymer coatings have become standard, among others in photovoltaic farms, aviation, wind energy, and civil engineering. Accumulated dirt on photovoltaic modules can reduce their efficiency by several percent, and snow stops power production altogether. Ice accumulated on the blades of wind turbines or the wings of airplanes and drones disrupts the airflow by changing their shape, leading to increased drag and reduced efficiency. This results in costly maintenance and repairs. The goal of this work is to reduce or completely eliminate the accumulation of dirt, snow, and ice on polymer coatings by achieving self-cleaning and icephobic properties. This is done through a multi-step surface modification of the polymer nanocoatings. For this purpose, two methods of surface structuring, preceded by volumetric modification of the chemical composition with proprietary organosilicon compounds and/or mineral additives, were used. Light profilometry was utilized to characterize the surface topography of the modified coatings. Measurements of the wettability parameters (static contact angle and contact angle hysteresis) on the investigated surfaces allowed their wetting behavior to be identified and the relation between hydrophobic and anti-icing properties to be determined. Ice adhesion strength was measured to assess the coatings' anti-icing behavior.

Keywords: anti-icing properties, self-cleaning, polymer coatings, icephobic coatings

Procedia PDF Downloads 88
813 Finding Optimal Operation Condition in a Biological Nutrient Removal Process with Balancing Effluent Quality, Economic Cost and GHG Emissions

Authors: Seungchul Lee, Minjeong Kim, Iman Janghorban Esfahani, Jeong Tai Kim, ChangKyoo Yoo

Abstract:

It is hard to maintain the effluent quality of wastewater treatment plants (WWTPs) with fixed operational control because the influent flow rate and pollutant load change continuously. The aim of this study is the development of a multi-loop multi-objective control (ML-MOC) strategy at plant-wide scope targeting four objectives: 1) maximization of nutrient removal efficiency, 2) minimization of operational cost, 3) maximization of CH4 production in anaerobic digestion (AD), for reuse of CH4 as a heat and energy source, and 4) minimization of N2O gas emission to cope with global warming. First, the benchmark simulation model is modified to describe N2O dynamics in the biological process, yielding the benchmark simulation model for greenhouse gases (BSM2G). Then, three single-loop proportional-integral (PI) controllers, for DO, NO3, and CH4, are implemented. The optimal set-points of the controllers are found using a multi-objective genetic algorithm (MOGA). Finally, the ML-MOC is implemented and evaluated in BSM2G. Compared with the reference case, the ML-MOC with optimal set-points showed the best control performance, with improvements of 34%, 5%, and 79% in effluent quality, CH4 productivity, and N2O emission, respectively, and a 65% decrease in operational cost.

Keywords: benchmark simulation model for greenhouse gases, multi-loop multi-objective controller, multi-objective genetic algorithm, wastewater treatment plant

Procedia PDF Downloads 474
812 Flood Early Warning and Management System

Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare

Abstract:

The Indian subcontinent is severely affected by floods that cause intense, irreversible devastation to crops and livelihoods. With increased incidences of floods and their related catastrophes, an early warning system for flood prediction and an efficient flood management system for the river basins of India are a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an Early Warning System for Flood Prediction (EWS-FP) using advanced computational tools and methods, viz. High-Performance Computing (HPC), remote sensing, GIS technologies, and open-source tools, for the Mahanadi River Basin of India. The flood prediction is based on a robust 2D hydrodynamic model, which solves the shallow water equations using the finite volume method. Considering the complexity of hydrological modeling and the size of the basins in India, it is always a tug of war between better forecast lead time and the optimal resolution at which the simulations are to be run. High-performance computing provides a good computational means to overcome this issue for constructing national- or basin-level flash flood warning systems that offer high-resolution, local-level warning analysis with better lead time. High-performance computers with capacities on the order of teraflops and petaflops prove useful when running simulations over such big areas at optimum resolutions. In this study, a free and open-source, HPC-based 2D hydrodynamic model, with the capability to simulate rainfall run-off, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (the Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing CPU nodes from 45 to 135, which shows good scalability and performance enhancement.
The simulated flood inundation spread and stage were compared with SAR data and CWC observed gauge data, respectively. The system shows good accuracy and a lead time suitable for near-real-time flood forecasting. To disseminate warnings to end users, a network-enabled solution was developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. It effectively facilitates the management of post-disaster activities caused by floods, such as displaying spatial maps of the affected area and inundated roads, and maintains a steady flow of information at all levels, with different access rights depending on the criticality of the information. It is designed to help users manage information related to flooding during critical flood seasons and analyze the extent of the damage.
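The reported run-time reduction corresponds to a sub-linear but healthy parallel efficiency, which is easy to verify from the figures given (8 h on 45 nodes down to 3 h on 135 nodes):

```python
def scaling_efficiency(t1, n1, t2, n2):
    """Parallel efficiency of a node scale-up relative to ideal linear scaling."""
    speedup = t1 / t2  # observed speedup
    ideal = n2 / n1    # linear-scaling speedup for the same node increase
    return speedup / ideal

# Figures reported for the Mahanadi Delta run
eff = scaling_efficiency(8.0, 45, 3.0, 135)
print(round(eff, 2))  # prints 0.89: a 2.67x speedup against an ideal 3x
```

An efficiency near 0.9 at a 3x node increase indicates the communication overhead of the finite-volume solver is still modest at this scale.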

Keywords: flood, modeling, HPC, FOSS

Procedia PDF Downloads 69
811 Using Dynamic Glazing to Eliminate Mechanical Cooling in Multi-family Highrise Buildings

Authors: Ranojoy Dutta, Adam Barker

Abstract:

Multifamily residential buildings are increasingly being built with large glazed areas to provide tenants with greater daylight and outdoor views. However, traditional double-glazed window assemblies can lead to significant thermal discomfort from high radiant temperatures, as well as increased cooling energy use to address solar gains. Dynamic glazing provides an effective solution by actively controlling solar transmission to maintain indoor thermal comfort without compromising the visual connection to the outdoors. This study uses thermal simulations across three Canadian cities (Toronto, Vancouver, and Montreal) to verify whether dynamic glazing, along with operable windows and ceiling fans, can maintain the indoor operative temperature of a prototype southwest-facing high-rise apartment unit within the ASHRAE 55 adaptive comfort range for a majority of the year, without any mechanical cooling. Since this study proposes the use of natural ventilation for cooling and the typical building life cycle is 30-40 years, the typical weather files have been modified based on accepted global warming projections for increased air temperatures by 2050. Results for the prototype apartment confirm that thermal discomfort with dynamic glazing occurs for less than 0.7% of the year. In the baseline scenario with low-E glass, however, there are up to 7% annual hours of discomfort despite natural ventilation with operable windows and improved air movement with ceiling fans.
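The ASHRAE 55 adaptive comfort check referenced above follows the standard's regression on prevailing mean outdoor temperature; a minimal sketch, with the example temperatures chosen purely for illustration:

```python
def adaptive_comfort_band(t_out, acceptability=80):
    """ASHRAE 55 adaptive comfort band (deg C) for prevailing mean outdoor temp.

    Valid for t_out roughly between 10 and 33.5 deg C per the standard.
    """
    t_comf = 0.31 * t_out + 17.8          # neutral operative temperature
    half_width = 3.5 if acceptability == 80 else 2.5  # 80% vs. 90% limits
    return t_comf - half_width, t_comf + half_width

def is_comfortable(t_op, t_out, acceptability=80):
    lo, hi = adaptive_comfort_band(t_out, acceptability)
    return lo <= t_op <= hi

# A 26 C operative temperature on a day with a 22 C prevailing mean outdoor temp
print(is_comfortable(26.0, 22.0))  # -> True
```

Counting the simulated hours for which `is_comfortable` is false, against the total hours of the year, yields the discomfort percentages of the kind reported for the dynamic-glazing and low-E cases.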

Keywords: electrochromic glazing, multi-family housing, passive cooling, thermal comfort, natural ventilation

Procedia PDF Downloads 84
810 Enzymatic Repair Prior To DNA Barcoding, Aspirations, and Restraints

Authors: Maxime Merheb, Rachel Matar

Abstract:

The retrieval of ancient DNA sequences, which in turn permits whole-genome sequencing from fossils, has improved extraordinarily in recent years, thanks to sequencing technology and other methodological advances. Nevertheless, the quest for ancient DNA is still obstructed by the damage that accumulates in DNA after the death of a living organism. This damage falls into three main categories: (i) physical abnormalities such as strand breaks, which lead to the presence of short DNA fragments; (ii) modified bases (mainly cytosine deamination), which cause sequence errors through the incorporation of a false nucleotide during DNA amplification; and (iii) DNA modifications referred to as blocking lesions, which halt PCR extension and thereby also impair amplification and sequencing. The issues arising from breakage and coding errors have been significantly reduced in recent years: fast sequencing of short DNA fragments has been enabled by high-throughput sequencing platforms, and most coding errors have been shown to result from cytosine deamination, whose products can be easily removed from the DNA by enzymatic treatment. The repair methodology, still in development, can basically be described as reintroducing cytosine in place of uracil; this technique is thus restricted to amplifiable DNA molecules. Eliminating every type of damage, particularly the lesions that block PCR, still awaits complete repair methodologies; hence, detecting DNA immediately after extraction is highly needed. Before investing resources in extensive, costly, and uncertain repair techniques, it is vital to distinguish between two possible hypotheses: (i) the DNA is nonexistent and cannot be amplified in the first place, and is therefore completely unrepairable; or (ii) the DNA is refractory to PCR and is worth repairing and amplifying. Hence, it is extremely important to develop a non-enzymatic technique to detect the most degraded DNA.
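Cytosine deamination converts cytosine to uracil, which is read as thymine during amplification, so it typically surfaces as C→T mismatches (or G→A on the complementary strand) against a reference. A minimal illustrative sketch of flagging such positions (the function is a hypothetical helper, not part of the authors' workflow):

```python
def flag_deamination_candidates(reference, read):
    """Return positions where a C->T (or complementary-strand G->A)
    mismatch could reflect cytosine deamination rather than a true
    variant. Both sequences are aligned strings of equal length."""
    deamination_pairs = {("C", "T"), ("G", "A")}
    return [i for i, (ref, obs) in enumerate(zip(reference, read))
            if (ref, obs) in deamination_pairs]
```

In practice such candidates cluster near fragment ends in genuinely ancient DNA, which is one signal used to authenticate it.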

Keywords: ancient DNA, DNA barcoding, enzymatic repair, PCR

Procedia PDF Downloads 384