Search results for: building performance rating tool
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20323

2173 Effect of Substituting Groundnut Cake with Remnant of Food Composite on Survival and Growth of Clarias gariepinus and Oreochromis niloticus Fingerlings

Authors: M. Y. Abubakar, M. Yunisa, A. N. Muhammad

Abstract:

Constraining the production of Clarias gariepinus and Oreochromis niloticus in culture is the prohibitive cost of feed. We assessed the performance of fingerlings of both species on diets in which groundnut cake was substituted with a food composite. Four dietary treatments (0%, 25%, 50%, and 75%) for C. gariepinus and five (0%, 25%, 50%, 75%, and whole food composite) for O. niloticus were formulated, and each was fed to 15 C. gariepinus or 10 O. niloticus fingerlings stocked in 75-litre plastic bowls, replicated thrice in a completely randomized design. The experiment lasted 56 days. Percent survival was significantly (p < 0.05) higher (57.78 ± 9.69) in C. gariepinus fed diet III. The growth and nutrient utilization indices were lowest in the fish fed diet IV, which was significantly (p < 0.05) lower than in the other treatments. Fish fed dietary treatment III recorded the best growth and nutrient utilization indices, significantly higher (p < 0.05) than those fed dietary treatments I and II, which were not significantly different (p > 0.05) from each other but higher than those fed the 75% substitution. The best profit index was in the fish fed the diet with the 50% substitution level. For O. niloticus, survival (172.62 ± 39.03) was significantly higher (p < 0.05) in those fed the 25% substituted diet. For growth indices, the poorest performers were those fed the whole composite, while the other treatments were not significantly (p > 0.05) different from each other. In terms of nutrient utilization, fish fed diets substituted at 0%, 25%, 50% and 75% food composite had similar food conversion ratios and protein efficiency ratios. However, there was no significant difference in the profit index among the treatments. It can be concluded that food composite from Sokoto households can optimally replace groundnut cake up to the 50% level as a protein source in the diets of Clarias gariepinus and O. niloticus fingerlings without adverse effects on survival, growth, and nutrient utilization.
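The growth and nutrient utilization indices referred to above follow standard aquaculture definitions; as a minimal illustration (not the authors' analysis), the sketch below computes percent survival, specific growth rate, feed conversion ratio and protein efficiency ratio from a hypothetical replicate record.

```python
import math

def survival_rate(n_final, n_initial):
    """Percent survival over the trial."""
    return 100.0 * n_final / n_initial

def specific_growth_rate(w_initial, w_final, days):
    """SGR (%/day) from mean initial and final body weights."""
    return 100.0 * (math.log(w_final) - math.log(w_initial)) / days

def feed_conversion_ratio(feed_intake, weight_gain):
    """FCR: dry feed fed per unit of wet weight gained."""
    return feed_intake / weight_gain

def protein_efficiency_ratio(weight_gain, protein_intake):
    """PER: weight gain per unit of crude protein fed."""
    return weight_gain / protein_intake

# Hypothetical 56-day record for one replicate bowl
print(survival_rate(n_final=9, n_initial=15))                       # %
print(specific_growth_rate(w_initial=2.1, w_final=6.4, days=56))    # %/day
print(feed_conversion_ratio(feed_intake=7.5, weight_gain=4.3))
print(protein_efficiency_ratio(weight_gain=4.3, protein_intake=2.6))
```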

Keywords: food composite, nutrient utilization, C. gariepinus, O. niloticus, household, substitution levels

Procedia PDF Downloads 197
2172 Heat Transfer Analysis of a Multiphase Oxygen Reactor Heated by a Helical Tube in the Cu-Cl Cycle of Hydrogen Production

Authors: Mohammed W. Abdulrahman

Abstract:

In the thermochemical water splitting process of the Cu-Cl cycle, oxygen gas is produced by an endothermic thermolysis process at a temperature of 530 °C. The oxygen production reactor is a three-phase reactor involving cuprous chloride molten salt, copper oxychloride solid reactant and oxygen gas. To achieve optimal performance, the oxygen reactor requires accurate control of heat transfer to the molten salt and the decomposing solid particles within the thermolysis reactor. In this paper, a scale-up analysis of the oxygen reactor, which is heated by an internal helical tube, is performed from the perspective of heat transfer. A heat balance of the oxygen reactor is investigated to analyze the size of the reactor that provides the required heat input for different rates of hydrogen production. It is found that the helical tube wall and the service side constitute the largest thermal resistances of the oxygen reactor system. In the analysis of this paper, the Cu-Cl cycle is assumed to be heated by two types of nuclear reactor, the HTGR and the CANDU SCWR. It is concluded that using the CANDU SCWR requires a heat transfer rate 3-4 times higher than when using the HTGR. The effect of the reactor aspect ratio is also studied, and it is found that increasing the aspect ratio decreases the number of reactors, and that the rate of decrease in the number of reactors diminishes as the aspect ratio increases. Comparisons between the results of this study and previous results of material balances in the oxygen reactor show that the size of the oxygen reactor is dominated by the heat balance rather than the material balance.
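As a rough illustration of the heat balance described above, the sketch below treats the helical heating tube as thermal resistances in series and estimates how many reactors would be needed to supply a given thermal duty; every numerical value is a hypothetical placeholder, not a figure from the paper.

```python
import math

def tube_wall_resistance(r_in, r_out, k_wall):
    """Conduction resistance of the tube wall per metre of tube length (K*m/W)."""
    return math.log(r_out / r_in) / (2.0 * math.pi * k_wall)

def film_resistance(h, r):
    """Convective film resistance per metre of tube length at radius r (K*m/W)."""
    return 1.0 / (h * 2.0 * math.pi * r)

r_in, r_out = 0.02, 0.025                       # m, hypothetical tube radii
R_total = (film_resistance(h=900.0, r=r_in)     # service (heating fluid) side
           + tube_wall_resistance(r_in, r_out, k_wall=20.0)
           + film_resistance(h=2500.0, r=r_out))  # molten salt side

dT = 80.0                        # K, hypothetical driving temperature difference
q_per_metre = dT / R_total       # W per metre of helical tube
L_tube = 30.0                    # m of tube per reactor (hypothetical)
Q_per_reactor = q_per_metre * L_tube

Q_required = 2.0e6               # W, hypothetical duty for a given H2 production rate
print(math.ceil(Q_required / Q_per_reactor), "reactors (hypothetical figures)")
```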

Keywords: heat transfer, Cu-Cl cycle, hydrogen production, oxygen, clean energy

Procedia PDF Downloads 261
2171 Provotyping Futures Through Design

Authors: Elisabetta Cianfanelli, Maria Claudia Coppola, Margherita Tufarelli

Abstract:

Design practices throughout history return a critical understanding of society, since they have always conveyed values and meanings aimed at (re)framing reality by acting in everyday life: here, design gains a cultural and normative character, since its artifacts, services, and environments hold the power to intercept, influence and inspire thoughts, behaviors, and relationships. In this sense, design can be persuasive, engaging in the production of worlds and, as such, acting in the space between poietics and politics, so that chasing preferable futures and their aesthetic strategies becomes a matter full of political responsibility. This resonates with contemporary landscapes of radical interdependencies challenging designers to focus on complex socio-technical systems and to better support values such as equality and justice for both humans and nonhumans. In fact, it is in times of crisis and structural uncertainty that designers turn into visionaries at the service of society, envisioning scenarios and dwelling in the territories of imagination to conceive new fictions and frictions to be added to the thickness of the real. Here, design’s main tasks are to develop options, to increase the variety of choices, to cultivate its role as scout, jester, agent provocateur for the public, so that design for transformation emerges, making an explicit commitment to society and furthering structural change in a proactive and synergic manner. However, the exploration of possible futures is both a trap and a trampoline because, although it embodies a radical research tool, it raises various challenges when the design process goes further in the translation of such a vision into an artefact - whether tangible or intangible - through which it should deliver that bit of future into everyday experience. Today designers are developing new tools and practices to tackle current wicked challenges, combining their approaches with other disciplinary domains: futuring through design thus rises from research strands like speculative design, design fiction, and critical design, where the blending of design approaches and futures thinking brings an action-oriented and product-based approach to strategic insights. The contribution is positioned at the intersection of those approaches, aiming at discussing design’s tools of inquiry through which it is possible to grasp the agency of imagined futures in present time. Since futures are not remote, they actively participate in creating path-dependent decisions, crystallized into designed artifacts par excellence, prototypes, and their conceptual other, provotypes: with both being unfinished and multifaceted, the first are effective in reiterating solutions to problems already framed, while the second prove to be useful when the goal is to explore and break boundaries, bringing preferable futures closer. By focusing on some provotypes throughout history which challenged markets and, above all, social and cultural structures, the contribution’s final aim is understanding the knowledge produced by provotypes, understood as design spaces where design’s humanistic side might help develop a deeper sensibility about uncertainty and, most of all, the unfinished character of societal artifacts, whose experimentation would leave marks and traces to build up f(r)ictions as vital sparks of plurality and collective life.

Keywords: speculative design, provotypes, design knowledge, political theory

Procedia PDF Downloads 131
2170 Compact Dual-band 4-MIMO Antenna Elements for 5G Mobile Applications

Authors: Fayad Ghawbar

Abstract:

The significance of the Multiple Input Multiple Output (MIMO) system in 5G wireless communication is that it enhances channel capacity and provides a high data rate, resulting in a need for dual polarization in the vertical and horizontal planes. Furthermore, size reduction is critical in a MIMO system to deploy more antenna elements, requiring a compact, low-profile design. A compact dual-band 4-MIMO antenna system with pattern and polarization diversity is presented in this paper. The proposed single antenna structure has been designed using two antenna layers, with a C shape in the front layer and a partial slot with a U-shaped cut in the ground to enhance isolation. The single antenna is printed on an FR4 dielectric substrate with an overall size of 18 mm × 18 mm × 1.6 mm. The 4-MIMO antenna elements were printed orthogonally on an FR4 substrate with dimensions of 36 mm × 36 mm × 1.6 mm and zero edge-to-edge separation distance. The proposed compact 4-MIMO antenna elements resonate at 3.4-3.6 GHz and 4.8-5 GHz. The measured and simulated S-parameters agree, especially in the lower band, with a slight frequency shift of the measured results in the upper band due to fabrication imperfections. The proposed design shows isolation above -15 dB and -22 dB across the 4-MIMO elements. The MIMO diversity performance has been evaluated in terms of efficiency, ECC, DG, TARC, and CCL. The total and radiation efficiencies were above 50% for all antenna elements in both frequency bands. The ECC values were lower than 0.10, and the DG results were about 9.95 dB for all antenna elements. TARC results exhibited values lower than 0 dB, with values lower than -25 dB for all MIMO elements in the dual bands. Moreover, the channel capacity losses in the MIMO system were characterized using CCL, with values lower than 0.4 bits/s/Hz.
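The diversity metrics quoted above follow standard definitions; as a minimal sketch (not the authors' code), the snippet below computes the envelope correlation coefficient of a two-port element pair from S-parameters, under the usual lossless-antenna assumption, together with the corresponding diversity gain. The example S-parameter values are hypothetical.

```python
import numpy as np

def ecc_from_sparams(s11, s12, s21, s22):
    """Envelope correlation coefficient of a two-port antenna pair from
    S-parameters (assumes lossless antennas)."""
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s22) ** 2 - abs(s12) ** 2)
    return num / den

def diversity_gain(ecc):
    """Apparent diversity gain (dB) of a two-branch selection-combining scheme."""
    return 10.0 * np.sqrt(1.0 - ecc ** 2)

# Hypothetical complex S-parameters at a single frequency point (linear scale)
s11 = 0.18 * np.exp(1j * 0.4)
s22 = 0.20 * np.exp(-1j * 0.6)
s12 = s21 = 0.05 * np.exp(1j * 1.1)   # roughly -26 dB of mutual coupling

ecc = ecc_from_sparams(s11, s12, s21, s22)
print(f"ECC = {ecc:.4f}, DG = {diversity_gain(ecc):.2f} dB")
```

For well-isolated elements the ECC stays far below 0.1 and the diversity gain approaches 10 dB, which is consistent with the values reported above.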

Keywords: compact antennas, MIMO antenna system, 5G communication, dual band, ECC, DG, TARC

Procedia PDF Downloads 143
2169 Partition of Nonylphenol between Different Compartment for Mother-Fetus Pairs and Health Effects of Newborns

Authors: Chun-Hao Lai, Yu-Fang Huang, Pei-Wei Wang, Meng-Han Lin, Mei-Lien Chen

Abstract:

Nonylphenol (NP) is a degradation product of nonylphenol ethoxylates (NPEOs). It is a well-known endocrine disruptor which may cause estrogenic effects. The growing fetus and infants are more vulnerable to NP exposure than adults, so it is important to know the levels and influences of prenatal exposure to NP. The aims of this study were (1) to determine the levels of prenatal exposure among Taiwanese, (2) to evaluate the potential risk for infants who were breastfed and exposed to NP through the milk, and (3) to investigate the correlation between birth outcomes and prenatal exposure to NP. We analyzed thirty-one pairs of maternal urine, placenta and first-month breast milk samples by high-performance liquid chromatography coupled with a fluorescence detector. The questionnaire included socio-demographics, lifestyle, delivery method, dietary and work history. Information about the birth outcomes was obtained from medical records. The daily intake of NP from breast milk was calculated using deterministic and probabilistic risk assessment methods. The geometric means and geometric standard deviations of NP levels in placenta and breast milk in the first month were 31.2 (1.8) ng/g and 17.2 (1.6) ng/g, respectively. The median daily intake of NP from breast milk was 1.33 μg/kg-bw/day in the first month. We found a negative association between NP levels in the placenta and birth height, and a negative correlation between maternal urine NP levels and birth weight. This study provides the NP exposure profile among Taiwanese pregnant women and the daily intake of NP in Taiwanese infants. Prenatal exposure to higher levels of NP may increase the risk of lower birth weight and shorter birth height.
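The deterministic intake calculation mentioned above takes the usual form intake = concentration × consumption rate / body weight; the sketch below illustrates it with a hypothetical milk consumption rate and infant body weight (assumptions for illustration only, not values reported in this study).

```python
def daily_intake_ug_per_kg_bw(conc_ng_per_g, milk_g_per_day, body_weight_kg):
    """Estimated daily intake (ug/kg-bw/day) of a contaminant via breast milk."""
    intake_ng_per_day = conc_ng_per_g * milk_g_per_day
    return intake_ng_per_day / 1000.0 / body_weight_kg   # ng -> ug

# Hypothetical one-month-old infant: 600 g of milk per day, 4.5 kg body weight
print(daily_intake_ug_per_kg_bw(conc_ng_per_g=17.2,
                                milk_g_per_day=600.0,
                                body_weight_kg=4.5))      # about 2.3 ug/kg-bw/day
```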

Keywords: nonylphenol, mother, fetus, placenta, breast milk, urine

Procedia PDF Downloads 234
2168 An Intelligent Prediction Method for Annular Pressure Driven by Mechanism and Data

Authors: Zhaopeng Zhu, Xianzhi Song, Gensheng Li, Shuo Zhu, Shiming Duan, Xuezhe Yao

Abstract:

Accurate calculation of wellbore pressure is of great significance for preventing wellbore risk during drilling. The traditional mechanism model needs many iterative solving procedures in the calculation process, which reduces calculation efficiency and makes it difficult to meet the demand for dynamic control of wellbore pressure. In recent years, many scholars have introduced artificial intelligence algorithms into wellbore pressure calculation, which significantly improves the calculation efficiency and accuracy of wellbore pressure. However, due to the 'black box' property of intelligent algorithms, existing intelligent calculation models of wellbore pressure struggle to perform outside the scope of the training data and overreact to data noise, often resulting in abnormal calculation results. In this study, the multiphase flow mechanism is embedded into the objective function of the neural network model as a constraint condition, and an intelligent prediction model of wellbore pressure under this constraint is established based on more than 400,000 sets of pressure measurement while drilling (MPD) data. The constraint imposed by the multiphase flow mechanism makes the prediction results of the neural network model more consistent with the distribution law of wellbore pressure, which overcomes the black-box attribute of the neural network model to some extent. The main outcome is that the accuracy on the independent test data set is further improved and abnormal calculated values essentially disappear. This method is a prediction approach driven jointly by MPD data and the multiphase flow mechanism, and it is the main way to predict wellbore pressure accurately and efficiently in the future.
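A minimal sketch of the kind of mechanism-constrained objective described above is given below in PyTorch: the data-misfit term is augmented with a penalty that pulls the network's predictions toward a placeholder multiphase-flow pressure relation. The network layout, the placeholder mechanism function and the penalty weight are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PressureNet(nn.Module):
    """Small fully connected network mapping drilling features to annular pressure."""
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

def mechanism_pressure(x):
    """Placeholder mechanism estimate (hydrostatic plus frictional terms).
    In practice this would encode the multiphase hydraulics model used as the constraint."""
    depth, mud_density, friction_grad = x[:, 0:1], x[:, 1:2], x[:, 2:3]
    g = 9.81
    return mud_density * g * depth + friction_grad * depth

def constrained_loss(model, x, p_measured, lam=0.1):
    """Data misfit plus a mechanism-consistency penalty (lam is a tunable weight)."""
    p_pred = model(x)
    data_term = nn.functional.mse_loss(p_pred, p_measured)
    physics_term = nn.functional.mse_loss(p_pred, mechanism_pressure(x))
    return data_term + lam * physics_term

# Hypothetical training step on a batch of MPD records
model = PressureNet(n_features=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(256, 3) * torch.tensor([3000.0, 1500.0, 50.0])  # depth, density, friction gradient
p = mechanism_pressure(x) + 1e5 * torch.randn(256, 1)          # synthetic "measurements"
opt.zero_grad()
loss = constrained_loss(model, x, p)
loss.backward()
opt.step()
```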

Keywords: multiphase flow mechanism, pressure while drilling data, wellbore pressure, mechanism constraints, combined drive

Procedia PDF Downloads 174
2167 Development of Liquefaction-Induced Ground Damage Maps for the Wairau Plains, New Zealand

Authors: Omer Altaf, Liam Wotherspoon, Rolando Orense

Abstract:

The Wairau Plains are located in the north-east of the South Island of New Zealand, in the region of Marlborough. The region is cut by many active crustal faults, such as the Wairau, Awatere, and Clarence faults, which give rise to frequent seismic events. This paper presents the preliminary results of an overall project in which liquefaction-induced ground damage maps are developed for the Wairau Plains based on the Ministry of Business, Innovation and Employment NZ guidance. A suite of maps has been developed in relation to the level of detail that was available to inform the liquefaction hazard mapping. Maps at the coarsest level of detail make use of regional geologic information, applying semi-quantitative criteria based on geological age, design peak ground accelerations and depth to the water table. The next level of detail incorporates higher-resolution surface geomorphologic characteristics to better delineate potentially liquefiable and non-liquefiable deposits across the region. The most detailed assessment utilised CPT sounding data to develop ground damage response curves for areas across the region and provide a finer level of categorisation of liquefaction vulnerability. Linking these with design-level earthquakes defined through NZGS guidelines will enable detailed classification to be carried out at CPT investigation locations, from very low through to high liquefaction vulnerability. To update classifications to these detailed levels, CPT investigations in geomorphic regions are grouped together to provide an indication of the representative performance of the soils in these areas, making use of the geomorphic mapping outlined above.

Keywords: hazard, liquefaction, mapping, seismicity

Procedia PDF Downloads 139
2166 Students' and Teachers' Perceptions about Interactive Learning in Teaching a Health Promotion Course: Implications for Nursing Education and Practice

Authors: Ahlam Alnatour

Abstract:

Background: To our knowledge, there is a lack of studies that describe the experience of studying health promotion courses using an interactive approach and compare students' and teachers' perceptions about this method of teaching. The purpose of this study is to provide a comparison between student and teacher experiences and perspectives in learning a health promotion course using interactive learning. Design: A descriptive qualitative design was used to provide an in-depth description and understanding of students' and teachers' experiences and perceptions of learning health promotion courses using interactive learning. Study Participants: Fourteen students (seven male, seven female) and eight teachers at a governmental university in northern Jordan participated in this study. Data Analysis: A conventional content analysis approach was applied to the participants' transcripts to gain an in-depth description of both students' and teachers' experiences. Results: The main themes that emerged from the data analysis describe the students' and teachers' perceptions of the interactive health promotion class: teachers' and students' positive experience of adopting interactive learning, advantages and benefits of interactive teaching, barriers to interactive teaching, and suggestions for improvement. Conclusion: Both teachers and students reflected positive attitudes toward interactive learning. Interactive learning helped students engage in the learning process physically and cognitively. It enhanced the learning process, promoted student attention, improved final performance, and satisfied teachers and students accordingly. The interactive learning approach should be adopted in teaching graduate and undergraduate courses using updated and contemporary strategies, and nursing scholars and educators should be motivated to integrate interactive learning into teaching different nursing courses.

Keywords: interactive learning, nursing, health promotion, qualitative study

Procedia PDF Downloads 250
2165 An Integrated Water Resources Management Approach to Evaluate Effects of Transportation Projects in Urbanized Territories

Authors: Berna Çalışkan

Abstract:

Integrated water management is a collaborative approach to planning that brings together institutions that influence all elements of the water cycle: waterways, watershed characteristics, wetlands, ponds, lakes, floodplain areas and stream channel structure. It encourages collaboration where it will be beneficial and links water planning with other planning processes that contribute to improving sustainable urban development and liveability. Hydraulic considerations can influence the selection of a highway corridor and the alternate routes within the corridor, as well as works such as widening a roadway, replacing a culvert, or repairing a bridge. Because of this, the type and amount of data needed for planning studies can vary widely depending on such elements as environmental considerations, the class of the proposed highway, the state of land use development, and individual site conditions. The extraction of drainage networks provides helpful preliminary drainage data from the digital elevation model (DEM). A case study was carried out in the study area using the Arc Hydro extension within ArcGIS, which provides the means for processing and presenting a spatially referenced stream model. The study area's flow routing, stream levels, segmentation and drainage point processing can be obtained using the DEM as the input surface raster. These processes integrate hydrologic and engineering research with environmental modeling in a multi-disciplinary program designed to provide decision makers with a science-based understanding of, and innovative tools for, the development of an interdisciplinary and multi-level approach. This research supports transport project planning and construction phases by analyzing surficial water flow, high-level streams and wetland sites for transportation infrastructure planning, implementation, maintenance, monitoring and long-term evaluation, to better face the challenges associated with effective management and to deal with low, medium and high levels of impact. Transport projects are frequently perceived as critical to the 'success' of major urban, metropolitan, regional and/or national development because of their potential to effect significant socio-economic and territorial change. In this context, sustaining and developing economic and social activities depend on having sufficient water resources management. The results of our research provide a workflow to build a stream network and to classify a suitability map according to stream levels. Transportation projects can be established, developed and delivered more effectively by selecting the best locations and by reducing construction and maintenance costs through cost-effective solutions for drainage, landslide and flood control. According to the model findings, a field study should be done to fill gaps and check for errors. In future research, this study can be extended to determine and prevent possible damage to sensitive areas and vulnerable zones, supported by field investigations.
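Outside of ArcGIS, the drainage-network extraction step described above (flow direction and flow accumulation from a DEM) can be sketched with a plain D8 scheme, as below; this is an illustrative toy example, not the Arc Hydro implementation, and the small DEM is hypothetical.

```python
import numpy as np

def d8_flow_accumulation(dem):
    """D8 flow routing: each cell drains to its steepest downslope neighbour;
    accumulation counts how many cells drain through each cell (itself included)."""
    rows, cols = dem.shape
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
    receiver = -np.ones((rows, cols, 2), dtype=int)   # downslope target of each cell

    for r in range(rows):
        for c in range(cols):
            best_drop, best = 0.0, None
            for dr, dc in neighbours:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if drop > best_drop:
                        best_drop, best = drop, (rr, cc)
            if best is not None:
                receiver[r, c] = best

    # Process cells from highest to lowest so upstream counts are ready first.
    acc = np.ones((rows, cols))
    for idx in np.argsort(dem, axis=None)[::-1]:
        r, c = np.unravel_index(idx, dem.shape)
        rr, cc = receiver[r, c]
        if rr >= 0:
            acc[rr, cc] += acc[r, c]
    return acc

# Toy DEM sloping towards the lower-right corner (hypothetical elevations)
dem = np.array([[9., 8., 7., 6.],
                [8., 7., 6., 5.],
                [7., 6., 5., 4.],
                [6., 5., 4., 1.]])
print(d8_flow_accumulation(dem))   # high values mark the emerging stream cells
```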

Keywords: water resources management, hydro tool, water protection, transportation

Procedia PDF Downloads 56
2164 Issues in Organizational Assessment: The Case of Frustration Tolerance Measurement in Mexico

Authors: David Ruiz, Carlos Nava, Roberto Carbajal

Abstract:

The psychological profile has become one of the most important sources of information when it comes to individual selection and the hiring process in any organization. Psychological instruments are used to collect data about variables that are considered critically important for performance at work. However, because of conceptual chaos in organizational psychology, most of the information provided by psychological testing is not directly useful for Mexican human resources professionals when making hiring decisions. The aims of this paper are 1) to underline the lack of conceptual precision in the theoretical foundations of testing in Mexico and 2) to present a reliability and validity analysis of a frustration tolerance instrument created as an alternative to heuristically conducted individual assessment in organizations. First, a description of assessment conditions in Mexico is made. Second, an instrument and a theoretical framework are presented as an alternative to the assessment practices in the country. A total of 65 psychology students from the Iztacala Superior Studies Faculty were assessed. Cronbach's alpha coefficient was calculated and an exploratory factor analysis was carried out to test the scale's unidimensionality. The reliability analysis revealed good internal consistency of the scale (Cronbach's α = 0.825). Factor analysis produced 4 factors for the scale; however, the factor loadings and explained variation support the scale's unidimensionality. It is concluded that the instrument has good psychometric properties that will allow human resources professionals to collect useful data. Different possibilities for conducting psychological assessment are suggested for future development.
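For reference, the internal-consistency statistic reported above can be computed directly from an item-score matrix; the sketch below applies the standard Cronbach's alpha formula to hypothetical Likert-type data (it is not the authors' analysis script).

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical Likert-type responses from 6 participants on 4 items
data = [[4, 5, 4, 4],
        [2, 3, 3, 2],
        [5, 5, 4, 5],
        [3, 3, 2, 3],
        [4, 4, 4, 5],
        [1, 2, 2, 1]]
print(round(cronbach_alpha(data), 3))
```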

Keywords: psychological assessment, frustration tolerance, human resources, organizational psychology

Procedia PDF Downloads 309
2163 Multiscale Hub: An Open-Source Framework for Practical Atomistic-To-Continuum Coupling

Authors: Masoud Safdari, Jacob Fish

Abstract:

Despite the vast amount of existing theoretical knowledge, the implementation of a universal multiscale modeling, analysis, and simulation software framework remains challenging. Existing multiscale software and solutions are often domain-specific, closed-source and demand a high level of experience and skill in both multiscale analysis and programming. Furthermore, tools currently available for Atomistic-to-Continuum (AtC) multiscaling are developed under assumptions such as users' access to high-performance computing facilities. These issues, plus many other challenges, have reduced the adoption of multiscale methods in academia and especially in industry. In the current work, we introduce Multiscale Hub (MsHub), an effort towards making AtC more accessible through cloud services. As a joint effort between academia and industry, MsHub provides a universal web-enabled framework for practical multiscaling. Developed on top of the universally acclaimed scientific programming language Python, the package currently provides an open-source, comprehensive, easy-to-use framework for AtC coupling. MsHub offers an easy-to-use interface to prominent molecular dynamics and multiphysics continuum mechanics packages such as LAMMPS and MFEM (a free, lightweight, scalable C++ library for finite element methods). In this work, we first report on the design philosophy of MsHub and the challenges and issues faced during its implementation. MsHub takes advantage of a comprehensive set of tools and algorithms developed for AtC that can be used for a variety of governing physics. We then briefly report the key AtC algorithms implemented in MsHub. Finally, we conclude with a few examples illustrating the capabilities of the package and its future directions.

Keywords: atomistic, continuum, coupling, multiscale

Procedia PDF Downloads 177
2162 Synthesis and Characterization of Cassava Starch-Zinc Nanocomposite Film for Food Packaging Application

Authors: Adeshina Fadeyibi

Abstract:

The application of pure thermoplastic films in food packaging is greatly limited because of their poor service performance, which is often enhanced by the addition of organic or inorganic particles in the range of 1–100 nm. Thus, this study was conducted to develop cassava starch zinc-nanocomposite films for applications in food packaging. Three blending ratios of 1000 g cassava starch, 45–55% (w/w) glycerol and 0–2% (w/w) zinc nanoparticles were formulated, mixed and mechanically homogenized to form the nanocomposite. Thermoplastic films were prepared from a dispersed mixture of 24 g of the nanocomposite and 600 ml of distilled water, heated to 90 °C for 30 minutes. Plastic molds of 350 × 180 mm and 8, 10 and 12 mm depths were used for film casting and drying at 60 °C and 80% RH for 24 hours. The average thicknesses of the dried films were found to be 15, 16 and 17 µm. The films were characterized based on their barrier, thermal, mechanical and structural properties. The results show that the oxygen and water vapor barrier properties increased with glycerol concentration and decreased with thickness, while the full width at half maximum (FWHM) and d-spacing increased with thickness. The higher degree of d-spacing obtained is a consequence of greater polymer intercalation and exfoliation. Also, only 2% weight degradation was observed when the films were exposed to temperatures between 30 and 60 °C, indicating that they are thermally stable and can be used for packaging applications in the tropics. The mechanical properties of the films were higher than those of the pure thermoplastic and comparable with those of LDPE films. The information on the characterized attributes and the optimization of the cassava starch zinc-nanocomposite films justifies their application as an alternative to pure thermoplastic and conventional films for food packaging.

Keywords: synthesis, characterization, cassava starch, nanocomposite film, packaging

Procedia PDF Downloads 119
2161 Thermolysin Entrapment in a Gold Nanoparticles/Polymer Composite: Construction of an Efficient Biosensor for Ochratoxin a Detection

Authors: Fatma Dridi, Mouna Marrakchi, Mohammed Gargouri, Alvaro Garcia Cruz, Sergei V. Dzyadevych, Francis Vocanson, Joëlle Saulnier, Nicole Jaffrezic-Renault, Florence Lagarde

Abstract:

An original method has been successfully developed for the immobilization of thermolysin onto gold interdigitated electrodes for the detection of ochratoxin A (OTA) in olive oil samples. A mix of polyvinyl alcohol (PVA), polyethylenimine (PEI) and gold nanoparticles (AuNPs) was used. Cross-linking of the sensor chips was carried out in a saturated glutaraldehyde (GA) vapor atmosphere in order to render the two polymers water-stable. The performance of the AuNPs/(PVA/PEI) modified electrode was compared to a traditional enzyme immobilization method using bovine serum albumin (BSA). Atomic force microscopy (AFM) experiments were employed to provide useful insight into the structure and morphology of the immobilized thermolysin composite membranes. The enzyme immobilization method influences the topography and texture of the deposited layer. Biosensor optimization and analytical characteristics were studied. Under optimal conditions, the AuNPs/(PVA/PEI) modified electrode showed a higher increase in sensitivity: a 700-fold enhancement factor could be achieved, with a detection limit of 1 nM. The newly designed OTA biosensors showed long-term stability and good reproducibility. The relevance of the method was evaluated using commercial spiked olive oil samples. No pretreatment of the sample was needed for testing, and no matrix effect was observed. Recovery values were close to 100%, demonstrating the suitability of the proposed method for OTA screening in olive oil.

Keywords: thermolysin, ochratoxin A, polyvinyl alcohol, polyethylenimine, gold nanoparticles, olive oil

Procedia PDF Downloads 590
2160 Fluid-Structure Interaction Analysis of a Vertical Axis Wind Turbine Blade Made with Natural Fiber Based Composite Material

Authors: Ivan D. Ortega, Juan D. Castro, Alberto Pertuz, Manuel Martinez

Abstract:

One of the issues raised when scientists discuss climate change is the necessity of utilizing renewable sources of energy. In this category there are many approaches to the problem; one of them is wind energy and wind turbines, whose designs have changed frequently over many years in an attempt to achieve better overall performance under different conditions. From that evolution we get the two main types known today: vertical and horizontal axis wind turbines, abbreviated VAWT and HAWT, respectively. This research aims to understand how well suited a composite material made with fibers of natural origin, which is still in development, is for implementation in vertical axis wind turbine blades under certain wind loads. The study consisted of acquiring the mechanical properties of the materials to be used, which were Bactris guineensis, also known as pama de lata in Colombia, and an adhesive acting as the matrix, which had not previously been studied to the extent required for this project. Then, a simplified 3D model of the airfoil was developed and tested under preliminary loads using finite element analysis (FEA); these loads were acquired in the Colombian Chicamocha Canyon. Afterwards, a more realistic pressure profile was obtained using computational fluid dynamics, which took into account the 3D shape of the complete blade and its rotation. Finally, the blade model was subjected to the wind loads using one-way fluid-structure interaction (FSI) and its behavior analyzed to draw conclusions. The observed overall results were positive, since the material behaved largely as expected. The data suggest the material could be very useful in this kind of application in small to medium-sized turbines if it is given more attention and time to develop.

Keywords: CFD, FEA, FSI, natural fiber, VAWT

Procedia PDF Downloads 226
2159 Bias-Corrected Estimation Methods for Receiver Operating Characteristic Surface

Authors: Khanh To Duc, Monica Chiogna, Gianfranco Adimari

Abstract:

With three diagnostic categories, assessment of the performance of diagnostic tests is achieved by the analysis of the receiver operating characteristic (ROC) surface, which generalizes the ROC curve for binary diagnostic outcomes. The volume under the ROC surface (VUS) is a summary index usually employed for measuring the overall diagnostic accuracy. When the true disease status can be exactly assessed by means of a gold standard (GS) test, unbiased nonparametric estimators of the ROC surface and VUS are easily obtained. In practice, unfortunately, disease status verification via the GS test could be unavailable for all study subjects, due to the expensiveness or invasiveness of the GS test. Thus, often only a subset of patients undergoes disease verification. Statistical evaluations of diagnostic accuracy based only on data from subjects with verified disease status are typically biased. This bias is known as verification bias. Here, we consider the problem of correcting for verification bias when continuous diagnostic tests for three-class disease status are considered. We assume that selection for disease verification does not depend on disease status, given test results and other observed covariates, i.e., we assume that the true disease status, when missing, is missing at random. Under this assumption, we discuss several solutions for ROC surface analysis based on imputation and re-weighting methods. In particular, verification bias-corrected estimators of the ROC surface and of VUS are proposed, namely, full imputation, mean score imputation, inverse probability weighting and semiparametric efficient estimators. Consistency and asymptotic normality of the proposed estimators are established, and their finite sample behavior is investigated by means of Monte Carlo simulation studies. Two illustrations using real datasets are also given.
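Under complete, GS-verified data, the unbiased nonparametric VUS estimator mentioned above is simply the proportion of correctly ordered triples, taking one subject from each disease class; the sketch below illustrates that estimator on hypothetical, tie-free continuous test results (the bias-corrected estimators studied in the paper build on imputed or re-weighted versions of the same quantity).

```python
import numpy as np

def vus_nonparametric(x1, x2, x3):
    """Empirical VUS: proportion of triples (one subject per ordered class) that
    are correctly ordered, assuming continuous test results without ties."""
    count = 0
    for a in x1:
        for b in x2:
            for c in x3:
                if a < b < c:
                    count += 1
    return count / (len(x1) * len(x2) * len(x3))

rng = np.random.default_rng(0)
# Hypothetical test results for three ordered disease classes
x1 = rng.normal(0.0, 1.0, 40)   # class 1 (e.g. non-diseased)
x2 = rng.normal(1.0, 1.0, 35)   # class 2 (intermediate)
x3 = rng.normal(2.0, 1.0, 30)   # class 3 (diseased)
print(round(vus_nonparametric(x1, x2, x3), 3))  # a useless test would give about 1/6
```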

Keywords: imputation, missing at random, inverse probability weighting, ROC surface analysis

Procedia PDF Downloads 416
2158 The Response of the Accumulated Biomass and the Efficiency of Water Use in Five Varieties of Durum Wheat Lines under Water Stress

Authors: Fellah Sihem

Abstract:

The optimal use of soil moisture by a crop is related to the leaf area index established over the cycle and its modulation according to the prevailing stress intensity. For a given stock of water in the soil, an adapted, water-saving cultivar is one that shows no luxury consumption during pre-anthesis; it modulates the leaf area index to regulate transpiration according to the degree of its water supply. In water-saving plants, avoidance of dehydration is related to the reduction of water loss by cuticular and stomatal pathways. Muchow and Sinclair reported that the relative water content (RWC) test is considered the best indicator of leaf water status. The search for indicators of the ability of the plant to make good use of water under water stress is a prerequisite for progress in improving performance under water stress. This experiment aims to characterize a set of durum wheat varieties, tested in pots under different levels of water stress, for leaf area, relative water content, cell integrity, accumulated biomass and water use efficiency. The experiment was conducted during the 2005/2006 season at the Agricultural Research Station of the Field Crop Institute of Setif, under semi-controlled conditions. Five genotypes of durum wheat (Triticum durum Desf.) were evaluated for their ability to tolerate moderate and severe water stress. The results showed that genotypes respond differently to water stress. Dry matter accumulation and growth rate varied among genotypes and were significantly reduced. Under severe water stress, the biomass accumulated by Boussalam was the least affected.
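For clarity, the relative water content test referred to above is conventionally computed from the fresh, turgid and oven-dry weights of a leaf sample as

```latex
\mathrm{RWC}\ (\%) \;=\; \frac{W_{\mathrm{fresh}} - W_{\mathrm{dry}}}{W_{\mathrm{turgid}} - W_{\mathrm{dry}}} \times 100,
```

where the turgid weight is obtained after full rehydration of the sample; this is the generic formula, not a value or protocol specific to this study.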

Keywords: water stress, triticum durum, biomass, cell membrane integrity, relative water content

Procedia PDF Downloads 469
2157 Learning-by-Heart vs. Learning by Thinking: Fostering Thinking in Foreign Language Learning - A Comparison of Two Approaches

Authors: Danijela Vranješ, Nataša Vukajlović

Abstract:

Turning to learner-centered teaching instead of the teacher-centered approach brought a whole new perspective to the process of teaching and learning and set a new goal for improving the educational process itself. However, recently a tremendous decline in students' performance on various standardized tests can be observed, above all on the PISA test. Learner-centeredness on its own is not enough anymore: the students' ability to think is deteriorating. Especially in foreign language learning, one can encounter a lot of learning by heart: whether it is grammar or vocabulary, teachers often seem to judge students' success merely on how well they can recall a specific word, phrase, or grammar rule, but they rarely aim to foster their ability to think. Convinced that foreign language teaching can do both, this research aims to discover how two different approaches to teaching a foreign language foster the students' ability to think, as well as to what degree they help students reach the state-determined level of the foreign language at the end of the semester, as defined in the Common European Framework. For this purpose, two different curricula were developed: one is a traditional, learner-centered foreign language curriculum that aims at teaching the four competences as defined in the Common European Framework and serves as a control variable, whereas the second one has been enriched with various thinking routines and aims at teaching the foreign language as a means to communicate ideas and thoughts rather than reducing it to the four competences. Moreover, two types of tests were created for each approach, each based on the content taught during the semester. One aims to test the students' competences as defined in the CEFR, and the other aims to test the ability of students to draw on the knowledge gained and come to their own conclusions based on the content taught during the semester. As it is an ongoing study, the results are yet to be interpreted.

Keywords: common european framework of reference, foreign language learning, foreign language teaching, testing and assignment

Procedia PDF Downloads 107
2156 Facile Fabrication of TiO₂NT/Fe₂O₃@Ag₂CO₃ Nanocomposite and Its Highly Efficient Visible Light Photocatalytic and Antibacterial Activity

Authors: Amal A. Al-Kahlawy, Heba H. El-Maghrabi

Abstract:

Due to the increasing need for environmental protection and energy, new materials are under extensive investigation. Among others, TiO2 nanotube (TNT) nanocomposites with iron oxide and silver carbonate are promising alternatives as high-efficiency visible light photocatalysts due to their unique properties and superior charge transport. Our efforts in this domain aim at the construction of a novel TiO2NT/Fe2O3@Ag2CO3 nanocomposite. The structure, surface morphology, chemical composition and optical properties were characterized by X-ray diffraction (XRD), Raman spectroscopy, Fourier-transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), energy dispersive X-ray spectrometry (EDS), transmission electron microscopy (TEM), selected area electron diffraction (SAED) and UV–vis diffuse reflectance spectroscopy (DRS). The XRD results confirm the interaction of TiO2-NT with iron oxide. This novel nanocomposite shows remarkably enhanced performance for the photodegradation of phenol compounds. The experimental data show promising photocatalytic activity: in particular, a maximum of 450 mg/g was removed within 60 min under solar light irradiation, with a degradation efficiency of 99.5%. The high photocatalytic activity of the nanocomposite is found to be related to increased adsorption of chemical species, enhanced light absorption and efficient charge separation and transfer. Finally, the designed TiO2NT/Fe2O3@Ag2CO3 nanocomposite has a high degree of sustainability and could have potential application in the industrial treatment of wastewater containing toxic organic materials.

Keywords: nanocomposite, photocatalyst, solar energy, titanium dioxide nanotubes

Procedia PDF Downloads 247
2155 Lipase-Catalyzed Synthesis of Novel Nutraceutical Structured Lipids in Non-Conventional Media

Authors: Selim Kermasha

Abstract:

A process for the synthesis of structured lipids (SLs) by the lipase-catalyzed interesterification of selected endogenous edible oils, such as flaxseed oil (FO), and medium-chain triacylglycerols, such as tricaprylin (TC), in non-conventional media (NCM), including organic solvent media (OSM) and solvent-free medium (SFM), was developed. The bioconversion yield of the medium-long-medium-type SLs (MLM-SLs) was monitored as the response, using selected commercial lipases. In order to optimize the interesterification reaction and to establish a model system, a wide range of reaction parameters, including the TC to FO molar ratio, reaction temperature, enzyme concentration, reaction time, agitation speed and initial water activity, were investigated. The model system was monitored with the use of multiple response surface methodology (RSM) to obtain significant models for the responses and to optimize the interesterification reaction, on the basis of a fractional factorial design (FFD) with centre points at selected variable levels. Based on the objective of each response, the appropriate combination of process parameter levels and the solutions that met the defined criteria were also provided by means of a desirability function. The synthesized novel molecules were structurally characterized using silver-ion reversed-phase high-performance liquid chromatography (RP-HPLC) and atmospheric pressure chemical ionization mass spectrometry (APCI-MS) analyses. The overall experimental findings confirmed the formation of dicaprylyl-linolenyl glycerol, dicaprylyl-oleyl glycerol and dicaprylyl-linoleyl glycerol from the lipase-catalyzed interesterification of FO and TC.

Keywords: enzymatic interesterification, non-conventional media, nutraceuticals, structured lipids

Procedia PDF Downloads 294
2154 Describing Cognitive Decline in Alzheimer's Disease via a Picture Description Writing Task

Authors: Marielle Leijten, Catherine Meulemans, Sven De Maeyer, Luuk Van Waes

Abstract:

For the diagnosis of Alzheimer's disease (AD), a large variety of neuropsychological tests are available. In some of these tests, linguistic processing - both oral and written - is an important factor. Language disturbances might serve as a strong indicator for an underlying neurodegenerative disorder like AD. However, the current diagnostic instruments for language assessment mainly focus on product measures, such as text length or number of errors, ignoring the importance of the process that leads to written or spoken language production. In this study, it is our aim to describe and test differences between cognitively healthy and impaired elderly on the basis of a selection of writing process variables (inter- and intrapersonal characteristics). These process variables are mainly related to pause times, because the number, length, and location of pauses have proven to be an important indicator of the cognitive complexity of a process. Method: Participants enrolled in our research were chosen on the basis of a number of basic criteria necessary to collect reliable writing process data. Furthermore, we opted to match the thirteen cognitively impaired patients (8 MCI and 5 AD) with thirteen cognitively healthy elderly. At the start of the experiment, participants were each given a number of tests, such as the Mini-Mental State Examination (MMSE), the Geriatric Depression Scale (GDS), the forward and backward digit span and the Edinburgh Handedness Inventory (EHI). Also, a questionnaire was used to collect socio-demographic information (age, gender, education) on the subjects as well as more details on their level of computer literacy. The tests and questionnaire were followed by two typing tasks and two picture description tasks. For the typing tasks, participants had to copy (type) characters, words and sentences from a screen, whereas the picture description tasks each consisted of an image they had to describe in a few sentences. Both the typing and the picture description tasks were logged with Inputlog, a keystroke logging tool that allows us to log and time-stamp keystroke activity to reconstruct and describe text production processes. The main rationale behind keystroke logging is that writing fluency and flow reveal traces of the underlying cognitive processes. This explains the analytical focus on pause (length, number, distribution, location, etc.) and revision (number, type, operation, embeddedness, location, etc.) characteristics. As in speech, pause times are seen as indexical of cognitive effort. Results: Preliminary analysis already showed some promising results concerning pause times before, within and after words. For all variables, mixed-effects models were used that included participants as a random effect and MMSE scores, GDS scores and word categories (such as determiners and nouns) as fixed effects. For pause times before and after words, cognitively impaired patients paused longer than healthy elderly. These variables did not show an interaction effect between the group participants belonged to (cognitively impaired or healthy elderly) and word category. However, pause times within words did show an interaction effect, which indicates that pause times within certain word categories differ significantly between patients and healthy elderly.
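As an illustration of the modelling approach summarized in the Results, the sketch below fits a comparable mixed-effects model in Python (statsmodels), with participants as a random effect and group and word category as fixed effects; the data frame and variable names are hypothetical stand-ins for the Inputlog output, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format pause data (one row per word-level pause observation)
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "participant": rng.integers(1, 27, n),                        # 26 participants
    "group": rng.choice(["impaired", "healthy"], n),
    "word_category": rng.choice(["determiner", "noun", "verb"], n),
    "pause_before_ms": rng.lognormal(mean=6.0, sigma=0.5, size=n),
})
df["log_pause"] = np.log(df["pause_before_ms"])    # pause times are heavily skewed

# Random intercept per participant; group x word-category interaction as fixed effects
model = smf.mixedlm("log_pause ~ group * word_category",
                    data=df, groups=df["participant"])
result = model.fit()
print(result.summary())
```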

Keywords: Alzheimer's disease, keystroke logging, matching, writing process

Procedia PDF Downloads 366
2153 Sinhala Sign Language to Grammatically Correct Sentences using NLP

Authors: Anjalika Fernando, Banuka Athuraliya

Abstract:

This paper presents a comprehensive approach for converting Sinhala Sign Language (SSL) into grammatically correct sentences using Natural Language Processing (NLP) techniques in real-time. While previous studies have explored various aspects of SSL translation, the research gap lies in the absence of grammar checking for SSL. This work aims to bridge this gap by proposing a two-stage methodology that leverages deep learning models to detect signs and translate them into coherent sentences, ensuring grammatical accuracy. The first stage of the approach involves the utilization of a Long Short-Term Memory (LSTM) deep learning model to recognize and interpret SSL signs. By training the LSTM model on a dataset of SSL gestures, it learns to accurately classify and translate these signs into textual representations. The LSTM model achieves a commendable accuracy rate of 94%, demonstrating its effectiveness in accurately recognizing and translating SSL gestures. Building upon the successful recognition and translation of SSL signs, the second stage of the methodology focuses on improving the grammatical correctness of the translated sentences. The project employs a Neural Machine Translation (NMT) architecture, consisting of an encoder and decoder with LSTM components, to enhance the syntactical structure of the generated sentences. By training the NMT model on a parallel corpus of grammatically incorrect Sinhala sentences and their corresponding grammatically correct translations, it learns to generate coherent and grammatically accurate sentences. The NMT model achieves an impressive accuracy rate of 98%, affirming its capability to produce linguistically sound translations. The proposed approach offers significant contributions to the field of SSL translation and grammar correction. Addressing the critical issue of grammar checking, it enhances the usability and reliability of SSL translation systems, facilitating effective communication between hearing-impaired and non-sign language users. Furthermore, the integration of deep learning techniques, such as LSTM and NMT, ensures the accuracy and robustness of the translation process. This research holds great potential for practical applications, including educational platforms, accessibility tools, and communication aids for the hearing-impaired. Furthermore, it lays the foundation for future advancements in SSL translation systems, fostering inclusive and equal opportunities for the deaf community. Future work includes expanding the existing datasets to further improve the accuracy and generalization of the SSL translation system. Additionally, the development of a dedicated mobile application would enhance the accessibility and convenience of SSL translation on handheld devices. Furthermore, efforts will be made to enhance the current application for educational purposes, enabling individuals to learn and practice SSL more effectively. Another area of future exploration involves enabling two-way communication, allowing seamless interaction between sign-language users and non-sign-language users. In conclusion, this paper presents a novel approach for converting Sinhala Sign Language gestures into grammatically correct sentences using NLP techniques in real time. The two-stage methodology, comprising an LSTM model for sign detection and translation and an NMT model for grammar correction, achieves high accuracy rates of 94% and 98%, respectively.
By addressing the lack of grammar checking in existing SSL translation research, this work contributes significantly to the development of more accurate and reliable SSL translation systems, thereby fostering effective communication and inclusivity for the hearing-impaired community.
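As a minimal sketch of the first-stage sign recognizer described above (not the authors' implementation), the snippet below builds an LSTM classifier over sequences of per-frame keypoint features in Keras; the clip length, feature dimension and number of sign classes are hypothetical, and the synthetic arrays merely stand in for the SSL gesture dataset.

```python
import numpy as np
from tensorflow.keras import layers, models

# Hypothetical setup: 30-frame gesture clips, 126 keypoint features per frame
# (e.g. two hands x 21 landmarks x 3 coordinates), 40 SSL sign classes.
TIMESTEPS, FEATURES, NUM_SIGNS = 30, 126, 40

model = models.Sequential([
    layers.LSTM(64, return_sequences=True, input_shape=(TIMESTEPS, FEATURES)),
    layers.LSTM(64),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic placeholder data standing in for the SSL gesture dataset
x = np.random.rand(512, TIMESTEPS, FEATURES).astype("float32")
y = np.random.randint(0, NUM_SIGNS, size=512)
model.fit(x, y, epochs=2, batch_size=32, validation_split=0.2)

# Predicted sign indices would then be mapped to glosses and passed to the
# second-stage encoder-decoder grammar-correction model.
print(model.predict(x[:5]).argmax(axis=1))
```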

Keywords: Sinhala sign language, sign Language, NLP, LSTM, NMT

Procedia PDF Downloads 104
2152 Knowledge, Attitude and Practice of Pregnant Women toward Antenatal Care at Public Hospitals in Sana'a City-Yemen

Authors: Abdulfatah Al-Jaradi, Marzoq Ali Odhah, Abdulnasser A. Haza’a

Abstract:

Background: Antenatal care (ANC) can be defined as the care provided by skilled healthcare professionals to pregnant women and adolescent girls to ensure the best health conditions for both mother and baby during pregnancy. The components of ANC include risk identification; prevention and management of pregnancy-related or concurrent diseases; and health education and health promotion. The aim of this study was to assess the knowledge, attitude, and practice of pregnant women regarding antenatal care. Methodology: A descriptive KAP study was conducted in public hospitals in Sana'a City, Yemen. The study population included all pregnant women attending the prenatal department and the clinical outpatient department; the final sample size was 371 pregnant women. A self-administered questionnaire was used to collect the data, and the Statistical Package for the Social Sciences (SPSS) was used for data analysis. Results: Most (79%) of the pregnant women gave correct answers on overall knowledge regarding antenatal care, about two-thirds (67%) demonstrated adequate practice regarding antenatal care, and two-thirds (68%) had a positive attitude. Conclusions and Recommendations: We concluded that there is a significant association between the overall knowledge and practice levels toward antenatal care and the demographic characteristics of pregnant women (place of residence, level of education, husband's support in attending antenatal care, and place of delivery of the last baby), at p ≤ 0.05. We recommend more education and training courses, lectures and education sessions for clinical facilitators focused on ANC, which relies on evidence-based interventions provided to women during pregnancy by skilled healthcare providers such as midwives, doctors, and nurses.

Keywords: antenatal care, knowledge, practice, attitude, pregnant women

Procedia PDF Downloads 189
2151 A Finite Element Study of Laminitis in Horses

Authors: Naeim Akbari Shahkhosravi, Reza Kakavand, Helen M. S. Davies, Amin Komeili

Abstract:

Equine locomotion and performance are significantly affected by hoof health. One of the most critical diseases of the hoof is laminitis, which can lead to lameness in severe cases. This disease involves degradation of the mechanical properties of the laminar junction tissue within the hoof. Therefore, it is essential to investigate the biomechanics of the hoof, focusing specifically on excessive and cumulatively accumulated stresses within the laminar junction tissue. To this end, the current study generated a novel equine hoof finite element (FE) model under dynamic physiological loading conditions, employing a hyperelastic material model. The tissues of the equine hoof were segmented from computed tomography scans of an equine forelimb, including the navicular bone, third phalanx, sole, frog, laminar junction, digital cushion, and the medial, dorsal and lateral wall areas. The inner tissues were connected based on hoof anatomy, and the hoof was subjected to dynamic loading over cyclic strides at the trot. The strain distribution on the hoof wall of the model was compared with published in vivo strain measurements to validate the model. The validated model was then used to study the development of laminitis. The ultimate stress tolerated by the laminar junction before rupture was taken as a stress threshold. Tissue damage was simulated through iterative reduction of the tissue's mechanical properties in the presence of excessive maximum principal stresses. The findings of this investigation revealed how damage initiates from the medial and lateral sides of the tissue and propagates through the dorsal area of the hoof.
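The damage scheme described above, i.e. iteratively reducing tissue stiffness wherever the maximum principal stress exceeds the rupture threshold, can be summarized by the simple loop below; it is a conceptual sketch with hypothetical numbers and a placeholder in place of the FE stress solve, not the model used in the study.

```python
import numpy as np

def update_damage(stress, modulus, threshold, e_initial,
                  knockdown=0.5, min_fraction=0.05):
    """Reduce the elastic modulus of elements whose maximum principal stress exceeds
    the rupture threshold; damaged elements keep a small residual stiffness."""
    damaged = stress > threshold
    modulus = modulus.copy()
    modulus[damaged] = np.maximum(modulus[damaged] * knockdown,
                                  min_fraction * e_initial)
    return modulus, damaged

# Hypothetical laminar-junction element set
n_elements = 1000
E0 = 10.0e6                          # Pa, initial modulus (hypothetical)
modulus = np.full(n_elements, E0)
threshold = 0.8e6                    # Pa, rupture stress (hypothetical)

for cycle in range(5):               # a few loading cycles (strides)
    # Placeholder for the FE solve: stress grows where stiffness has dropped,
    # crudely mimicking load redistribution in the damaged tissue.
    rng = np.random.default_rng(cycle)
    stress = rng.normal(0.5e6, 0.2e6, n_elements) * (E0 / modulus) ** 0.5
    modulus, damaged = update_damage(stress, modulus, threshold, E0)
    print(f"cycle {cycle}: {damaged.sum()} elements exceeded the threshold")
```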

Keywords: horse hoof, laminitis, finite element model, continuous damage

Procedia PDF Downloads 182
2150 Mechanical Activation of a Waste Material Used as Cement Replacement in Soft Soil Stabilisation

Authors: Hassnen M. Jafer, W. Atherton, F. Ruddock, E. Loffil

Abstract:

Waste materials, sometimes called by-product materials, have been increasingly used as construction materials to reduce the usage of cement in different construction projects. In the field of soil stabilisation, waste materials such as pulverised fuel ash (PFA), biomass fly ash (BFA) and sewage sludge ash (SSA) have been used since the 1960s. In this study, a particular type of waste material (WM) was used in soft soil stabilisation as a cement replacement, and the effect of mechanical activation, by grinding, on the performance of this WM was also investigated. The WM used in this study is a by-product resulting from incineration processes between 1000 and 1200 °C in a domestic power generation plant using a fluidized bed combustion system. The stabilised soil in this study was an intermediate-plasticity silty clayey soil with medium organic matter content. The experimental work was conducted first to find the optimum content of WM by carrying out Atterberg limits and unconfined compressive strength (UCS) tests on soil samples containing 0, 3, 6, 9, 12, and 15% of WM by the dry weight of soil. The UCS test was carried out on specimens subjected to different curing periods (zero, 7, 14, and 28 days). Moreover, the optimum percentage of WM was subjected to different periods of grinding (10, 20, 30 and 40 minutes) using a mortar and pestle grinder to find the effect of grinding and its optimum duration, as evaluated by the UCS test. The results indicated that the WM used in this study improved the physical properties of the soft soil: the plasticity index (IP) decreased significantly from 21 to 13.10 with 15% of WM. Meanwhile, the results of the UCS test indicated that 12% of WM was the optimum, and this percentage increased the UCS value from 202 kPa to 700 kPa for the 28-day cured samples. Regarding grinding time, the results revealed that 10 minutes of grinding was the best for mechanical activation of the WM used in this study.

Keywords: soft soil stabilisation, waste materials, grinding, and unconfined compressive strength

Procedia PDF Downloads 280
2149 Use of Machine Learning Algorithms on Pediatric MR Images for Tumor Classification

Authors: I. Stathopoulos, V. Syrgiamiotis, E. Karavasilis, A. Ploussi, I. Nikas, C. Hatzigiorgi, K. Platoni, E. P. Efstathopoulos

Abstract:

Introduction: Brain and central nervous system (CNS) tumors form the second most common group of cancer in children, accounting for 30% of all childhood cancers. MRI is the key imaging technique used for the visualization and management of pediatric brain tumors. Initial characterization of tumors from MRI scans is usually performed via a radiologist's visual assessment. However, different brain tumor types do not always demonstrate clear differences in visual appearance. Using only conventional MRI to provide a definite diagnosis could potentially lead to inaccurate results, and so histopathological examination of biopsy samples is currently considered to be the gold standard for obtaining definite diagnoses. Machine learning is defined as the study of computational algorithms that can use mathematical relationships and patterns, complex or not, from empirical and scientific data to make reliable decisions. Given the above, machine learning techniques could provide effective and accurate ways to automate and speed up the analysis and diagnosis of medical images. Machine learning applications in radiology are, or could potentially be, useful in practice for medical image segmentation and registration, computer-aided detection and diagnosis systems for CT, MR or radiography images, and functional MR (fMRI) images for brain activity analysis and neurological disease diagnosis. Purpose: The objective of this study is to provide an automated tool which may assist in the imaging evaluation and classification of brain neoplasms in pediatric patients by determining the glioma type and grade and differentiating between different brain tissue types. Moreover, a future purpose is to present an alternative way of quick and accurate diagnosis in order to save time and resources in the daily medical workflow. Materials and Methods: A cohort of 80 pediatric patients with a diagnosis of posterior fossa tumor was used: 20 ependymomas, 20 astrocytomas, 20 medulloblastomas and 20 healthy children. The MR sequences used for every patient were the following: axial T1-weighted (T1), axial T2-weighted (T2), Fluid-Attenuated Inversion Recovery (FLAIR), axial diffusion-weighted images (DWI), and axial contrast-enhanced T1-weighted (T1ce). From every sequence, only a principal slice was used, manually traced by two expert radiologists. Image acquisition was carried out on a GE HDxt 1.5-T scanner. The images were preprocessed following a number of steps, including noise reduction, bias-field correction, thresholding, coregistration of all sequences (T1, T2, T1ce, FLAIR, DWI), skull stripping, and histogram matching. A large number of features for investigation were chosen, which included age, tumor shape characteristics, image intensity characteristics and texture features. After selecting the features that achieve the highest accuracy using the least number of variables, four machine learning classification algorithms were used: k-Nearest Neighbour, Support Vector Machines, C4.5 Decision Tree and Convolutional Neural Network. The machine learning schemes and the image analysis are implemented in the WEKA and MATLAB platforms, respectively. Results-Conclusions: The results and the accuracy of image classification for each type of glioma by the four different algorithms are still in progress.
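As an illustration of the classification stage described above (not the WEKA/MATLAB workflow used in the study), the sketch below cross-validates three of the named classifiers on a hypothetical feature table; a CART decision tree stands in for C4.5, and the convolutional network is omitted for brevity.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Hypothetical feature matrix: 80 patients x 20 selected features
# (age, tumour shape, intensity and texture descriptors); 4 classes:
# ependymoma, astrocytoma, medulloblastoma, healthy.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 20))
y = np.repeat([0, 1, 2, 3], 20)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
classifiers = {
    "k-NN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "Decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=cv)
    print(f"{name}: mean cross-validated accuracy = {scores.mean():.2f}")
```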

Keywords: image classification, machine learning algorithms, pediatric MRI, pediatric oncology

Procedia PDF Downloads 149
2148 Confidence Intervals for Process Capability Indices for Autocorrelated Data

Authors: Jane A. Luke

Abstract:

Persistent pressure passed on to manufacturers by escalating consumer expectations and ever-growing global competitiveness has produced a rapidly increasing interest in the development of various manufacturing strategy models, and academic and industrial circles are taking a keen interest in the field of manufacturing strategy. Many manufacturing strategies are currently centered on the traditional concepts of focused manufacturing capabilities such as quality, cost, dependability, and innovation. The analysis of process capability indices (PCIs) is usually conducted under the assumption that the process under study is in statistical control and that independent observations are generated over time. In practice, however, it is very common to come across processes which, due to their inherent nature, generate autocorrelated observations. The degree of autocorrelation affects the behavior of patterns on control charts: even small levels of autocorrelation between successive observations can have considerable effects on the statistical properties of conventional control charts, and when observations are autocorrelated the classical control charts exhibit nonrandom patterns and an apparent lack of control. Many authors have considered the effect of autocorrelation on the performance of statistical process control charts. In this paper, the effect of autocorrelation on confidence intervals for different PCIs is examined. Stationary Gaussian processes are reviewed, and the effect of autocorrelation on PCIs is described in detail. Confidence intervals for Cp and Cpk are constructed and computed for both independent and autocorrelated data, and approximate lower confidence limits for Cpk are computed assuming an AR(1) model for the data. Simulation studies and industrial examples are presented to demonstrate the results.
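
As a hedged numerical sketch of the setting above (not the paper's own computation), the Python snippet below simulates a stationary Gaussian AR(1) process, computes Cp and Cpk against assumed specification limits, and applies Bissell's approximate lower confidence limit for Cpk, a formula derived for independent observations and therefore expected to be optimistic when the data are autocorrelated.

```python
# Hedged sketch: simulate AR(1) data, compute Cp/Cpk, and apply Bissell's
# approximate lower confidence limit for Cpk (derived for independent
# observations, hence optimistic under autocorrelation). Specification
# limits and the AR(1) coefficient below are assumed values.
import numpy as np
from scipy import stats

def ar1(n, phi, sigma=1.0, seed=0):
    """Generate a stationary Gaussian AR(1) series with mean zero."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = rng.normal(scale=sigma / np.sqrt(1 - phi**2))  # stationary starting value
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(scale=sigma)
    return x

def capability(x, lsl, usl):
    """Sample Cp and Cpk from the data and the specification limits."""
    s = x.std(ddof=1)
    cp = (usl - lsl) / (6 * s)
    cpk = min(usl - x.mean(), x.mean() - lsl) / (3 * s)
    return cp, cpk

def bissell_lcl(cpk, n, alpha=0.05):
    """Bissell's approximate 100(1-alpha)% lower confidence limit for Cpk."""
    z = stats.norm.ppf(1 - alpha)
    return cpk * (1 - z * np.sqrt(1 / (9 * n * cpk**2) + 1 / (2 * (n - 1))))

n, phi = 200, 0.5                       # sample size and AR(1) coefficient (assumed)
x = ar1(n, phi)
cp, cpk = capability(x, lsl=-4.0, usl=4.0)
print(f"Cp = {cp:.3f}, Cpk = {cpk:.3f}, approx. 95% LCL = {bissell_lcl(cpk, n):.3f}")
```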

Keywords: autocorrelation, AR(1) model, Bissell’s approximation, confidence intervals, statistical process control, specification limits, stationary Gaussian processes

Procedia PDF Downloads 388
2147 Impact of Nanoparticles in Enhancement of Thermal Conductivity of Phase Change Materials in Thermal Energy Storage and Cooling of Concentrated Photovoltaics

Authors: Ismaila H. Zarma, Mahmoud Ahmed, Shinichi Ookawara, Hamdi Abo-Ali

Abstract:

Phase change materials (PCMs) are an ideal thermal storage medium. They are characterized by a high latent heat, which allows them to store large amounts of energy as the material transitions between physical states. Concentrated photovoltaic (CPV) systems are widely recognized as the most efficient form of photovoltaic (PV) technology, and the thermal energy they produce can be stored in PCMs. However, PCMs often have a low thermal conductivity, which leads to a slow transient response. This makes it difficult to quickly store and access the energy held in PCM-based systems, so there is a need to improve the transient response and increase the thermal conductivity. The present study aims to investigate and analyze the melting and solidification process of PCMs enhanced by nanoparticles and contained in an enclosure. Heat flux from a concentrated photovoltaic system is applied in order to analyze the thermal performance and the impact of the nanoparticles. The work is carried out using a two-dimensional model that accounts for the phase change phenomena based on the enthalpy method. Numerical simulations of the governing equations have been performed to investigate the heat transfer and flow characteristics and to ascertain the impact of the nanoparticle loading. The effects of the Rayleigh number and sub-cooling, as well as the unsteady evolution of the melting front and the velocity and temperature fields, were also examined. The predicted results exhibited good agreement and showed thermal enhancement due to the presence of the nanoparticles, which reduces the melting time.
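
The sketch below is a minimal one-dimensional, conduction-only illustration of the enthalpy method mentioned above (the study itself uses a two-dimensional model). The PCM properties, the Maxwell mixing rule used for the nanoparticle-enhanced conductivity, the applied heat flux, and the geometry are all illustrative assumptions, not the paper's values.

```python
# Minimal 1-D explicit enthalpy-method sketch of PCM melting under a constant
# heat flux (a stand-in for the concentrated-photovoltaic heat input). All
# property values and the Maxwell mixing rule below are illustrative assumptions.
import numpy as np

# Base PCM properties (order-of-magnitude values for a paraffin-like PCM; assumed)
rho, c, L_f, Tm = 800.0, 2000.0, 2.0e5, 300.0    # kg/m3, J/(kg K), J/kg, K
k_pcm, k_np, phi = 0.2, 40.0, 0.05               # W/(m K), W/(m K), particle volume fraction

# Maxwell model for the effective conductivity of the nanoparticle-enhanced PCM
k_eff = k_pcm * (k_np + 2 * k_pcm + 2 * phi * (k_np - k_pcm)) / (
        k_np + 2 * k_pcm - phi * (k_np - k_pcm))

nx, Lx, q_in = 50, 0.05, 5000.0                  # nodes, slab thickness (m), applied flux (W/m2)
dx = Lx / nx
dt = 0.4 * rho * c * dx**2 / k_eff               # explicit stability limit with a safety margin

# Volumetric enthalpy measured relative to the solid state at Tm; start solid at 295 K
H = np.full(nx, rho * c * (295.0 - Tm))

def temperature(H):
    """Recover temperature from volumetric enthalpy (solid / mushy / liquid)."""
    T = np.where(H < 0.0, Tm + H / (rho * c), Tm)
    return np.where(H > rho * L_f, Tm + (H - rho * L_f) / (rho * c), T)

for step in range(500):
    T = temperature(H)
    dHdt = np.empty(nx)
    dHdt[1:-1] = k_eff * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2   # interior conduction
    dHdt[0] = (q_in + k_eff * (T[1] - T[0]) / dx) / dx              # heated left face (CPV flux)
    dHdt[-1] = k_eff * (T[-2] - T[-1]) / dx**2                      # insulated right face
    H += dt * dHdt

melt_fraction = np.clip(H / (rho * L_f), 0.0, 1.0).mean()
print(f"k_eff = {k_eff:.3f} W/(m K), melt fraction after {500 * dt:.0f} s = {melt_fraction:.2f}")
```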

Keywords: thermal energy storage, phase-change material, nanoparticle, concentrated photovoltaic

Procedia PDF Downloads 203
2146 Particle Size Characteristics of Aerosol Jets Produced by a Low Powered E-Cigarette

Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida

Abstract:

Electronic cigarettes, also known as e-cigarettes, may have become a tool to improve smoking cessation due to their ability to provide nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine, and some flavoring agents). However, caution still needs to be taken when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced by the heating process. The particle size distribution (PSD) and associated velocities generated by e-cigarettes have a significant influence on aerosol deposition in different regions of the human respiratory tract. In addition, low actuation power is beneficial in aerosol-generating devices since it results in reduced emission of toxic chemicals. In the case of e-cigarettes, lower heating powers can be considered as those below 10 W, compared to the wide range of powers (0.6 to 70.0 W) studied in the literature. Given its importance for inhalation risk reduction, a deeper understanding of the particle size characteristics of e-cigarettes demands thorough investigation. However, a comprehensive study of the PSD and velocities of e-cigarettes under a standard testing condition at relatively low heating powers is still lacking. The present study aims to measure the particle number count and size distribution of undiluted aerosols of a recent fourth-generation e-cigarette at low powers, up to 6.5 W, using a real-time particle counter (time-of-flight method). The temporal and spatial evolution of the particle size and velocity distribution of the aerosol jets is also examined using the phase Doppler anemometry (PDA) technique. To the authors’ best knowledge, the application of PDA to e-cigarette aerosol measurement is rarely reported. Preliminary results on the particle number count of undiluted aerosols measured by the time-of-flight method showed that an increase of heating power from 3.5 W to 6.5 W resulted in increased asymmetry of the PSD, deviating from a log-normal distribution. This can be attributed to the rapid vaporization, condensation, and coagulation processes acting on the aerosols at higher heating power. A novel mathematical expression, combining exponential, Gaussian, and polynomial (EGP) distributions, was proposed to describe the asymmetric PSD successfully. The count median aerodynamic diameter and geometric standard deviation lay within ranges of about 0.67 μm to 0.73 μm and 1.32 to 1.43, respectively, as the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurements suggested a typical decay of the centerline streamwise mean velocity of the aerosol jet, along with a reduction in particle size. In the final submission, a thorough literature review, a detailed description of the experimental procedure, and a discussion of the results will be provided. The particle size and turbulence characteristics of the aerosol jets will be further examined by analyzing the arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity, and turbulence intensity. The present study has potential implications for PSD simulation and the validation of aerosol dosimetry models, ultimately supporting improvements in related aerosol-generating devices.
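
The snippet below is a hedged sketch of the basic size statistics quoted above: a count median diameter and a geometric standard deviation computed from a binned particle size distribution. The bin diameters and counts are synthetic placeholders, not measurements from the study, and the computation uses geometric diameters as a stand-in for the aerodynamic diameters reported in the abstract.

```python
# Hedged sketch: count median diameter (CMD) and geometric standard deviation
# (GSD) from a binned particle size distribution. Bin data are synthetic.
import numpy as np

# Bin mid-point diameters (um) and particle counts per bin (illustrative values)
d = np.array([0.3, 0.4, 0.5, 0.65, 0.8, 1.0, 1.3, 1.6, 2.0])
n = np.array([ 50, 180, 420,  900, 760, 430, 180,  60,  20])

log_d = np.log(d)
cmd = np.exp(np.average(log_d, weights=n))   # geometric mean; equals the median for log-normal PSDs
gsd = np.exp(np.sqrt(np.average((log_d - np.log(cmd))**2, weights=n)))

# The 50th percentile of the cumulative count fraction gives the median directly,
# which is useful when the PSD is asymmetric and the geometric mean is a poor proxy.
cum = np.cumsum(n) / n.sum()
cmd_percentile = np.interp(0.5, cum, d)

print(f"CMD (geometric mean) = {cmd:.2f} um, "
      f"CMD (50th percentile) = {cmd_percentile:.2f} um, GSD = {gsd:.2f}")
```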

Keywords: e-cigarette aerosol, laser Doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry

Procedia PDF Downloads 49
2145 Optimizing Data Integration and Management Strategies for Upstream Oil and Gas Operations

Authors: Deepak Singh, Rail Kuliev

Abstract:

This paper highlights the critical importance of optimizing data integration and management strategies in the upstream oil and gas industry. With its complex and dynamic nature generating vast volumes of data, efficient data integration and management are essential for informed decision-making, cost reduction, and maximizing operational performance. Challenges such as data silos, data heterogeneity, real-time data management, and data quality issues are addressed, and several strategies are proposed in response. These strategies include implementing a centralized data repository, adopting industry-wide data standards, employing master data management (MDM), utilizing real-time data integration technologies, and ensuring data quality assurance. Training and developing the workforce, “reskilling and upskilling” employees, and establishing robust data management training programs are an essential and integral part of this strategy. The article also emphasizes the significance of data governance and best practices, as well as the role of technological advancements such as big data analytics, cloud computing, the Internet of Things (IoT), artificial intelligence (AI), and machine learning (ML). To illustrate the practicality of these strategies, real-world case studies are presented, showcasing successful implementations that improve operational efficiency and decision-making. The present study concludes that, by embracing the proposed optimization strategies, leveraging technological advancements, and adhering to best practices, upstream oil and gas companies can harness the full potential of data-driven decision-making, ultimately achieving increased profitability and a competitive edge in an ever-evolving industry.

Keywords: master data management, IoT, AI and ML, cloud computing, data optimization

Procedia PDF Downloads 70
2144 Estimation of Energy Losses of Photovoltaic Systems in France Using Real Monitoring Data

Authors: Mohamed Amhal, Jose Sayritupac

Abstract:

Photovoltaic (PV) systems have emerged as one of the main modern renewable energy sources, used at a wide range of scales to produce electricity and deliver it to the electrical grid. In parallel, monitoring systems have been deployed as a key element to track energy production and to forecast total production for the coming days. The reliability of PV energy production has become a crucial point in the analysis of PV systems. A deeper understanding of each phenomenon that causes a gain or a loss of energy is needed to better design, operate, and maintain PV systems. This work analyzes the distribution of energy losses in PV systems starting from the available solar energy, going through the DC side and the AC side, to the delivery point. Most of the phenomena linked to energy losses and gains are considered and modeled, based on real-time monitoring data and the datasheets of the PV system components. The order of magnitude of each loss is compared to the current literature and to commercial software. To date, the analysis of PV system performance based on a breakdown structure of energy losses and gains is not well covered in the literature, although the concept is common in some commercial software. The novelty of the current analysis is the implementation of software tools for energy loss estimation in PV systems based on several energy loss definitions and estimation techniques. The developed tools have been validated and tested on some PV plants in France that have been operating for years. Among the major findings of the current study: first, PV plants in France show very low rates of soiling and aging; second, the distribution of the other losses is comparable to the literature; third, all reported losses are correlated with operational and environmental conditions. For future work, an extended analysis of further PV plants in France and abroad will be performed.
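
As a hedged sketch of the kind of loss breakdown described above, the snippet below applies the standard yield definitions of IEC 61724 to split the gap between the available solar resource and the delivered energy into capture (DC-side) and system (AC-side) losses. The nameplate power and the daily monitoring values are illustrative placeholders, not data from the French plants studied.

```python
# Hedged sketch of a PV loss breakdown using IEC 61724 yield definitions.
# The plant size and the one day of monitoring values below are illustrative.
P_stc_kw = 100.0              # nameplate DC power of the plant (kWp), assumed
G_stc = 1.0                   # reference irradiance (kW/m2)

# One day of example monitoring data (placeholders)
H_poa_kwh_m2 = 5.8            # plane-of-array irradiation (kWh/m2/day)
E_dc_kwh = 480.0              # DC energy at the inverter input (kWh/day)
E_ac_kwh = 455.0              # AC energy at the delivery point (kWh/day)

Yr = H_poa_kwh_m2 / G_stc     # reference yield (h/day): available solar resource
Ya = E_dc_kwh / P_stc_kw      # array yield (h/day)
Yf = E_ac_kwh / P_stc_kw      # final yield (h/day)

Lc = Yr - Ya                  # capture (DC-side) losses: soiling, temperature, mismatch, ...
Ls = Ya - Yf                  # system (AC-side) losses: inverter, wiring, transformer, ...
PR = Yf / Yr                  # performance ratio

print(f"Yr = {Yr:.2f} h, Ya = {Ya:.2f} h, Yf = {Yf:.2f} h")
print(f"Capture losses = {Lc:.2f} h, system losses = {Ls:.2f} h, PR = {PR:.1%}")
```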

Keywords: energy gains, energy losses, losses distribution, monitoring, photovoltaic, photovoltaic systems

Procedia PDF Downloads 176