Search results for: ground source heat pumps
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9029

419 The Superior Performance of Investment Bank-Affiliated Mutual Funds

Authors: Michelo Obrey

Abstract:

Traditionally, mutual funds in the U.S. have been esteemed as stand-alone entities. However, the prevalence of fund families' affiliation with financial conglomerates is eroding this striking feature. A mutual fund family's affiliation with a financial conglomerate can be an important source of either superior performance or cost to the affiliated funds' investors. On the one hand, financial conglomerate affiliation offers mutual funds access to abundant resources, better research quality, private material information, and business connections within the financial group. On the other hand, a conflict of interest is bound to arise between the financial conglomerate relationship and fund management. Using a sample of U.S. domestic equity mutual funds from 1994 to 2017, this paper examines whether fund family affiliation with an investment bank helps the affiliated mutual funds deliver superior performance through the private material information possessed by the investment bank, or whether it costs affiliated mutual fund shareholders due to the conflict of interest. Robust to alternative risk adjustments and cross-sectional regression methodologies, this paper finds that investment bank-affiliated mutual funds significantly outperform mutual funds that are not affiliated with an investment bank. Interestingly, the paper finds that the outperformance is confined to the holding return, a return measure that captures investment talent uninfluenced by transaction costs, fees, and other expenses. Further analysis shows that investment bank-affiliated mutual funds specialize in hard-to-value stocks, which are not more likely to be held by unaffiliated funds. Consistent with the information advantage hypothesis, the paper finds that affiliated funds holding covered stocks outperform affiliated funds without covered stocks, lending no support to the hypothesis that affiliated mutual funds merely attract superior stock-picking talent.
Overall, the paper's findings are consistent with the idea that investment banks maximize fee income by monopolistically exploiting their private information, strategically transferring performance to their affiliated mutual funds. This paper contributes to the extant literature on the agency problem in mutual fund families. It adds to this stream of research by showing that the agency problem is prevalent not only in fund families but also in financial organizations, such as investment banks, that have affiliated mutual fund families. The results show evidence of the exploitation of synergies, such as private material information sharing, that benefit mutual fund investors affiliated with a financial conglomerate. However, this research also has a normative dimension: allowing such behavior, insider trading and the exploitation of superior information, not only negatively affects unaffiliated fund investors but also leads to an unfair and unlevel playing field in the financial market.
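The "alternative risk adjustments" referred to above are typically factor-model regressions. Below is a minimal sketch, on synthetic data, of estimating a fund's risk-adjusted alpha with a Carhart-style four-factor model; the factor names, parameter values, and sample size are illustrative assumptions, not the paper's data or methodology.

```python
import numpy as np

# Hypothetical illustration: all numbers below are synthetic placeholders.
rng = np.random.default_rng(0)
n_months = 288  # e.g. a 1994-2017 monthly sample

# Four assumed factors (e.g. market, size, value, momentum excess returns)
factors = rng.normal(0.0, 0.04, size=(n_months, 4))
true_betas = np.array([1.0, 0.2, -0.1, 0.05])
true_alpha = 0.002  # 20 bps/month of assumed outperformance
fund_excess = true_alpha + factors @ true_betas + rng.normal(0, 0.01, n_months)

# OLS: regress fund excess returns on an intercept plus the factors;
# the intercept is the risk-adjusted alpha.
X = np.column_stack([np.ones(n_months), factors])
coef, *_ = np.linalg.lstsq(X, fund_excess, rcond=None)
alpha, betas = coef[0], coef[1:]
print(f"estimated alpha: {alpha:.4f}, market beta: {betas[0]:.2f}")
```

Comparing the estimated alphas of affiliated versus unaffiliated funds (e.g. in a cross-sectional regression) is one standard way the outperformance claim above could be tested.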

Keywords: mutual fund performance, conflicts of interest, informational advantage, investment bank

Procedia PDF Downloads 160
418 Assessment of a Rapid Detection Sensor of Faecal Pollution in Freshwater

Authors: Ciprian Briciu-Burghina, Brendan Heery, Dermot Brabazon, Fiona Regan

Abstract:

Good quality bathing water is a highly desirable natural resource which can provide major economic, social, and environmental benefits. In both Ireland and the rest of Europe, such water bodies are managed under the European Directive for the management of bathing water quality (BWD). The BWD aims mainly: (i) to improve health protection for bathers by introducing stricter standards for faecal pollution assessment (E. coli, enterococci), (ii) to establish a more pro-active approach to the assessment of possible pollution risks and the management of bathing waters, and (iii) to increase public involvement and the dissemination of information to the general public. Standard methods for E. coli and enterococci quantification rely on cultivation of the target organism, which requires long incubation periods (from 18 h to a few days). This is not ideal when immediate action is required for risk mitigation. Municipalities that oversee bathing water quality and deploy appropriate signage have to wait for laboratory results; during this time, bathers can be exposed to pollution events and health risks. Although forecasting tools exist, they are site-specific, and consequently extensive historical data are required for them to be effective. Another approach for early detection of faecal pollution is the use of marker enzymes. β-glucuronidase (GUS) is a widely accepted biomarker for E. coli detection in microbiological water quality control. GUS assays are particularly attractive as they are rapid (less than 4 h), easy to perform, and do not require specialised training. A method for on-site detection of GUS from environmental samples in less than 75 min was previously demonstrated. In this study, the capability of ColiSense as an early warning system for faecal pollution in freshwater is assessed. The system successfully detected GUS activity in all of the 45 freshwater samples tested. GUS activity was found to correlate linearly with E. coli (r²=0.53, N=45, p < 0.001) and enterococci (r²=0.66, N=45, p < 0.001). Although GUS is a marker for E. coli, a better correlation was obtained for enterococci. For this study, water samples were collected from 5 rivers in the Dublin area over 1 month, which suggests that a high diversity of pollution sources (agricultural, industrial, etc.), including both point and diffuse sources, was captured in the sample size. Such variety in the source of E. coli can account for different GUS activities per culturable cell and different ratios of viable-but-not-culturable to viable culturable bacteria. A previously developed protocol for the recovery and detection of E. coli was coupled with a miniaturised fluorometer (ColiSense), and the system was assessed for the rapid detection of faecal indicator bacteria (FIB) in freshwater samples. Further work will be carried out to evaluate the system's performance on seawater samples.
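The r² values reported above come from a linear correlation of GUS activity against culturable counts. A minimal sketch of that computation on synthetic data (the values below are invented; only the procedure mirrors the study):

```python
import numpy as np

# Synthetic stand-ins: 45 samples, as in the study; units are assumed.
rng = np.random.default_rng(1)
log_ecoli = rng.uniform(1.0, 4.0, 45)               # log10 CFU/100 mL
gus = 0.8 * log_ecoli + rng.normal(0.0, 0.5, 45)    # GUS activity (a.u.)

# Pearson correlation coefficient, squared, gives the r^2 reported
# for a simple linear fit.
r = np.corrcoef(gus, log_ecoli)[0, 1]
r_squared = r ** 2
print(f"r^2 = {r_squared:.2f}")
```

With real data, a p-value for the fit could be obtained from, e.g., scipy.stats.linregress, which returns slope, intercept, r, and p together.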

Keywords: faecal pollution, β-glucuronidase (GUS), bathing water, E. coli

Procedia PDF Downloads 253
417 Food Composition Tables Used as an Instrument to Estimate the Nutrient Ingest in Ecuador

Authors: Ortiz M. Rocío, Rocha G. Karina, Domenech A. Gloria

Abstract:

There are several tools to assess the nutritional status of a population. A main instrument commonly used to build those tools is the food composition table (FCT). Despite the importance of FCTs, there are many error sources and variability factors that can arise in building those tables and can lead to an under- or overestimation of the nutrient intake of a population. This work identified different food composition tables used as instruments to estimate nutrient intake in Ecuador. The data for choosing FCTs were collected through key informants (self-completed questionnaires), supplemented with institutional web research. A questionnaire with general variables (origin, year of edition, etc.) and methodological variables (method of elaboration, information in the table, etc.) was applied to the identified FCTs. Those variables were defined based on an extensive literature review. A descriptive analysis of content was performed. Ten printed tables and three databases were reported, all of which were indistinctly treated as food composition tables. We managed to obtain information from 69% of the references; several informants referred to printed documents that were not accessible, and internet searching was not successful. Of the 9 final tables, n=8 are from Latin America, and n=5 of these were constructed by the indirect method (collection of already published data), having as a main source of information a database from the United States Department of Agriculture (USDA). One FCT was constructed using the direct method (bromatological analysis) and has its origin in Ecuador. All of the tables made a clear distinction between a food and its method of cooking, 88% of the FCTs expressed nutrient values per 100 g of edible portion, 77% gave precise additional information about the use of the table, and 55% presented all the macro- and micronutrients in a detailed way. The most complete FCTs were: INCAP (Central America) and Composition of Foods (Mexico).
The most frequently referenced table was the Ecuadorian food composition table of 1965 (70%). The indirect method was used for most tables within this study. However, this method has the disadvantage that it generates less reliable food composition tables, because foods show variations in composition; therefore, a database cannot accurately predict the composition of any isolated sample of a food product. In conclusion, analyzing the pros and cons, and despite its being an FCT elaborated using the indirect method, it is considered appropriate to work with the FCT of INCAP Central America, given its proximity to our country and a food items list that is very similar to ours. It is also imperative to have as a reference the table of composition for Ecuadorian food, which, although not updated, was constructed using the direct method with Ecuadorian foods. Hence, both tables will be used to elaborate a questionnaire with the purpose of assessing the food consumption of the Ecuadorian population. In case of disparate values, we will proceed by taking just the INCAP values, because it is an updated table.
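The "values per 100 g of edible portion" convention mentioned above is what makes an FCT usable for intake estimation: table values are scaled by the amount actually consumed. A minimal sketch, with illustrative entries that are not actual INCAP or Ecuadorian FCT values:

```python
# Toy FCT: nutrient values per 100 g of edible portion (invented numbers).
fct = {
    "rice, cooked":  {"energy_kcal": 130, "protein_g": 2.7},
    "plantain, raw": {"energy_kcal": 122, "protein_g": 1.3},
}

def intake(food: str, grams_consumed: float) -> dict:
    """Scale per-100 g FCT values to the portion actually consumed."""
    per_100g = fct[food]
    return {k: v * grams_consumed / 100.0 for k, v in per_100g.items()}

meal = intake("rice, cooked", 250)  # a 250 g portion
print(meal)  # {'energy_kcal': 325.0, 'protein_g': 6.75}
```

Summing such scaled values over all foods reported in a consumption questionnaire yields the estimated daily intake, which is why errors in the underlying table propagate directly into the population estimate.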

Keywords: Ecuadorian food composition tables, FCT elaborated by direct method, ingest of nutrients of Ecuadorians, Latin America food composition tables

Procedia PDF Downloads 403
416 Environmental Threats and Great Barrier Reef: A Vulnerability Assessment of World’s Best Tropical Marine Ecosystems

Authors: Ravi Kant Anand, Nikkey Keshri

Abstract:

The Great Barrier Reef of Australia is known for its beautiful landscapes and seascapes of ecological importance. The site was selected as a World Heritage site in 1981 and popularized internationally for tourism, recreational activities, and fishing. However, major environmental hazards such as climate change, pollution, overfishing, and shipping are degrading this marine ecosystem. Climate change is hitting the Great Barrier Reef directly through rising sea levels, ocean acidification, increasing temperatures, uneven precipitation, changes in El Niño, and increasing numbers of cyclones and storms. Apart from that, pollution is the second biggest factor destroying the coral reef ecosystem, including the overuse of pesticides and chemicals, eutrophication, pollution from mining, sediment runoff, loss of coastal wetland, and oil spills. Coral bleaching is the biggest problem caused by these environmental threats. Acidification of ocean water reduces the formation of the calcium carbonate skeleton. The floral ecosystem of the ocean (including sea grasses and mangroves) is the key source of food for fishes and other faunal organisms, but powerful waves, extreme temperatures, destructive storms, and river run-off threaten it. If one natural system is under threat, the whole marine food web is affected, from algae to whales. Poisoning of marine water by different polluting agents has been affecting the production of corals and the breeding of fishes, weakening marine health, and increasing the deaths of fishes and corals. Given the World Heritage status of the site, the tourism sector is directly affected, causing increased unemployment; the fishing sector is also affected. Fluctuation in the temperature of ocean water affects the production of corals, because corals need an undisturbed location, proper sunlight, and temperatures up to 21 degrees centigrade. But storms, El Niño, and rises in temperature and sea level continue to reduce coral production. If we do not restrict the environmental problems of the Great Barrier Reef, then this renowned ecological beauty, with its coral reefs, pelagic environments, algal meadows, coasts and estuaries, mangrove forests and sea grasses, fish species, and coral gardens, one of the best tourist spots, will be lost in the coming years. This research will focus on the different environmental threats, their socio-economic impacts, and different conservation measures.

Keywords: climate change, overfishing, acidification, eutrophication

Procedia PDF Downloads 353
415 Parallelization of Random Accessible Progressive Streaming of Compressed 3D Models over Web

Authors: Aayushi Somani, Siba P. Samal

Abstract:

Three-dimensional (3D) meshes are data structures which store geometric information about an object or scene, generally in the form of vertices and edges. Current laser scanning and other geometric data acquisition technologies produce high-resolution sampling, which leads to high-resolution meshes. While high-resolution meshes give better quality rendering and are hence often used, the processing as well as the storage of 3D meshes is currently resource-intensive. At the same time, web applications for data processing have become ubiquitous owing to their accessibility. For 3D meshes, the advancement of 3D web technologies such as WebGL and WebVR has enabled high-fidelity rendering of huge meshes. However, there exists a gap in the ability to stream huge meshes to native client and browser applications due to high network latency, and there is an inherent delay in loading WebGL pages containing large and complex models. The focus of our work is to identify the challenges faced when such meshes are streamed into and processed on hand-held devices, owing to their limited resources. One solution conventionally used in the graphics community to alleviate resource limitations is mesh compression. Our approach is a two-step approach to random-accessible progressive compression and its parallel implementation. The first step partitions the original mesh into multiple sub-meshes; we then invoke data parallelism on these sub-meshes for compression. Subsequent threaded decompression logic is implemented inside the web browser engine by modifying the WebGL implementation in the open-source Chromium engine. This concept can be used to completely revolutionize the way e-commerce and virtual reality technology work on consumer electronic devices: objects can be compressed on the server, transmitted over the network, and progressively decompressed and rendered on the client device.
The multiple views currently used on e-commerce sites for viewing the same product from different angles can be replaced by a single progressive model for a smoother user experience. The approach can also be used in WebVR for widely used activities such as virtual reality shopping, watching movies, and playing games. Our experiments and comparisons with existing techniques show encouraging results in terms of latency (the compressed size is ~10-15% of the original mesh), processing time (a 20-22% improvement over the serial implementation), and the quality of the user experience in the web browser.
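One standard building block of such mesh compression pipelines is quantizing vertex coordinates to a fixed bit budget before entropy coding. The sketch below is a generic illustration of that idea, not the paper's actual codec; the bit width and data are assumptions.

```python
import numpy as np

def quantize(vertices: np.ndarray, bits: int = 12):
    """Map float vertices onto an integer grid of 2**bits cells per axis."""
    vmin = vertices.min(axis=0)
    vmax = vertices.max(axis=0)
    # Guard against degenerate (flat) axes to avoid division by zero.
    scale = (2**bits - 1) / np.where(vmax > vmin, vmax - vmin, 1.0)
    q = np.round((vertices - vmin) * scale).astype(np.uint16)
    return q, vmin, scale

def dequantize(q: np.ndarray, vmin: np.ndarray, scale: np.ndarray):
    """Recover approximate float coordinates from the integer grid."""
    return q.astype(np.float64) / scale + vmin

verts = np.random.default_rng(2).uniform(-1.0, 1.0, size=(1000, 3))
q, vmin, scale = quantize(verts)
err = np.abs(dequantize(q, vmin, scale) - verts).max()
print(f"max reconstruction error: {err:.2e}")  # bounded by half a grid cell
```

In a progressive scheme, coarser quantization levels are streamed first and refined as more bits arrive, which is what allows a partial download to render a usable low-detail model.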

Keywords: 3D compression, 3D mesh, 3D web, chromium, client-server architecture, e-commerce, level of details, parallelization, progressive compression, WebGL, WebVR

Procedia PDF Downloads 143
414 Strategies of Translation: Unlocking the Secret of 'Locksley Hall'

Authors: Raja Lahiani

Abstract:

'Locksley Hall' is a poem that Lord Alfred Tennyson (1809-1892) published in 1842. It is believed to be his first attempt to confront, as a poet, some of the most painful of his experiences, as it is a study of his rising out of sickness into health, conquering his selfish sorrow by faith and hope. So far, in Victorian scholarship as in modern criticism, 'Locksley Hall' has been studied and approached as a canonical Victorian English poem. The aim of this project is to show that strategies of translation were used in this poem in such a way as to guarantee its assimilation into the English canon and hence efface, to a large extent, its Arabic roots. In its relationship with its source text, 'Locksley Hall' is at once mimetic and imitative. In the terminology of translation studies, ‘imitation’ means almost the exact opposite of what it means in ordinary English: by adopting an imitative procedure, a translator does something quite different from the original author, wandering far and freely from the words and sense of the original text. An imitation is thus aimed at an audience which wants the work of the particular translator rather than the work of the original poet. Hallam Tennyson, the poet’s biographer, asserts that 'Locksley Hall' is a simple invention of place, incidents, and people, though he notes that he remembers the poet claiming that Sir William Jones’ prose translation of the Mu‘allaqat (pre-Islamic poems) gave him the idea of the poem. A comparative study would show that 'Locksley Hall' mirrors a great deal of Tennyson’s biography and hence is not a simple invention of details, as asserted by his biographer. It would be challenging to prove that 'Locksley Hall' shares so many details with the Mu‘allaqat, as declared by Tennyson himself, that it needs to be studied as an imitation of the Mu‘allaqat of Imru’ al-Qays and ‘Antara in addition to being a poem in its own right.
Thus, the main aim of this work is to unveil the imitative and mimetic strategies used by Tennyson in his composition of 'Locksley Hall.' It is equally important that this project research the acculturating, assimilative tools used by the poet to root his poem in its Victorian English literary, cultural, and spatiotemporal settings. This work adopts a comparative methodology, with comparison carried out at different levels. The poem will be contextualized in its Victorian English literary framework. Alien details related to structure, socio-spatial setting, imagery, and sound effects shall be compared to Arabic poems from the Mu‘allaqat collection. This will determine whether the poem is a translation, an adaptation, an imitation, or a genuine work. The ultimate objective of the project is to unveil in this canonical poem a new dimension that has long been either marginalized or ignored. By proving that 'Locksley Hall' is an imitation of classical Arabic poetry, the project aspires to consolidate its literary value and open up new gates for accessing it.

Keywords: comparative literature, imitation, Locksley Hall, Lord Alfred Tennyson, translation, Victorian poetry

Procedia PDF Downloads 178
413 Numerical Solution of Momentum Equations Using Finite Difference Method for Newtonian Flows in Two-Dimensional Cartesian Coordinate System

Authors: Ali Ateş, Ansar B. Mwimbo, Ali H. Abdulkarim

Abstract:

The general transport equation has a wide range of applications in fluid mechanics and heat transfer problems. When the variable φ, which represents a flow property, is taken to be a fluid velocity component, the general transport equation turns into the momentum equations, better known as the Navier-Stokes equations. For these non-linear differential equations, instead of seeking analytic solutions, numerical solution is the more frequently used procedure, and the finite difference method is a commonly used numerical solution method. In these equations, using velocity and pressure gradients instead of stress tensors decreases the number of unknowns, and by adding the continuity equation to the system, the number of equations matches the number of unknowns. In this situation, velocity and pressure components emerge as the two important parameters, and in the solution of the differential equation system, velocities and pressures must be solved together. However, when pressure and velocity values are solved jointly at the same nodal points of the grid, some problems arise; to overcome this, the staggered grid system is a preferred solution method. Various algorithms have been developed for computerized solutions on the staggered grid system, of which the two most commonly used are the SIMPLE and SIMPLER algorithms. In this study, the Navier-Stokes equations were solved numerically for a Newtonian, incompressible, laminar flow, with mass and gravitational forces neglected, in a hydrodynamically fully developed region and in a two-dimensional Cartesian coordinate system. The finite difference method was chosen as the solution method. This is a parametric study in which varying values of the velocity components, pressure, and Reynolds number were used. The differential equations were discretized using the central difference and hybrid schemes, and the discretized equation system was solved by the Gauss-Seidel iteration method.
SIMPLE and SIMPLER were used as solution algorithms. The obtained results were compared for the central difference and hybrid discretization methods, and the SIMPLE and SIMPLER solution algorithms were compared with each other. As a result, it was observed that the hybrid discretization method gave better results over a larger area. Furthermore, it can be said that, despite some disadvantages, the SIMPLER algorithm is more practical and gives results in a shorter time. For this study, a code was developed in the Delphi programming language. The values obtained by the computer program were converted into graphs and discussed; during plotting, the quality of the graphs was improved by adding intermediate values to the obtained results using the Lagrange interpolation formula. For the solution of the system, the required numbers of grid points and nodes were estimated. At the same time, to show that the obtained results are sufficiently satisfactory, a grid-independence (GCI) analysis was performed for coarse, medium, and fine grid solution domains. It was observed that when the graphs and program outputs were compared with similar studies, highly satisfactory results were achieved.
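The Gauss-Seidel iteration named above updates each nodal value in place using its neighbours' latest values. As a hedged illustration of that kernel (applied here to a discretized Laplace equation on a simplified square grid, not the study's actual momentum equations or boundary conditions):

```python
import numpy as np

def gauss_seidel(phi: np.ndarray, tol: float = 1e-6, max_iter: int = 10000):
    """Gauss-Seidel sweeps over interior nodes of a 2D grid until the
    largest nodal change falls below tol."""
    for it in range(max_iter):
        max_change = 0.0
        for i in range(1, phi.shape[0] - 1):
            for j in range(1, phi.shape[1] - 1):
                # Five-point stencil average of the four neighbours.
                new = 0.25 * (phi[i+1, j] + phi[i-1, j]
                              + phi[i, j+1] + phi[i, j-1])
                max_change = max(max_change, abs(new - phi[i, j]))
                phi[i, j] = new  # in-place update: Gauss-Seidel, not Jacobi
        if max_change < tol:
            return phi, it
    return phi, max_iter

grid = np.zeros((20, 20))
grid[0, :] = 1.0  # Dirichlet boundary: one wall held at phi = 1
solution, iters = gauss_seidel(grid)
print(f"converged in {iters} sweeps, near-centre value {solution[10, 10]:.3f}")
```

In the staggered-grid SIMPLE/SIMPLER context, sweeps like this are applied to the discretized momentum and pressure-correction equations in turn, inside the outer pressure-velocity coupling loop.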

Keywords: finite difference method, GCI analysis, numerical solution of the Navier-Stokes equations, SIMPLE and SIMPLER algorithms

Procedia PDF Downloads 364
412 Numerical Simulation of the Production of Ceramic Pigments Using Microwave Radiation: An Energy Efficiency Study Towards the Decarbonization of the Pigment Sector

Authors: Pedro A. V. Ramos, Duarte M. S. Albuquerque, José C. F. Pereira

Abstract:

Global warming mitigation is one of the main challenges of this century, requiring the net balance of greenhouse gas (GHG) emissions to be null or negative by 2050. Industry electrification is one of the main paths to achieving carbon neutrality within the goals of the Paris Agreement. Microwave heating is becoming a popular industrial heating mechanism due to the absence of direct GHG emissions, as well as its rapid, volumetric, and efficient heating. In the present study, a mathematical model is used to simulate the production, using microwave heating, of two ceramic pigments at high temperatures (above 1200 °C). The two pigments studied were the yellow (Pr, Zr)SiO₂ and the brown (Ti, Sb, Cr)O₂. The chemical conversion of reactants into products was included in the model by using the kinetic triplet obtained with the model-fitting method and experimental data present in the literature. The coupling between the electromagnetic, thermal, and chemical interfaces was also included. The simulations were computed in COMSOL Multiphysics. The geometry includes a moving plunger to allow for cavity impedance matching and thus maximize the electromagnetic efficiency; to accomplish this goal, a MATLAB controller was developed to automatically search for the position of the moving plunger that guarantees maximum efficiency. The power is automatically and permanently adjusted during the transient simulation to impose a stationary regime and total conversion, the two requisites of every converged solution. Both 2D and 3D geometries were used, and a parametric study of the axial bed velocity and the heat transfer coefficient at the boundaries was performed. Moreover, a verification and validation study was carried out by comparing the conversion profiles obtained numerically with the experimental data available in the literature; the numerical uncertainty was also estimated to attest to the results' reliability.
The results show that the model-fitting method employed in this work is a suitable tool to predict the chemical conversion of reactants into the pigment, showing excellent agreement between the numerical results and the experimental data. Moreover, it was demonstrated that higher velocities lead to higher thermal efficiencies and thus lower energy consumption during the process. This work concludes that the electromagnetic heating of materials having a high loss tangent and low thermal conductivity, like ceramic materials, may be a challenge due to the presence of hot spots, which may jeopardize the product quality or even the experimental apparatus. The MATLAB controller increased the electromagnetic efficiency by 25%, and a global efficiency of 54% was obtained for the titanate brown pigment. This work shows that electromagnetic heating will be a key technology in the decarbonization of the ceramic sector, as reductions of up to 98% in specific GHG emissions were obtained compared with the conventional process. Furthermore, numerical simulation appears to be a suitable technique for the design and optimization of microwave applicators, showing high agreement with experimental data.
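The "kinetic triplet" mentioned above consists of a pre-exponential factor A, an activation energy Ea, and a reaction model f(α); once fitted, it lets the simulation advance the conversion α alongside the thermal solution. The sketch below integrates dα/dt = A·exp(-Ea/RT)·f(α) at a fixed temperature; all parameter values and the first-order model are placeholder assumptions, not the pigments' actual kinetics.

```python
import math

# Assumed kinetic triplet (placeholders, not fitted values).
A = 1.0e8      # pre-exponential factor, 1/s
Ea = 250e3     # activation energy, J/mol
R = 8.314      # gas constant, J/(mol K)
T = 1500.0     # isothermal hold, K (above 1200 C)

def f(alpha):
    """Assumed first-order reaction model: f(alpha) = 1 - alpha."""
    return 1.0 - alpha

k = A * math.exp(-Ea / (R * T))  # Arrhenius rate constant at T

# Explicit Euler integration of d(alpha)/dt = k * f(alpha)
alpha, dt = 0.0, 0.01
while alpha < 0.99:
    alpha += k * f(alpha) * dt

print(f"rate constant k = {k:.3e} 1/s; ~full conversion reached")
```

In the coupled simulation, T is not fixed but comes from the electromagnetic-thermal solution at each node, so the local rate constant, and hence the conversion profile along the bed, varies with the computed temperature field.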

Keywords: automatic impedance matching, ceramic pigments, efficiency maximization, high-temperature microwave heating, input power control, numerical simulation

Procedia PDF Downloads 116
411 Risk and Coping: Understanding Community Responses to Calls for Disaster Evacuation in Central Philippines

Authors: Soledad Natalia M. Dalisay, Mylene De Guzman

Abstract:

In archipelagic countries like the Philippines, many communities thrive along coastal areas. The sea is the community members' main source of livelihood and the site of many cultural activities; for these communities, the sea is their life and livelihood. Nevertheless, the sea also poses a hazard during the rainy season, when typhoons frequent their communities. Coastal communities often encounter threats from the storm surges and flooding that are common during typhoons, and during such periods disaster evacuation programs are implemented. However, in many instances, evacuation has been the bane of local government officials implementing such programs, as resistance from community members is often encountered. Such resistance is often attributed by program implementers to people being hard-headed and ignorant of the potential impacts of living in hazard-prone areas. This paper argues that it is not for these reasons that people refuse to evacuate. Drawing on data collected from fieldwork done in three sites in Central Philippines affected by super typhoon Haiyan, this study aimed to provide a contextualized understanding of people's refusal to heed disaster evacuation warnings. The study utilized a multi-sited ethnography approach, with in-depth episodic interviews, focus group discussions, participatory risk mapping, and key informant interviews used to gather data on people's experiences and insights, specifically on evacuation during typhoon Haiyan. This study showed that, in refusing to leave their homes for pre-emptive evacuation, people are protecting priorities and considerations vital to their social lives. It is not that they are unaware of the risks when they face the hazard; rather, they have faith in the local knowledge and strategies they have developed, since the time of their ancestors, from living and engaging with hazards in their areas for as long as they can remember.
The study also revealed that risk in encounters with hazards is gendered. Furthermore, previous engagement with local government officials and the manner in which pre-emptive evacuation programs were implemented had cast doubt on the value of such programs in saving lives. Life in the designated evacuation areas can be as dangerous as, if not more dangerous than, living in their coastal homes; there seems to be an impression that, in the government's evacuation program, people are being moved from hazard zones to death zones. Thus, this paper ends with several recommendations that may contribute to building more responsive evacuation programs that aim to build people's resilience while taking into consideration the local moral world of communities in identified hazard zones.

Keywords: coastal communities, disaster evacuation, disaster risk perception, social and cultural responses to hazards

Procedia PDF Downloads 317
410 Agricultural Education and Research in India: Challenges and Way Forward

Authors: Kiran Kumar Gellaboina, Padmaja Kaja

Abstract:

Agricultural education and research in India need a transformation to serve the needs of farmers and of the nation. The fact that agriculture and allied activities are the main source of livelihood for more than 70% of the rural population of India reinforces their importance in the administrative and policy arena. According to India's 2011 Census, agriculture provides employment to approximately 56.6% of the labour force. India has achieved significant growth in agriculture, milk, fish, oilseeds, and fruits and vegetables owing to the green, white, blue, and yellow revolutions, which have brought prosperity to farmers. Many factors are responsible for these achievements, viz. conducive government policies, the receptivity of farmers, and the establishment of higher agricultural education institutions. A new breed of skilled human resources was instrumental in generating new technologies and in their assessment, refinement, and, finally, dissemination to the farming community through extension methods. In order to sustain, diversify, and realize the potential of the agriculture sector, it is necessary to develop skilled human resources. Agricultural human resource development is a continuous process undertaken by agricultural universities. The Department of Agricultural Research and Education (DARE) coordinates and promotes agricultural research and education in India. Indian agricultural universities were established on the 'land grant' pattern of the USA, which helped incorporate a number of diverse subjects into the courses and provided hands-on practical exposure to students. The State Agricultural Universities (SAUs) were established through the legislative acts of the respective states, with major financial support from them, leading to administrative and policy controls. It has been observed that the pace and quality of technology generation and human resource development in many of the SAUs have gone down.
The reasons for this slackening are inadequate state funding, reduced faculty strength, inadequate faculty development programmes, and a lack of modern infrastructure for education and research. The establishment of new state agricultural universities and new faculties/colleges without providing the necessary financial and faculty support has aggravated the problem. The present work highlights some of the key issues affecting agricultural education and research in India and the impact they have on farm productivity and sustainability. Secondary data pertaining to budgetary spending on agricultural education and research will be analyzed, and the paper will study the trends in public spending on agricultural education and research alongside the per capita income of farmers in India. This paper suggests that agricultural education and research have a key role in equipping human resources for enhanced agricultural productivity and the sustainable use of natural resources. Further, a total re-orientation of agricultural education, with emphasis on other agriculture-related social sciences, is needed for effective agricultural policy research.

Keywords: agriculture, challenges, education, research

Procedia PDF Downloads 203
409 Petrology of the Post-Collisional Dolerites, Basalts from the Javakheti Highland, South Georgia

Authors: Bezhan Tutberidze

Abstract:

The Neogene-Quaternary volcanic rocks of the Javakheti Highland are products of post-collisional continental magmatism and are related to the divergent and convergent margins of the Eurasian and Afro-Arabian lithospheric plates. The studied area constitutes an integral part of the volcanic province of central South Georgia. Three cycles of volcanic activity are identified here: 1. Late Miocene-Early Pliocene, 2. Late Pliocene-Early/Middle Pleistocene and 3. Late Pleistocene. Intense basic dolerite magmatic activity occurred within the Late Pliocene and lasted until at least the Middle Pleistocene. The age of the volcanogenic and volcanogenic-sedimentary formation has been dated by geomorphological, paleomagnetic, paleontological and geochronological methods at 1.7-1.9 Ma. The volcanic area of the Javakheti Highland contains multiple dolerite plateaus: Akhalkalaki, Gomarethi, Dmanisi, and Tsalka. Petrographic observation of these doleritic rocks reveals a fairly constant mineralogical composition: olivine (Fo₈₇.₆₋₈₂.₇) and plagioclase (Ab₂₂.₈An₇₅.₉Or₁.₃; Ab₄₅.₀₋₃₂.₃An₅₂.₉₋₆₂.₃Or₂.₁₋₅.₄). The pyroxene is an augite and may exhibit visible zoning (Wo₃₉.₇₋₄₃.₁En₄₃.₅₋₄₅.₂Fs₁₆.₈₋₁₁.₇). Opaque minerals (magnetite, titanomagnetite) are abundant as inclusions within olivine and pyroxene crystals. The dolerites exhibit intergranular, holocrystalline, and ophitic to subophitic granular textures. The dolerites are commonly vesicular; vesicles range in shape from spherical to elongated, range in size from 0.5 mm to 1.5-2 cm, and make up about 20-50% of the volume. The dolerites have been subjected to considerable alteration. The secondary minerals in the geothermal field are zeolite, calcite, chlorite, aragonite, clay-like minerals (dominated by smectites) and iddingsite-like minerals; rare quartz and pumpellyite are present. The vesicles are filled by these secondary minerals. 
Chemically, the dolerites are calc-alkaline, transitional to sub-alkaline, with a predominance of Na₂O over K₂O. Chemical analyses indicate that the dolerites of all plateaus of the Javakheti Highland have similar geochemical compositions, signifying that they were formed from the same magmatic source by crystallization of a weakly differentiated olivine basalt magma (⁸⁷Sr/⁸⁶Sr = 0.703920-0.704195). There is one, less convincing, argument according to which the dolerites/basalts of the Javakheti Highland are considered the product of mantle plume activity; unfortunately, no reliable evidence exists to prove this. The petrochemical peculiarities and eruption style of the dolerites of the Javakheti Plateau argue against a plume origin. Nevertheless, it is not excluded that a plume influenced the formation of the primary basaltic magma that produced the dolerites.

Keywords: calc-alkalic, dolerite, Georgia, Javakheti Highland

Procedia PDF Downloads 240
408 In Vitro Fermentation of β-Glucan-Rich Pleurotus eryngii Mushroom: Impact on Faecal Bacterial Populations and the Intestinal Barrier in Autistic Children

Authors: Georgia Saxami, Evangelia N. Kerezoudi, Evdokia K. Mitsou, Marigoula Vlassopoulou, Georgios Zervakis, Adamantini Kyriacou

Abstract:

Autism Spectrum Disorder (ASD) is a complex group of developmental disorders of the brain, characterized by social and communication dysfunctions and stereotyped, repetitive behaviors. The potential interaction between the gut microbiota (GM) and autism has not been fully elucidated. Children with autism often suffer from gastrointestinal dysfunction, while alterations or dysbiosis of the GM have also been observed. Treatment with dietary components has been postulated to regulate the GM and improve gastrointestinal symptoms, but there is a lack of evidence for such approaches in autism, especially for prebiotics. This study assessed the effects of the Pleurotus eryngii mushroom (a candidate prebiotic) and inulin (a known prebiotic compound) on gut microbial composition, using faecal samples from autistic children in an in vitro batch culture fermentation system. Selected members of the GM were enumerated at baseline (0 h) and after 24 h of fermentation by quantitative PCR. After 24 h of fermentation, inulin and the P. eryngii mushroom induced a significant increase in total bacteria and Faecalibacterium prausnitzii compared to the negative control (gut microbiota of each autistic donor with no carbohydrate source), whereas both treatments induced a significant increase in levels of total bacteria, Bifidobacterium spp. and Prevotella spp. compared to baseline (t = 0 h) (p for all < 0.05). Furthermore, this study evaluated the impact of fermentation supernatants (FSs), derived from the P. eryngii mushroom or inulin, on the expression levels of tight junction genes (zonulin-1, occludin and claudin-1) in Caco-2 cells stimulated by bacterial lipopolysaccharides (LPS). Pre-incubation of Caco-2 cells with FS from the P. eryngii mushroom led to a significant increase in the expression levels of the zonulin-1, occludin and claudin-1 genes compared to the untreated cells, the cells that were subjected to LPS, and the cells that were challenged with FS from the negative control (p for all < 0.05). 
In addition, incubation with FS from P. eryngii mushroom led to the highest mean expression values for zonulin-1 and claudin-1 genes, which differed significantly compared to inulin (p for all <0.05). Overall, this research highlighted the beneficial in vitro effects of P. eryngii mushroom on the composition of GM of autistic children after 24 h of fermentation. Also, our data highlighted the potential preventive effect of P. eryngii FSs against dysregulation of the intestinal barrier, through upregulation of tight junctions’ genes associated with the integrity and function of the intestinal barrier. This research has been financed by "Supporting Researchers with Emphasis on Young Researchers - Round B", Operational Program "Human Resource Development, Education and Lifelong Learning."

Keywords: gut microbiota, intestinal barrier, autism spectrum disorders, Pleurotus eryngii

Procedia PDF Downloads 141
407 Investigation of Municipal Solid Waste Incineration Filter Cake as Minor Additional Constituent in Cement Production

Authors: Veronica Caprai, Katrin Schollbach, Miruna V. A. Florea, H. J. H. Brouwers

Abstract:

Nowadays, MSWI (municipal solid waste incineration) bottom ash (BA) produced by waste-to-energy (WtE) plants represents the majority of the solid residues derived from MSW incineration. Once processed, the BA is often landfilled, resulting in possible environmental problems, additional costs for the plant, and increasing occupation of public land. In order to limit this phenomenon, European countries such as the Netherlands support the utilization of MSWI BA in the construction field by providing standards for the leaching of contaminants into the environment (Dutch Soil Quality Decree). Commonly, BA has a particle size below 32 mm and a heterogeneous chemical composition, depending on its source. Washing the coarser BA yields an MSWI sludge characterized by a high content of heavy metals, chlorides, and sulfates as well as a reduced particle size (below 0.25 mm). To lower its environmental impact, the MSWI sludge is filtered or centrifuged to remove easily soluble contaminants such as chlorides, yielding a filter cake (FC). However, the content of heavy metals is not easily reduced, compromising its possible application. To lower the leaching of those contaminants, the use of MSWI residues in combination with cement is a valuable option, owing to the known retention of those ions in the hydrated cement matrix. Among the applications, the European standard for common cement EN 197-1:1992 allows the incorporation into cement of up to 5% by mass of a minor additional constituent (MAC), such as fly ash or blast furnace slag, but also an unspecified filler. To the best of the authors' knowledge, although FC is widely available, has an appropriate particle size, and has a chemical composition similar to that of cement, it has not been investigated as a possible MAC in cement production. Therefore, this paper addresses the suitability of MSWI FC as a MAC for CEM I 52.5 R, within a maximum replacement of 5% by mass. 
After physical and chemical characterization of the raw materials, the crystal phases of the pastes are determined by XRD for three replacement levels (1%, 3%, and 5%) at different ages. Thereafter, the impact of FC on the mechanical and environmental performance of the cement is assessed according to EN 196-1 and the Dutch Soil Quality Decree, respectively. Investigation of the reaction products evidences the formation of layered double hydroxides (LDH) in the early stage of the reaction. Mechanically, the presence of FC reduces the 28-day compressive strength by 8% at a replacement of 5 wt.%, compared with pure CEM I 52.5 R without any MAC. In contrast, the flexural strength is not affected by the presence of FC. Environmentally, the Dutch limits on the leaching of contaminants from unshaped (granular) material are satisfied. Based on the collected results, FC represents a suitable candidate as a MAC in cement production.

Keywords: environmental impact evaluation, minor additional constituent, MSWI residues, X-ray diffraction crystallography

Procedia PDF Downloads 136
406 Comparing Remote Sensing and In Situ Analyses of Test Wheat Plants as a Means for Optimizing Data Collection in Precision Agriculture

Authors: Endalkachew Abebe Kebede, Bojin Bojinov, Andon Vasilev Andonov, Orhan Dengiz

Abstract:

Remote sensing has potential applications in assessing and monitoring plants' biophysical properties using the spectral responses of plants and soils within the electromagnetic spectrum. However, only a few reports compare the performance of different remote sensing sensors against in-situ field spectral measurements. The current study assessed the potential of open-data-source satellite images (Sentinel-2 and Landsat 9) for estimating the biophysical properties of a wheat crop on a study farm in the village of Ovcha Mogila. Landsat 9 (30 m resolution) and Sentinel-2 (10 m resolution) satellite images with less than 10% cloud cover were extracted from the open data sources for the period December 2021 to April 2022. An unmanned aerial vehicle (UAV) was used to capture the spectral response of plant leaves. In addition, a SpectraVue 710s leaf spectrometer was used to measure the spectral response of the crop in April at five different locations within the same field. The ten most common vegetation indices were selected and calculated based on the reflectance wavelength ranges of the remote sensing tools used. Soil samples were collected at eight different locations within the farm plot, and the physicochemical properties of the soil (pH, texture, N, P₂O₅, and K₂O) were analyzed in the laboratory. The finer-resolution images from the UAV and the leaf spectrometer were used to validate the satellite images. The performance of the different sensors was compared based on the measured leaf spectral responses and the extracted vegetation indices at the five sampling points. Scatter plots with the coefficient of determination (R²) and root mean square error (RMSE), together with a correlation (r) matrix prepared using the Python corr and heatmap functions, were used to compare the performance of the Sentinel-2 and Landsat 9 vegetation indices against the drone and the SpectraVue 710s spectrometer. 
The soil analysis revealed that the study farm plot is slightly alkaline (pH 8.4 to 8.52) and that its soil texture is dominantly clay and clay loam. The vegetation indices (VIs) increased linearly with the growth of the plants. Both the scatter plots and the correlation matrix showed that the Sentinel-2 vegetation indices correlate better with those of the Buteo drone than do the Landsat 9 indices, while the Landsat 9 vegetation indices align somewhat better with the leaf spectrometer. Overall, Sentinel-2 performed better than Landsat 9. Further study with sufficient field spectral sampling and repeated UAV imaging is required to improve on the current results.
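The sensor comparison described above can be sketched in Python. The reflectance values below are invented placeholders, not measurements from the study; only the workflow mirrors the abstract: compute a vegetation index per sensor at the five sampling points, then build the correlation matrix and the RMSE against the UAV reference.

```python
import numpy as np
import pandas as pd

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

# Hypothetical reflectances at the five sampling points for each sensor.
sentinel2 = {"nir": [0.42, 0.45, 0.40, 0.48, 0.44], "red": [0.08, 0.07, 0.09, 0.06, 0.08]}
landsat9  = {"nir": [0.40, 0.44, 0.38, 0.47, 0.43], "red": [0.09, 0.08, 0.10, 0.07, 0.08]}
uav       = {"nir": [0.43, 0.46, 0.41, 0.49, 0.45], "red": [0.07, 0.07, 0.08, 0.06, 0.07]}

df = pd.DataFrame({
    "sentinel2_ndvi": ndvi(sentinel2["nir"], sentinel2["red"]),
    "landsat9_ndvi":  ndvi(landsat9["nir"],  landsat9["red"]),
    "uav_ndvi":       ndvi(uav["nir"],       uav["red"]),
})

# Pairwise Pearson correlation matrix between the sensors' indices
# (this is the matrix one would feed to a heatmap plot).
r = df.corr()

# RMSE of each satellite-derived index against the UAV reference.
rmse = {c: float(np.sqrt(((df[c] - df["uav_ndvi"]) ** 2).mean()))
        for c in ("sentinel2_ndvi", "landsat9_ndvi")}
```

The same pattern extends to the other nine indices by swapping the index function.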

Keywords: landsat 9, leaf spectrometer, sentinel 2, UAV

Procedia PDF Downloads 78
405 Air–Water Two-Phase Flow Patterns in PEMFC Microchannels

Authors: Ibrahim Rassoul, A. Serir, E-K. Si Ahmed, J. Legrand

Abstract:

The acronym PEM refers to proton exchange membrane or, alternatively, polymer electrolyte membrane. Due to their high efficiency, low operating temperature (30-80 °C), and rapid evolution over the past decade, PEMFCs are increasingly emerging as a viable alternative clean power source for automotive and stationary applications. Before PEMFCs can be employed to power automobiles and homes, several key technical challenges must be properly addressed. One such challenge is elucidating the mechanisms underlying water transport in, and removal from, PEMFCs. On the one hand, sufficient water is needed in the polymer electrolyte membrane (PEM) to maintain sufficiently high proton conductivity. On the other hand, too much liquid water in the cathode can cause 'flooding' (that is, pore space filled with excessive liquid water) and hinder the transport of the oxygen reactant from the gas flow channel (GFC) to the three-phase reaction sites. The transparent experimental fuel cell used in this work was designed to reproduce the actual full-scale fuel cell geometry. Depending on the operating conditions, a number of flow regimes may appear in the microchannel: droplet flow, blocking liquid-water bridges/plugs (concave and convex forms), slug/plug flow and film flow. Some of these flow patterns are new, while others have already been observed in PEMFC microchannels. An algorithm in MATLAB was developed to automatically determine the flow structure (e.g. slug, droplet, plug, film) of the detected liquid water in the test microchannels and yield information on the distribution of water among the different flow structures. A video processing algorithm was developed to automatically detect the dynamic and static liquid water present in the gas channels and generate relevant quantitative information. The potential benefit of this software is that it allows the user to obtain measurements from images of small objects in a more precise and systematic way. 
The void fractions are also determined from the image analysis. The aim of this work is to provide a comprehensive characterization of two-phase flow in an operating fuel cell, which can be used towards the optimization of water management, informs design guidelines for gas delivery microchannels for fuel cells, and is essential in the design and control of diverse applications. The approach combines numerical modeling with experimental visualization and measurements.
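The original detection algorithm was written in MATLAB and is not reproduced in the abstract. The Python sketch below illustrates the general idea only, under the assumption that liquid water appears darker than the dry channel in a normalized grayscale frame; the threshold, the synthetic frame, and the bounding-box classification rules are all illustrative, not the authors' actual criteria.

```python
import numpy as np

def liquid_fraction(frame, threshold=0.5):
    """Binary liquid-water mask and area fraction from a 2-D grayscale
    frame with intensities in [0, 1]; liquid is assumed darker than the
    dry channel wall."""
    mask = np.asarray(frame) < threshold
    return mask, float(mask.mean())

def classify_structure(mask, channel_width):
    """Crude structure label from the liquid region's bounding box:
    a region spanning the channel width is a slug/plug, an elongated
    thin region is a film, and a small compact region is a droplet.
    Rows run across the channel, columns along it."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return "dry"
    h = ys.max() - ys.min() + 1  # extent across the channel
    w = xs.max() - xs.min() + 1  # extent along the channel
    if h >= 0.9 * channel_width:
        return "slug/plug"
    if w >= 3 * h:
        return "film"
    return "droplet"

# Synthetic 20x40 frame: bright channel with one dark 4x4 droplet.
frame = np.ones((20, 40))
frame[8:12, 10:14] = 0.2
mask, frac = liquid_fraction(frame)      # frac doubles as a void-fraction proxy
label = classify_structure(mask, channel_width=20)
```

Applied frame by frame to a video, the per-frame labels give the distribution of water among the flow structures, as the abstract describes.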

Keywords: polymer electrolyte fuel cell, air-water two phase flow, gas diffusion layer, microchannels, advancing contact angle, receding contact angle, void fraction, surface tension, image processing

Procedia PDF Downloads 282
404 Prevalence of Behavioral and Emotional Problems in School Going Adolescents in India

Authors: Anshu Gupta, Charu Gupta

Abstract:

Background: Adolescence is the transitional period between puberty and adulthood, marked by immense turmoil in the emotional and behavioral spheres. Adolescents are at risk of an array of behavioral and emotional problems, resulting in impairments of social, academic and vocational function. Conflicts in the family and the inability of parents to cope with the changing demands of an adolescent have a negative impact on the overall development of the child. This augurs ill for the individual's future, resulting in depression, delinquency and suicide, among other problems. Aim: The aim of the study was to compare the prevalence of behavioral and emotional problems in school-going adolescents aged 13 to 15 years residing in Ludhiana city. Method: A total of 1380 school children in the age group of 13 to 15 years were assessed with the adolescent health screening questionnaire (FAPS) and the Youth Self-Report (2001) questionnaire. Statistical significance was ascertained by the t-test, the chi-square test (χ²) and ANOVA, as appropriate. Results: A considerably high prevalence of behavioral and emotional problems was found in school-going adolescents (26.5%), higher in girls (31.7%) than in boys (24.4%). In boys, the problem rate was highest in the 13-year age group (28.2%), followed by a significant decline by the age of 14 years (24.2%) and 15 years (19.6%). In girls, too, the problem rate was highest in the 13-year age group (32.4%), followed by a marginal decline in the 14-year (31.8%) and 15-year (30.2%) age groups. Demographic factors were noncontributory. Internalizing syndrome (22.4%) was the most common problem, followed by the neither-internalizing-nor-externalizing group (17.6%). In the internalizing group, most (26.5%) of the students were observed to be anxious/depressed. Social problems were the most frequent (10.6%) in the neither-internalizing-nor-externalizing group. 
Aggressive behavior was the most common problem (8.4%) in the externalizing group. Internalizing problems, mainly anxiety and depression, were more common in females (30.6%) than in males (24.6%). More boys (16%) than girls (13.4%) were reported to suffer from externalizing disorders. A critical review of the data showed that most of the adolescents had poor knowledge about reproductive health: almost 36% reported that the source of their information on sexual and reproductive health was friends and the electronic media. A high percentage of adolescents reported being worried about sexual abuse (20.2%), with the majority of them being girls (93.6%), reflecting poorly on the social setup in the country. About 41% of adolescents reported being concerned about body weight, most of them girls (92.4%). Up to 14.5% reported having thoughts of using alcohol or drugs, perhaps due to the easy availability of substances of abuse in this part of the country, and 12.8% (mostly girls) reported suicidal thoughts. Summary/conclusion: There is a high prevalence of emotional and behavioral problems among school-going adolescents. Resolution of these problems during adolescence is essential for attaining a healthy adulthood. The need of the hour is to spread awareness among caregivers and to formulate effective management strategies, including a school mental health programme.

Keywords: adolescence, behavioral, emotional, internalizing problem

Procedia PDF Downloads 253
403 Secure Optimized Ingress Filtering in Future Internet Communication

Authors: Bander Alzahrani, Mohammed Alreshoodi

Abstract:

Information-centric networking (ICN) using architectures such as the Publish-Subscribe Internet Technology (PURSUIT) has been proposed as a new networking model that aims to replace the current end-centric networking model of the Internet. This emerging model focuses on what is being exchanged rather than on which network entities are exchanging information, which allows control plane functions such as routing and host location to be specified according to the content items. The forwarding plane of the PURSUIT ICN architecture uses a simple and lightweight mechanism based on Bloom filter technology to forward packets. Although this forwarding scheme solves many problems of today's Internet, such as routing table growth and scalability issues, it is vulnerable to brute-force attacks, which are a starting point for distributed denial-of-service (DDoS) attacks. In this work, we design and analyze a novel source-routing and information delivery technique that keeps the simplicity of Bloom filter-based forwarding while being able to deter attacks such as denial-of-service attacks at the ingress of the network. To achieve this, special forwarding nodes called Edge-FWs are attached directly to end-user nodes and used to perform a security test on maliciously injected random packets at the ingress of the path, preventing possible brute-force attacks at an early stage. In this technique, a core entity of the PURSUIT ICN architecture called the topology manager, which is responsible for finding the shortest path and creating the forwarding identifier (FId), uses a cryptographically secure hash function to create a 64-bit hash, h, over the formed FId; this hash is included in the packet for authentication purposes. Our proposal restricts the attacker from injecting packets carrying random FIds with a high filling factor ρ by optimizing and reducing the maximum allowed filling factor ρm in the network. 
We optimize the FId to the minimum possible filling factor, with ρ ≤ ρm, while still supporting longer delivery trees, so network scalability is not affected by the chosen ρm. With this scheme, the filling factor of any legitimate FId never exceeds ρm, and the filling factor of illegitimate FIds cannot exceed the chosen small value of ρm. Therefore, injecting a packet containing an FId with a large filling factor, to achieve a higher attack probability, is no longer possible. The preliminary analysis of this proposal indicates that, with the designed scheme, the forwarding function can detect and prevent malicious activities such as DDoS attacks at an early stage and with very high probability.
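The ingress test above can be sketched in a few lines, assuming an FId is an m-bit Bloom filter built by OR-ing the link identifiers along the path, with truncated SHA-256 standing in for the paper's unspecified 64-bit hash. The parameters (m = 256, k = 5 bits per link identifier, ρm = 0.5) are illustrative choices, not values from the paper.

```python
import hashlib
import secrets

M = 256  # Bloom filter length in bits (illustrative)

def link_id():
    """A random M-bit link identifier with exactly 5 bits set."""
    bits = 0
    while bin(bits).count("1") < 5:
        bits |= 1 << secrets.randbelow(M)
    return bits

def build_fid(path_links):
    """OR the link identifiers along the delivery path into one FId."""
    fid = 0
    for link in path_links:
        fid |= link
    return fid

def filling_factor(fid):
    """rho: fraction of bits set in the FId."""
    return bin(fid).count("1") / M

def fid_hash(fid):
    """64-bit authentication tag over the FId (truncated SHA-256 here)."""
    return hashlib.sha256(fid.to_bytes(M // 8, "big")).digest()[:8]

def ingress_check(fid, tag, rho_max=0.5):
    """Edge-FW test: drop packets whose FId is over-filled or whose
    authentication tag does not verify."""
    return filling_factor(fid) <= rho_max and fid_hash(fid) == tag

# A legitimate 6-hop FId passes; an all-ones FId (rho = 1) is dropped
# even if its tag verifies, because its filling factor exceeds rho_max.
path = [link_id() for _ in range(6)]
fid = build_fid(path)
tag = fid_hash(fid)
legit_ok = ingress_check(fid, tag)
attack_fid = (1 << M) - 1
attack_ok = ingress_check(attack_fid, fid_hash(attack_fid))
```

This shows why capping ρm deters brute-force FId injection: a random FId dense enough to match many links by chance is rejected before forwarding.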

Keywords: forwarding identifier, filling factor, information centric network, topology manager

Procedia PDF Downloads 132
402 Making Meaning, Authenticity, and Redefining a Future in Former Refugees and Asylum Seekers Detained in Australia

Authors: Lynne McCormack, Andrew Digges

Abstract:

Since 2013, the Australian government has enforced mandatory detention of anyone arriving in Australia without a valid visa, including those subsequently identified as a refugee or seeking asylum. While consistent with the increased use of immigration detention internationally, Australia’s use of offshore processing facilities both during and subsequent to refugee status determination processing has until recently remained a unique feature of Australia’s program of deterrence. The commonplace detention of refugees and asylum seekers following displacement is a significant and independent source of trauma and a contributory factor in adverse psychological outcomes. Officially, these individuals have no prospect of resettlement in Australia, are barred from applying for substantive visas, and are frequently and indefinitely detained in closed facilities such as immigration detention centres, or alternative places of detention, including hotels. It is also important to note that the limited access to Australia’s immigration detention population made available to researchers often means that data available for secondary analysis may be incomplete or delayed in its release. Further, studies into the lived experience of refugees and asylum seekers are typically cross-sectional and convenience sampled, employing a variety of designs and research methodologies that limit comparability and focused on the immediacy of the individual’s experience. Consequently, how former detainees make sense of their experience, redefine their future trajectory upon release, and recover a sense of authenticity and purpose, is unknown. As such, the present study sought the positive and negative subjective interpretations of 6 participants in Australia regarding their lived experiences as refugees and asylum seekers within Australia’s immigration detention system and its impact on their future sense of self. 
It made use of interpretative phenomenological analysis (IPA), a qualitative research methodology that is interested in how individuals make sense of, and ascribe meaning to, their unique lived experiences of phenomena. Underpinned by phenomenology, hermeneutics, and critical realism, this idiographic study aimed to explore both positive and negative subjective interpretations of former refugees and asylum seekers held in detention in Australia. It sought to understand how they make sense of their experiences, how detention has impacted their overall journey as displaced persons, and how they have moved forward in the aftermath of protracted detention in Australia. Examining the unique lived experiences of previously detained refugees and asylum seekers may inform the future development of theoretical models of posttraumatic growth among this vulnerable population, thereby informing the delivery of future mental health and resettlement services.

Keywords: mandatory detention, refugee, asylum seeker, authenticity, interpretative phenomenological analysis

Procedia PDF Downloads 73
401 Synthesis of Methanol through Photocatalytic Conversion of CO₂: A Green Chemistry Approach

Authors: Sankha Chakrabortty, Biswajit Ruj, Parimal Pal

Abstract:

Methanol is one of the most important chemical products and intermediates. It can be used as a solvent, an intermediate, or a raw material for a number of higher-value products, fuels or additives. Over the last decade, the total global demand for methanol has increased drastically, which compels scientists to produce large amounts of methanol from renewable sources to meet global demand in a sustainable way. Various non-renewable raw materials have been used for the synthesis of methanol on a large scale, which makes the process unsustainable. In these circumstances, the photocatalytic conversion of CO₂ into methanol under solar/UV excitation becomes a viable, sustainable production approach that not only addresses the environmental crisis by recycling CO₂ into fuels but also removes CO₂ from the atmosphere. Developing such a sustainable production approach for CO₂ conversion into methanol still remains a major challenge in current research compared with conventional, energy-expensive processes. Against this backdrop, the development of environmentally friendly materials such as photocatalysts has taken on great importance for methanol synthesis. Scientists in this field are constantly seeking improved photocatalysts to enhance photocatalytic performance. Graphene-based hybrid and composite materials with improved properties could be better nanomaterials for the selective conversion of CO₂ to methanol under visible light (solar energy) or UV light. The present work synthesizes an improved heterogeneous graphene-based photocatalyst with enhanced catalytic activity and surface area. Graphene with an enhanced surface area is used as the coupling material for copper-loaded titanium oxide to improve the electron capture and transport properties, which substantially increases the photoinduced charge transfer and extends the lifetime of the photogenerated charge carriers. 
A fast reduction method with H₂ purging was adopted to synthesize the improved graphene, whereas an ultrasonication-based sol-gel method was applied for the preparation of the graphene-coupled, copper-loaded titanium oxide with enhanced properties. The prepared photocatalysts were exhaustively characterized using different characterization techniques. The effects of catalyst dose, CO₂ flow rate, reaction temperature and stirring time on the efficacy of the system, in terms of methanol yield and productivity, were studied. The study showed that the newly synthesized photocatalyst with an enhanced surface sustains a methanol productivity and yield of 0.14 g/Lh and 0.04 g/gcat, respectively, after 3 h of illumination under UV (250 W) at an optimum catalyst dosage of 10 g/L with a 1:2:3 (graphene:TiO₂:Cu) weight ratio.
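As a quick arithmetic check, the two reported figures are mutually consistent: 0.14 g/Lh over 3 h of illumination gives 0.42 g of methanol per litre, and dividing by the 10 g/L catalyst dose gives about 0.04 g per gram of catalyst, matching the stated yield.

```python
# Consistency check of the reported figures (values from the abstract).
productivity = 0.14    # g of methanol per litre per hour
hours = 3.0            # illumination time, h
catalyst_dose = 10.0   # g of catalyst per litre

methanol_per_litre = productivity * hours                 # g/L after 3 h
yield_per_gram_cat = methanol_per_litre / catalyst_dose   # g per g of catalyst
```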

Keywords: renewable energy, CO₂ capture, photocatalytic conversion, methanol

Procedia PDF Downloads 89
400 Quantum Conductance Based Mechanical Sensors Fabricated with Closely Spaced Metallic Nanoparticle Arrays

Authors: Min Han, Di Wu, Lin Yuan, Fei Liu

Abstract:

Mechanical sensors have undergone a continuous evolution and have become an important part of many industries, ranging from manufacturing to process, chemicals, machinery, health care, environmental monitoring, automotive, avionics, and household appliances. Concurrently, microelectronics and microfabrication technology have provided us with the means of producing mechanical microsensors characterized by high sensitivity, small size, integrated electronics, on-board calibration, and low cost. Here we report a new kind of mechanical sensor based on the quantum transport of electrons in closely spaced nanoparticle films covering a flexible polymer sheet. The nanoparticle films were fabricated by gas-phase deposition of preformed metal nanoparticles with controlled coverage on the electrodes. To amplify the conductance of the nanoparticle array, we fabricated silver interdigital electrodes on polyethylene terephthalate (PET) by mask evaporation deposition. The gaps of the electrodes ranged from 3 to 30 μm. Metal nanoparticles were generated from a magnetron-plasma gas-aggregation cluster source and deposited on the interdigital electrodes. Closely spaced nanoparticle arrays with different coverages could be obtained by monitoring the conductance in real time. In the film, Coulomb blockade and quantum tunneling/hopping dominate the electronic conduction mechanism. The basic principle of the mechanical sensors relies on mechanical deformations of the fabricated devices being translated into electrical signals. Several kinds of sensing devices have been explored. As a strain sensor, the device showed high sensitivity as well as a very wide dynamic range. A gauge factor of 100 or more was demonstrated, at least one order of magnitude higher than that of conventional metal-foil gauges and even better than that of semiconductor-based gauges, with a workable maximum applied strain beyond 3%. 
These devices thus have the potential to be a new generation of strain sensors with performance superior to that of currently existing strain sensors, including metallic strain gauges and semiconductor strain gauges. When integrated into a pressure gauge, the devices demonstrated the ability to measure pressure changes as small as 20 Pa near atmospheric pressure. Quantitative vibration measurements were realized on a free-standing cantilever structure fabricated with a closely spaced nanoparticle array sensing element. Moreover, the mechanical sensor elements can easily be scaled down, which makes them feasible for MEMS and NEMS applications.
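The gauge factor quoted above is defined as the relative resistance change per unit strain, GF = (ΔR/R₀)/ε. A short sketch with invented resistance values (not data from the paper) shows the scale of the claimed advantage over a metal-foil gauge:

```python
def gauge_factor(r0, r_strained, strain):
    """Gauge factor GF = (delta_R / R0) / strain."""
    return (r_strained - r0) / r0 / strain

# Illustrative numbers: at 0.1% strain, a GF of 100 corresponds to a 10%
# resistance change, versus ~0.2% for a typical metal-foil gauge (GF ~ 2).
gf_film = gauge_factor(1000.0, 1100.0, 0.001)  # nanoparticle-film sensor
gf_foil = gauge_factor(1000.0, 1002.0, 0.001)  # metal foil, for comparison
```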

Keywords: gas phase deposition, mechanical sensors, metallic nanoparticle arrays, quantum conductance

Procedia PDF Downloads 254
399 Thinking Historiographically in the 21st Century: The Case of Spanish Musicology, a History of Music without History

Authors: Carmen Noheda

Abstract:

This text provides a reflection on the way of thinking about the study of the history of music by examining the production of historiography in Spain at the turn of the century. Based on concepts developed by the historical theorist Jörn Rüsen, the article focuses on the following aspects: the theoretical artifacts that structure the interpretation of the limits of writing the history of music, the narrative patterns used to give meaning to the discourse of history, and the orientation context that functions as a source of criteria of significance for both interpretation and representation. This analysis intends to show that historical music theory is not only a means to abstractly explore the complex questions connected to the production of historical knowledge, but also a tool for obtaining concrete images about the intellectual practice of professional musicologists. Writing about the historiography of contemporary Spanish music is a task that requires both a knowledge of the history that is being written and investigated, as well as a familiarity with current theoretical trends and methodologies that allow for the recognition and definition of the different tendencies that have arisen in recent decades. With the objective of carrying out these premises, this project takes as its point of departure the 'immediate historiography' in relation to Spanish music at the beginning of the 21st century. The hesitation that Spanish musicology has shown in opening itself to new anthropological and sociological approaches, along with its rigidity in the face of the multiple shifts in dynamic forms of thinking about history, have produced a standstill whose consequences can be seen in the delayed reception of the historiographical revolutions that have emerged in the last century. Methodologically, this essay is underpinned by Rüsen’s notion of the disciplinary matrix, which is an important contribution to the understanding of historiography. 
Combined with his parallel conception of differing paradigms of historiography, it is useful for analyzing the present-day forms of thinking about the history of music. Following these theories, the article will in the first place address the characteristics and identification of present historiographical currents in Spanish musicology to thereby carry out an analysis based on the theories of Rüsen. Finally, it will establish some considerations for the future of musical historiography, whose atrophy has not only fostered the maintenance of an ingrained positivist tradition, but has also implied, in the case of Spain, an absence of methodological schools and an insufficient participation in international theoretical debates. An update of fundamental concepts has become necessary in order to understand that thinking historically about music demands that we remember that subjects are always linked by reciprocal interdependencies that structure and define what it is possible to create. In this sense, the fundamental aim of this research departs from the recognition that the history of music is embedded in the conditions that make it conceivable, communicable and comprehensible within a society.

Keywords: historiography, Jörn Rüsen, Spanish musicology, theory of history of music

Procedia PDF Downloads 167
398 The Highly Dispersed WO3-x Photocatalyst over the Confinement Effect of Mesoporous SBA-15 Molecular Sieves for Photocatalytic Nitrogen Reduction

Authors: Xiaoling Ren, Guidong Yang

Abstract:

As one of the largest industrial synthetic chemicals in the world, ammonia has the advantages of high energy density, easy liquefaction, and easy transportation, and is widely used in agriculture, the chemical industry, energy storage, and other fields. The industrial Haber-Bosch process for ammonia synthesis is generally conducted under severe conditions. It is essential to develop a green, sustainable strategy for ammonia production to meet the growing demand. In this direction, photocatalytic nitrogen reduction has huge advantages over the traditional, well-established Haber-Bosch process, such as the utilization of natural sunlight as the energy source and significantly lower pressure and temperature for the reaction. However, the high activation energy of nitrogen and the low efficiency of photo-generated electron-hole separation in the photocatalyst result in low ammonia production yields. Many researchers focus on improving the catalyst itself. Beyond modifying the catalyst, improving its dispersion and making full use of its active sites are also means to improve the overall catalytic activity; few studies have been carried out on this, which is the aim of this work. In this work, by making full use of the nitrogen activation ability of WO3-x with defective sites, a small-size WO3-x photocatalyst with high dispersibility was constructed, with the growth of WO3-x restricted by using a high-specific-surface-area mesoporous SBA-15 molecular sieve with a regular pore structure as a template. The morphology of pure SBA-15 and WO3-x/SBA-15 was characterized by scanning electron microscopy (SEM). Compared with pure SBA-15, some small particles can be found in the WO3-x/SBA-15 material, which means that WO3-x grows into small particles under the confinement of SBA-15, which is conducive to the exposure of catalytically active sites. 
To elucidate the chemical nature of the material, X-ray diffraction (XRD) analysis was conducted. The observed diffraction pattern of WO3-x is in good agreement with that of JCPDS file no. 71-2450. Compared with WO3-x, no new peaks appeared in WO3-x/SBA-15, so it can be concluded that WO3-x/SBA-15 was synthesized successfully. In order to provide more active sites, the mass content of WO3-x was optimized. The photocatalytic nitrogen reduction performances of the above samples were then measured with methanol as a hole scavenger. The results show that the overall ammonia production performance of WO3-x/SBA-15 is improved over that of pure bulk WO3-x. These results prove that making full use of active sites is also a means to improve overall catalytic activity. This work provides a material basis for the design of high-efficiency photocatalytic nitrogen reduction catalysts.

Keywords: ammonia, photocatalytic, nitrogen reduction, WO3-x, high dispersibility

Procedia PDF Downloads 132
397 Innovative Technologies of Distant Spectral Temperature Control

Authors: Leonid Zhukov, Dmytro Petrenko

Abstract:

Optical thermometry has no alternative in many cases where the most effective continuous industrial temperature control is required. Classical optical thermometry technologies can be used on objects that are accessible to pyrometers and have stable radiation characteristics and a stable transmissivity of the intermediate medium. Without temperature corrections, this is possible for a "black" body in energy pyrometry and for "black" and "grey" bodies in spectral-ratio pyrometry; with corrections, it is possible for any colored body. Consequently, as the number of operating wavelengths increases, the possibilities of optical thermometry to reduce methodical errors expand significantly. That is why, over the recent 25-30 years, research has been reoriented toward more advanced spectral (multicolor) thermometry technologies. Two physical substances are involved in optical thermometry: matter (the controlled object) and the electromagnetic field (thermal radiation). Heat is transferred by radiation; therefore, the radiation has energy, entropy, and temperature. Optical thermometry originated alongside the development of thermal radiation theory, when the concept and the term "radiation temperature" were not yet used, and the concepts and terms "conditional temperatures" or "pseudo-temperature" of controlled objects were introduced instead. These do not correspond to the physical sense and definitions of temperature in thermodynamics, molecular-kinetic theory, and statistical physics. The discussion launched by the scientific thermometric community about the possibility of measuring the temperatures of objects, including colored bodies, from the temperatures of their radiation is not finished. Is the information about controlled objects carried by their radiation sufficient for temperature measurements? The positive and negative answers to this fundamental question have divided experts into two opposite camps. 
Recent achievements of spectral thermometry have developed events in its favour and leave little hope for the skeptics. This article presents the results of investigations and developments in the field of spectral thermometry carried out by the authors in the Department of Thermometry and Physico-Chemical Investigations. The authors have many years of experience with modern optical thermometry technologies. Innovative technologies of continuous optical temperature control have been developed: symmetric-wave, two-color compensative, and, based on the obtained nonlinearity equation of the spectral emissivity distribution, linear, two-range, and parabolic. The technologies are based on direct measurements of the radiation temperatures physically substantiated and proposed by Prof. L. Zhukov, followed by calculation of the controlled object temperature from these radiation temperatures and the corresponding mathematical models. The technologies significantly improve the metrological characteristics of continuous contactless and light-guide temperature control in power, metallurgical, ceramic, glass, and other industries. For example, under the same conditions, the methodical errors of the proposed technologies are smaller than the errors of known spectral and classical technologies by factors of 2 and 3-13, respectively. The innovative technologies allow quality products to be obtained at the lowest possible resource costs, including energy. More than 600 publications have been produced on the completed developments, including more than 100 domestic patents, as well as 34 patents in Australia, Bulgaria, Germany, France, Canada, the USA, Sweden, and Japan. The developments have been implemented in enterprises in the USA, as well as in Western Europe and Asia, including Germany and Japan.
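The spectral-ratio principle the abstract builds on can be illustrated with a short sketch. Under the Wien approximation, the ratio of radiances at two wavelengths yields a ratio temperature that is exact for black and gray bodies; the wavelengths and emissivity below are illustrative, not the authors' instrument parameters.

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_radiance(lam, temp, emissivity=1.0):
    """Spectral radiance under the Wien approximation (C1 omitted: it cancels in ratios)."""
    return emissivity * lam ** -5 * math.exp(-C2 / (lam * temp))

def ratio_temperature(l1, l2, lam1, lam2):
    """Spectral-ratio (two-color) temperature from radiances at lam1, lam2.

    Exact for black and gray bodies; for colored bodies the unequal
    emissivities introduce the methodical error that the technologies
    described above are designed to reduce."""
    return C2 * (1 / lam2 - 1 / lam1) / (math.log(l1 / l2) - 5 * math.log(lam2 / lam1))

# round trip for a gray body at 1800 K
lam1, lam2 = 0.65e-6, 0.90e-6
l1 = wien_radiance(lam1, 1800.0, emissivity=0.4)
l2 = wien_radiance(lam2, 1800.0, emissivity=0.4)
print(ratio_temperature(l1, l2, lam1, lam2))  # -> 1800.0 (to rounding)
```

Because the gray-body emissivity cancels in the ratio, the recovered temperature is independent of it; a wavelength-dependent emissivity would bias the result, which is where multi-wavelength corrections come in.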

Keywords: emissivity, radiation temperature, object temperature, spectral thermometry

Procedia PDF Downloads 74
396 Genetics of Pharmacokinetic Drug-Drug Interactions of Most Commonly Used Drug Combinations in the UK: Uncovering Unrecognised Associations

Authors: Mustafa Malki, Ewan R. Pearson

Abstract:

Tools utilized by health care practitioners to flag potential adverse drug reactions secondary to drug-drug interactions ignore individual genetic variation, which has the potential to markedly alter the severity of these interactions. To the best of our knowledge, there have been few published studies on the impact of genetic variation on drug-drug interactions. Therefore, our aim in this project is the discovery of previously unrecognized, clinically important drug-drug-gene interactions (DDGIs) within the list of the most commonly used drug combinations in the UK. The UK Biobank (UKBB) database was utilized to identify the most frequently prescribed drug combinations in the UK with at least one route of interaction (more than 200 combinations were identified). We recognised 37 common and unique interacting genes across all of our drug combinations. Out of around 600 potential genetic variants found in these 37 genes, 100 variants met the selection criteria (common variant with minor allele frequency ≥ 5%, independence, and passing the Hardy-Weinberg equilibrium (HWE) test). The association between these variants and the use of each of our top drug combinations was tested with a case-control analysis under the log-additive model. As the data are cross-sectional, drug intolerance was inferred from the genotype distribution, indicated by a lower percentage of patients who both carry the risk allele and are on the drug combination compared to those free of these risk factors, and vice versa for drug tolerance. In the GoDARTs database, the same list of common drug combinations identified by the UKBB was utilized with the same list of candidate genetic variants, but with the addition of 14 new SNPs, giving a total of 114 variants that met the selection criteria in GoDARTs. From the list of the top 200 drug combinations, we selected 28 combinations in which the two drugs in each combination are known to be used chronically. 
For each of our 28 combinations, three drug response phenotypes were identified (drug stop/switch, dose decrease, or dose increase of either of the two drugs during their interaction). The association between each of the three phenotypes belonging to each of our 28 drug combinations was tested against our 114 candidate genetic variants. The results show replication of four findings between both databases: (1) Omeprazole + Amitriptyline + rs2246709 (A > G) variant in the CYP3A4 gene (p-values and ORs with the UKBB and GoDARTs, respectively = 0.048, 0.037, 0.92, and 0.52; dose increase phenotype); (2) Simvastatin + Ranitidine + rs9332197 (T > C) variant in the CYP2C9 gene (0.024, 0.032, 0.81, and 5.75; drug stop/switch phenotype); (3) Atorvastatin + Doxazosin + rs9282564 (T > C) variant in the ABCB1 gene (0.0015, 0.0095, 1.58, and 3.14; drug stop/switch phenotype); and (4) Simvastatin + Nifedipine + rs2257401 (C > G) variant in the CYP3A7 gene (0.025, 0.019, 0.77, and 0.30; drug stop/switch phenotype). In addition, some other non-replicated, but interesting, significant findings were detected. Our work also provides a rich source of information for researchers interested in DD, DG, or DDG interaction studies, as it highlights the top common drug combinations in the UK and recognizes 114 candidate genetic variants related to drug pharmacokinetics.
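A minimal sketch of the per-variant test described above: the log-additive model is a logistic regression of drug-combination use on risk-allele dosage (0/1/2). The genotype counts below are invented for illustration and are not UKBB or GoDARTs data.

```python
import math

def fit_log_additive(dosages, status, iters=25):
    """Logistic regression: status ~ intercept + beta * allele dosage,
    fitted by Newton-Raphson; returns (intercept, per-allele log odds ratio)."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(dosages, status):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))   # fitted probability
            w = p * (1.0 - p)                             # IRLS weight
            g0 += y - p
            g1 += (y - p) * x
            h00 += w
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01                       # 2x2 Hessian inverse
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# hypothetical genotype counts (0/1/2 risk alleles) for users vs non-users
cases    = [0] * 30 + [1] * 50 + [2] * 20   # on the drug combination
controls = [0] * 50 + [1] * 40 + [2] * 10
dosages = cases + controls
status = [1] * len(cases) + [0] * len(controls)
b0, b1 = fit_log_additive(dosages, status)
print(math.exp(b1))  # per-allele odds ratio, > 1 for these counts
```

In practice a package such as PLINK or a regression library would be used, with covariates added; the sketch only shows what "log-additive" means operationally.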

Keywords: adverse drug reactions, common drug combinations, drug-drug-gene interactions, pharmacogenomics

Procedia PDF Downloads 131
395 Slope Stability and Landslide Hazard Analysis, Limitations of Existing Approaches, and a New Direction

Authors: Alisawi Alaa T., Collins P. E. F.

Abstract:

The analysis and evaluation of slope stability and landslide hazards are critically important in civil engineering projects and broader considerations of safety. The level of slope stability risk should be identified because of its significant and direct financial and safety effects. Slope stability hazard analysis is performed considering static and/or dynamic loading circumstances. To reduce and/or prevent the failure hazard caused by landslides, a sophisticated and practical hazard analysis method using advanced constitutive modeling should be developed and linked to an effective solution that corresponds to the specific type of slope stability and landslide failure risk. Previous studies on slope stability analysis methods identify the failure mechanism and its corresponding solution. The commonly used approaches include limit equilibrium methods, empirical approaches for rock slopes (e.g., slope mass rating and Q-slope), finite element or finite difference methods, and distinct element codes. This study presents an overview and evaluation of these analysis techniques. Contemporary source materials are used to examine these methods on the basis of their hypotheses, factor of safety estimation, soil types, load conditions, and analysis conditions and limitations. Limit equilibrium methods play a key role in assessing the level of slope stability hazard. The slope stability safety level can be defined by comparing the shear stress with the shear strength. The slope is considered stable when the forces resisting movement are greater than those driving it, with a factor of safety (the ratio of the resisting to the driving forces) greater than 1.00. 
However, popular and practical methods, including limit equilibrium approaches, are not effective when the slope experiences complex failure mechanisms, such as progressive failure, liquefaction, internal deformation, or creep. The present study represents the first episode of an ongoing project that involves the identification of the types of landslide hazards; assessment of the level of slope stability hazard; development of a sophisticated and practical hazard analysis method; linkage of the failure type of specific landslide conditions to the appropriate solution; and application of an advanced computational method for mapping slope stability properties in the United Kingdom and elsewhere through a geographical information system (GIS) and the inverse distance weighted (IDW) spatial interpolation technique. This study investigates and assesses the different analysis and solution techniques to enhance knowledge of the mechanisms of slope stability and landslide hazard analysis and to determine the available solutions for each potential landslide failure risk.
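For the simplest limit equilibrium case, an infinite slope with a planar slip surface, the factor of safety defined above can be written in closed form. The soil parameters below are illustrative, not taken from the study.

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, u=0.0):
    """Factor of safety for an infinite slope with Mohr-Coulomb strength.

    c: effective cohesion (kPa), phi_deg: friction angle (deg),
    gamma: unit weight (kN/m^3), z: depth to slip plane (m),
    beta_deg: slope angle (deg), u: pore pressure on the slip plane (kPa).
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    driving = gamma * z * math.sin(beta) * math.cos(beta)            # shear stress
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    return resisting / driving

# dry slope: c' = 10 kPa, phi' = 30 deg, gamma = 18 kN/m^3, z = 5 m, beta = 30 deg
print(infinite_slope_fs(10.0, 30.0, 18.0, 5.0, 30.0))  # -> ~1.26, i.e. stable
```

Note the limiting case the abstract alludes to: with c = 0 and u = 0 the expression collapses to tan(phi)/tan(beta), so a dry cohesionless slope at its friction angle sits exactly at FS = 1.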

Keywords: slope stability, finite element analysis, hazard analysis, landslides hazard

Procedia PDF Downloads 73
394 Bio-Hub Ecosystems: Investment Risk Analysis Using Monte Carlo Techno-Economic Analysis

Authors: Kimberly Samaha

Abstract:

In order to attract new types of investors into the emerging Bio-Economy, new methodologies to analyze investment risk are needed. The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding the use of biomass as a feedstock for power plants. This study looked at repurposing existing biomass-energy plants into Circular Zero-Waste Bio-Hub Ecosystems. The Bio-Hub model first targets a 'whole-tree' approach and then looks at the circular economics of co-hosting diverse industries (wood processing, aquaculture, agriculture) in the vicinity of biomass power plant facilities. This study modeled the economics and risk strategies of cradle-to-cradle linkages to incorporate the value-chain effects on capital/operational expenditures and investment risk reductions, using a proprietary techno-economic model that incorporates investment risk scenarios via the Monte Carlo methodology. The study calculated the sequential increases in profitability for each additional co-host on an operating forestry-based biomass energy plant in West Enfield, Maine. Phase I starts with the baseline of forestry biomass to electricity only and was built up in stages to include co-hosts of a greenhouse and a land-based shrimp farm. Phase I incorporates CO2 and heat waste streams from the operating power plant in an analysis of lowering and stabilizing the operating costs of the agriculture and aquaculture co-hosts. The Phase II analysis incorporated a jet-fuel biorefinery and its secondary slip-stream of biochar, which would be developed into two additional bio-products: 1) a soil amendment compost for agriculture and 2) a biochar effluent filter for the aquaculture. The second part of the study applied the Monte Carlo risk methodology to illustrate how co-location derisks investment in an integrated Bio-Hub versus individual investments in stand-alone projects of energy, agriculture, or aquaculture. 
The analyzed scenarios compared reductions in both capital and operating expenditures, which stabilize profits and reduce the investment risk associated with projects in energy, agriculture, and aquaculture. The major findings of this techno-economic modeling using the Monte Carlo technique resulted in the master plan for the first Bio-Hub to be built in West Enfield, Maine. In 2018, the site was designated as an economic opportunity zone as part of a Federal Program, which allows for capital gains tax benefits for investments on the site. Bioenergy facilities are currently at a critical juncture where they have an opportunity to be repurposed into efficient, profitable and socially responsible investments, or be idled and scrapped. The Bio-Hub Ecosystems techno-economic analysis model is a critical tool for expediting new standards for investments in circular zero-waste projects. Profitable projects will expedite adoption and advance the critical transition from the current 'take-make-dispose' paradigm inherent in the energy, forestry and food industries to a more sustainable Bio-Economy paradigm that supports local and rural communities.
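The risk comparison described above can be sketched with a toy Monte Carlo model. All cash-flow numbers and the size of the waste-heat saving are invented for illustration; the study's proprietary model covers far more line items and correlations.

```python
import random
import statistics

def npv(cashflows, rate=0.08):
    """Net present value of annual cash flows ($M), year 0 first."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def simulate_npv(n=5000, shared_waste_heat=False, seed=42):
    """Monte Carlo NPV for a toy biomass plant; co-hosting sells waste
    heat/CO2 to the greenhouse and shrimp farm, trimming net opex."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        price = rng.gauss(60.0, 8.0)      # electricity price, $/MWh (assumed)
        opex = rng.gauss(4.0, 0.6)        # net operating cost, $M/yr (assumed)
        if shared_waste_heat:
            opex -= 0.8                   # assumed co-host revenue/savings, $M/yr
        annual = 0.08 * price - opex      # net cash flow for an ~80 GWh/yr plant, $M
        results.append(npv([-5.0] + [annual] * 10))   # assumed $5M repurposing capex
    return results

standalone = simulate_npv()
integrated = simulate_npv(shared_waste_heat=True)
p_loss = lambda r: sum(x < 0 for x in r) / len(r)
print(statistics.mean(standalone), p_loss(standalone))
print(statistics.mean(integrated), p_loss(integrated))   # higher mean, lower loss probability
```

Because the same random draws are used for both scenarios (same seed), the comparison is paired: the integrated case shifts every sampled NPV upward, so both the mean improves and the probability of a negative NPV falls, which is the derisking effect the study quantifies.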

Keywords: bio-economy, investment risk, circular design, economic modelling

Procedia PDF Downloads 82
393 Moths of Indian Himalayas: Data Digging for Climate Change Monitoring

Authors: Angshuman Raha, Abesh Kumar Sanyal, Uttaran Bandyopadhyay, Kaushik Mallick, Kamalika Bhattacharyya, Subrata Gayen, Gaurab Nandi Das, Mohd. Ali, Kailash Chandra

Abstract:

The Indian Himalayan Region (IHR), due to its sheer latitudinal and altitudinal expanse, acts as a mixing ground for different zoogeographic faunal elements. The innumerable unique and distributionally restricted rare species of the IHR are constantly threatened with extinction under the ongoing climate change scenario, and many may have faced extinction without even being noticed or discovered. Monitoring the community dynamics of a suitable taxon is indispensable to assess the effect of this global perturbation at the micro-habitat level. Lepidoptera, particularly moths, are suitable for this purpose due to their huge diversity and strictly herbivorous nature. The present study aimed to collate scattered historical records of moths from the IHR and spatially disseminate them in a Geographic Information System (GIS) domain. The study also intended to identify moth species with significant altitudinal shifts, which could be prioritised for a monitoring programme to assess the effect of climate change on biodiversity. A robust database of moths recorded from the IHR was prepared from voluminous secondary literature and museum collections. Historical sampling points were transformed into richness grids, which were spatially overlaid on altitude, annual precipitation, and vegetation layers separately to show moth richness patterns along major environmental gradients. Primary samplings were done by setting standard light traps at 11 Protected Areas representing five Indian Himalayan biogeographic provinces. To identify significant altitudinal shifts, past and present altitudinal records of the species identified in the primary samplings were compared. A consolidated list of 4107 species belonging to 1726 genera in 62 families of moths was prepared from a total of 10,685 historical records from the IHR. 
Family-wise assemblage revealed Erebidae to be the most speciose family, with 913 species under 348 genera, followed by Geometridae with 879 species under 309 genera and Noctuidae with 525 species under 207 genera. Among biogeographic provinces, the Central Himalaya had the maximum records with 2248 species, followed by the Western and North-western Himalaya with 1799 and 877 species, respectively. Spatial analysis revealed that species richness was more or less uniform (up to 150 species recorded per cell) across the IHR. Throughout the IHR, the middle elevation zones between 1000-2000 m encompassed high species richness, and temperate coniferous forest associated with the 1500-2000 mm rainfall zone showed the maximum species richness. A total of 752 species of moths representing 23 families were identified from the present sampling. Thirteen genera were identified that are restricted to specialized habitats of alpine meadows above 3500 m. Five historical localities with a high richness of >150 species were selected, which could be considered for repeat sampling to assess the influence of climate change on moth assemblages. Of the seven species exhibiting a significant altitudinal ascent of >2000 m, Trachea auriplena, Diphtherocome fasciata (Noctuidae), and Actias winbrechlini (Saturniidae) showed the maximum range shift of >2500 m, indicating that these species need intensive monitoring. The Great Himalayan National Park harbours the most diverse assemblage of high-altitude restricted species and should be a priority site for habitat conservation. Among the 13 range-restricted genera, Arichanna, Opisthograptis, Photoscotosia (Geometridae), Phlogophora, Anaplectoides, and Paraxestia (Noctuidae) were dominant and require rigorous monitoring, as they are most susceptible to climatic perturbations.
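The altitudinal-shift screening reduces to comparing each species' historical and present elevation records. A minimal sketch with invented elevations (only the species names come from the abstract; the values are not the study's data):

```python
# elevation records in metres; values are illustrative, not the study's data
historical = {"Trachea auriplena": 800, "Diphtherocome fasciata": 900,
              "Actias winbrechlini": 700, "Some other species": 1500}
present = {"Trachea auriplena": 3400, "Diphtherocome fasciata": 3500,
           "Actias winbrechlini": 3300, "Some other species": 1700}

# upward shift for every species recorded in both periods
shifts = {sp: present[sp] - historical[sp]
          for sp in historical if sp in present}
# flag species whose ascent exceeds the 2000 m threshold used in the study
to_monitor = sorted(sp for sp, d in shifts.items() if d > 2000)
print(to_monitor)  # the species flagged for intensive monitoring
```

A real analysis would work from dated occurrence records rather than single values per species, and would control for sampling effort between the historical and present periods.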

Keywords: altitudinal shifts, climate change, historical records, Indian Himalayan region, Lepidoptera

Procedia PDF Downloads 155
392 Sustainable Harvesting, Conservation and Analysis of Genetic Diversity in Polygonatum Verticillatum Linn.

Authors: Anchal Rana

Abstract:

The Indian Himalayas, with their diverse climatic conditions, are home to many rare and endangered medicinal flora. One such species is Polygonatum verticillatum Linn., popularly known as King Solomon's Seal or Solomon's Seal. Its mention as a medicinal herb goes back 5000 years to the Indian Materia Medica, as a component of Ashtavarga, a poly-herbal formulation comprising eight herbs and described as the world's first revitalizing and rejuvenating nutraceutical food, now commercialised under the name 'Chaywanprash'. It is an erect, tall (60 to 120 cm) perennial herb with sessile, linear leaves and white pendulous flowers. The species grows well in an altitude range of 1600 to 3600 m amsl and propagates mostly through rhizomes. The rhizomes are a potential source of significant phytochemicals such as flavonoids, phenolics, lectins, terpenoids, allantoin, diosgenin, β-sitosterol, and quinine. The presence of such phytochemicals gives the species antioxidant, cardiotonic, demulcent, diuretic, energizing, emollient, aphrodisiac, appetizing, and galactagogue properties, among others. Having high concentrations of macro- and micronutrients, the species also has good prospects as a diet supplement. However, due to unscientific and gregarious uprooting, it has been assigned the status of 'vulnerable' and 'endangered' in the Conservation Assessment and Management Plan (CAMP) process conducted by the Foundation for Revitalisation of Local Health Traditions (FRLHT) during 2010, according to IUCN Red-List Criteria. Further, destructive harvesting, land use disturbances, heavy livestock grazing, climatic changes, and habitat fragmentation have substantially contributed to the decline of the species. It therefore became imperative to conserve the diversity of the species and to make judicious use of it in future research and commercial programmes and schemes. 
A gene bank was therefore established at the High Altitude Herbal Garden of the Forest Research Institute, Dehradun, India, situated at Chakarata (30°42'52.99''N, 77°51'36.77''E, 2205 m amsl), consisting of 149 accessions collected from thirty-one geographical locations spread over the three Himalayan states of Jammu and Kashmir, Himachal Pradesh, and Uttarakhand. The present investigation covers the sampling and collection of divergent germplasm, followed by planting and cultivation techniques. The ultimate aim is to analyse the genetic diversity of the species and capture promising genotypes for a further genetic improvement programme, so as to contribute towards sustainable development and healthcare.

Keywords: Polygonatum verticillatum Linn., phytochemicals, genetic diversity, conservation, gene bank

Procedia PDF Downloads 138
391 Graphics Processing Unit-Based Parallel Processing for Inverse Computation of Full-Field Material Properties Based on Quantitative Laser Ultrasound Visualization

Authors: Sheng-Po Tseng, Che-Hua Yang

Abstract:

Motivation and Objective: Ultrasonic guided waves have become an important tool for the nondestructive evaluation of structures and components. Guided waves are used for the purpose of identifying defects or evaluating material properties in a nondestructive way. When guided waves are applied to evaluate material properties, the properties are not known directly; preliminary signals such as time-domain signals or frequency-domain spectra are first recorded. With the measured ultrasound data, an inversion calculation can then be employed to obtain the desired mechanical properties. Methods: This research develops a high-speed inversion calculation technique for obtaining full-field mechanical properties from the quantitative laser ultrasound visualization system (QLUVS). The QLUVS employs a mirror-controlled scanning pulsed laser to generate guided acoustic waves traveling in a two-dimensional target. Guided waves are detected with a piezoelectric transducer located at a fixed location. With gyro-scanning of the generation source, the QLUVS has the advantage of fast, full-field, and quantitative inspection. Results and Discussions: This research introduces two important tools to improve the computation efficiency. Firstly, graphics processing units (GPUs) with a large number of cores are introduced. Secondly, combining the CPU and GPU cores, a parallel processing scheme is developed for the inversion of full-field mechanical properties based on the QLUVS data. The newly developed inversion scheme is applied to investigate the computation efficiency for single-layered and double-layered plate-like samples. The computation is shown to be 80 times faster than the unparallelized scheme. Conclusions: This research demonstrates a high-speed inversion technique for the characterization of full-field material properties based on the quantitative laser ultrasound visualization system. 
Significant computation efficiency is demonstrated, although the limit has not yet been reached; further improvement can be achieved by refining the parallel computation. Using this full-field mechanical property inspection technology, full-field mechanical properties can be obtained through nondestructive, high-speed, and high-precision measurements, with both qualitative and quantitative results. The developed high-speed computation scheme is ready for applications where full-field mechanical properties are needed in a nondestructive and nearly real-time way.
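The parallel structure the abstract exploits (each scan point's inversion is independent of every other point) can be sketched on the CPU. The forward model below is a deliberately simplified stand-in, a bulk shear-wave speed rather than a guided-wave dispersion relation, and the thread pool only illustrates the task decomposition; speedups of the reported magnitude require GPU or process-level parallelism.

```python
from concurrent.futures import ThreadPoolExecutor
import math

RHO = 2700.0  # assumed material density, kg/m^3

def model_velocity(shear_modulus):
    # Simplified forward model: bulk shear-wave speed. A real QLUVS inversion
    # would evaluate a guided-wave dispersion relation here.
    return math.sqrt(shear_modulus / RHO)

def invert_point(measured_velocity, lo=1e9, hi=1e11, tol=1e3):
    # Bisection on the monotone forward model; each scan point is an
    # independent task, which makes the inversion embarrassingly parallel.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if model_velocity(mid) < measured_velocity:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def invert_field(velocity_map, workers=8):
    # One inversion task per scan point; on a GPU each task maps to a thread.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(invert_point, velocity_map))

velocities = [3000.0, 3100.0, 3200.0]   # measured wave speeds, m/s
print(invert_field(velocities))         # each entry is close to RHO * v**2
```

The design point is that the per-point work (a root find against the forward model) carries no shared state, so the map parallelizes trivially; the CPU-GPU split in the paper assigns these per-point tasks to GPU cores while the CPU orchestrates.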

Keywords: guided waves, material characterization, nondestructive evaluation, parallel processing

Procedia PDF Downloads 178
390 Boussinesq Model for Dam-Break Flow Analysis

Authors: Najibullah M, Soumendra Nath Kuiry

Abstract:

Dams and reservoirs are valued for their estimable contributions to irrigation, water supply, flood control, electricity generation, etc., which advance the prosperity and wealth of societies across the world. At the same time, a dam breach can cause a devastating flood that threatens human lives and property. Failures of large dams remain, fortunately, very seldom events. Nevertheless, a number of occurrences have been recorded around the world, corresponding on average to one or two failures worldwide every year, and some of those accidents have caused catastrophic consequences. It is therefore decisive to predict the dam-break flow for emergency planning and preparedness, as it poses a high risk to life and property. To mitigate the adverse impact of a dam break, modeling is necessary to gain a good understanding of the temporal and spatial evolution of dam-break floods. This study mainly deals with one-dimensional (1D) dam-break modeling. Less commonly used in the hydraulic research community, another possible option for modeling rapidly varied dam-break flows is the extended Boussinesq equations (BEs), which can describe the dynamics of short waves with reasonable accuracy. Unlike the shallow water equations (SWEs), the BEs take into account wave dispersion and the non-hydrostatic pressure distribution. To capture the dam-break oscillations accurately, a numerical scheme of at least fourth-order accuracy is needed to discretize the third-order dispersion terms present in the extended BEs. The scope of this work is therefore to develop a 1D Boussinesq model for dam-break flow analysis that is fourth-order accurate in both space and time, using a finite-volume / finite-difference scheme. The spatial discretization of the flux and dispersion terms is achieved through a combination of finite-volume and finite-difference approximations. 
The flux term was solved using a finite-volume discretization, whereas the bed source and dispersion terms were discretized using a centered finite-difference scheme. Time integration is achieved in two stages, namely a third-order Adams-Bashforth predictor stage and a fourth-order Adams-Moulton corrector stage. Implementation of the 1D Boussinesq model was done in Python 2.7.5. The performance of the developed model was evaluated by comparison with the volume-of-fluid (VOF) based commercial model ANSYS-CFX. The developed model is used to analyze the risk of cascading dam failures similar to the Panshet dam failure of 1961 in Pune, India. Moreover, this model can be used to predict wave overtopping more accurately than shallow water models when designing coastal protection structures.
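The two-stage time integration described above (third-order Adams-Bashforth predictor, fourth-order Adams-Moulton corrector) can be illustrated on a scalar ODE; in the actual model the Boussinesq right-hand side would replace the test function f. The RK4 start-up for the first two steps is an assumption of this sketch, not necessarily the authors' choice.

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step, used only to start the multistep method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def abm_integrate(f, y0, t0, t1, n):
    """AB3 predictor / AM4 corrector (PECE) for y' = f(t, y); needs n >= 3."""
    h = (t1 - t0) / n
    t, y = t0, y0
    ys, fs = [y0], [f(t0, y0)]
    for _ in range(2):                      # RK4 start-up for the first two steps
        y = rk4_step(f, t, y, h)
        t += h
        ys.append(y)
        fs.append(f(t, y))
    for _ in range(n - 2):
        # third-order Adams-Bashforth predictor
        yp = ys[-1] + h / 12 * (23 * fs[-1] - 16 * fs[-2] + 5 * fs[-3])
        t += h
        # fourth-order Adams-Moulton corrector, evaluated at the predicted value
        yc = ys[-1] + h / 24 * (9 * f(t, yp) + 19 * fs[-1] - 5 * fs[-2] + fs[-3])
        ys.append(yc)
        fs.append(f(t, yc))
    return ys

# test problem y' = -y, y(0) = 1, whose exact solution is exp(-t)
ys = abm_integrate(lambda t, y: -y, 1.0, 0.0, 1.0, 100)
print(abs(ys[-1] - math.exp(-1.0)))  # small global error (fourth-order accurate)
```

The predictor gives an explicit estimate from known function values; the corrector then uses that estimate in the implicit Adams-Moulton formula without iteration, which is what keeps the scheme explicit in cost while retaining fourth-order accuracy.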

Keywords: Boussinesq equation, coastal protection, dam-break flow, one-dimensional model

Procedia PDF Downloads 213