Search results for: boundary integral equations
537 Performance Improvement of Supersonic Minimum Length Nozzles at High Temperature
Authors: W. Hamaidia, T. Zebbiche
Abstract:
This paper presents the design of axisymmetric supersonic nozzles that accelerate the flow to a desired exit Mach number while keeping the nozzle mass small and the thrust high. The nozzle delivers a parallel, uniform flow at the exit section. It is divided into subsonic and supersonic regions: the supersonic portion is independent of the upstream conditions at the sonic line, while the subsonic portion produces a sonic flow at the throat. A nozzle of this type is termed a minimum length nozzle. The study is carried out at high temperature, below the dissociation threshold of the molecules, in order to improve the aerodynamic performance. Our aim is to improve performance both by increasing the exit Mach number and the thrust coefficient and by reducing the nozzle's mass. The variation of the specific heats with temperature is taken into account. The design is carried out by the Method of Characteristics, and the finite difference method with a predictor-corrector algorithm is used for the numerical resolution of the resulting nonlinear algebraic equations. The application is for air. All results depend on three parameters: the exit Mach number, the stagnation temperature, and the mesh chosen for the characteristics. A numerical simulation of the nozzle with the Computational Fluid Dynamics code CFD-FASTRAN was performed to determine and confirm the necessary design parameters.
Keywords: supersonic flow, axisymmetric minimum length nozzle, high temperature, method of characteristics, calorically imperfect gas, finite difference method, thrust coefficient, mass of the nozzle, specific heat at constant pressure, air, error
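The predictor-corrector idea used to resolve the nonlinear equations along the characteristics can be illustrated, in a minimal form, by a Heun-type step: an explicit Euler predictor followed by a trapezoidal corrector. The test equation and step count below are illustrative choices, not the paper's actual characteristic relations:

```python
def heun_step(f, x, y, h):
    """One predictor-corrector step: Euler predictor, trapezoidal corrector."""
    y_pred = y + h * f(x, y)                            # predictor (explicit Euler)
    return y + 0.5 * h * (f(x, y) + f(x + h, y_pred))   # corrector (trapezoid rule)

def integrate(f, x0, y0, x_end, n):
    """March the predictor-corrector scheme over n equal steps from x0 to x_end."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y = heun_step(f, x, y, h)
        x += h
    return y

# Illustrative test problem: dy/dx = -y, whose exact solution is exp(-x).
approx = integrate(lambda x, y: -y, 0.0, 1.0, 1.0, 10)
```

The same predictor-then-correct structure carries over when the right-hand side is the compatibility relation along a characteristic line rather than a toy ODE.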
Procedia PDF Downloads 150
536 Discourse Analysis: Where Cognition Meets Communication
Authors: Iryna Biskub
Abstract:
The interdisciplinary approach to modern linguistic studies is exemplified by the merging of various research methods, which sometimes complicates the verification of research results. This methodological confusion can be resolved by creating new techniques of linguistic analysis that combine several scientific paradigms. Modern linguistics has developed productive and efficient methods for investigating the cognitive and communicative phenomena of which language is the central issue. In the field of discourse studies, one of the best examples is Critical Discourse Analysis (CDA). CDA can be viewed both as a method of investigation and as a critical multidisciplinary perspective. In CDA the position of the scholar is crucial, since it reflects his or her social and political convictions. The generally accepted route to scientifically reliable results is to apply a well-defined method to the corresponding type of language phenomenon: cognitive methods to the cognitive aspects of language, communicative methods to its communicative nature. In recent decades, discourse as a sociocultural phenomenon has been the focus of careful linguistic research. The very concept of discourse represents an integral unity of the cognitive and communicative aspects of human verbal activity. Since a human being is never able to discriminate between the cognitive and communicative planes of discourse communication, it makes little sense to apply cognitive and communicative methods of research in isolation. It is possible to modify the classical CDA procedure by mapping human cognitive procedures onto the strategic communicative planning of discourse communication. The analysis of the electronic petition 'Block Donald J Trump from UK entry. The signatories believe Donald J Trump should be banned from UK entry' (584,459 signatures) and the parliamentary debates on it has demonstrated that cognitive and communicative levels can be mapped in the following way: the strategy of discourse modeling (communicative level) overlaps with the extraction of semantic macrostructures (cognitive level); the strategy of discourse management overlaps with the analysis of local meanings in discourse communication; and the strategy of cognitive monitoring of the discourse overlaps with the formation of attitudes and ideologies at the cognitive level. Thus, the experimental data have shown that it is possible to develop a new complex methodology of discourse analysis, where cognition meets communication, both metaphorically and literally. The same approach may prove productive for the creation of computational models of human-computer interaction, where the automatic generation of a particular type of discourse could be based on the rules of strategic planning involving the cognitive models of CDA.
Keywords: cognition, communication, discourse, strategy
Procedia PDF Downloads 253
535 Regression Analysis in Estimating Stream-Flow and the Effect of Hierarchical Clustering Analysis: A Case Study in the Euphrates-Tigris Basin
Authors: Goksel Ezgi Guzey, Bihrat Onoz
Abstract:
The scarcity of streamflow gauging stations and the increasing effects of global warming make the design of water management systems very difficult. This study contributes to the assessment of regional regression models for estimating streamflow. Simulated meteorological data were related to observed streamflow data from 1971 to 2020 for 33 stream gauging stations of the Euphrates-Tigris Basin. Ordinary least squares regression was used to predict flow for 2020-2100 from the simulated meteorological data. The CORDEX-EURO and CORDEX-MENA domains were used with 0.11° and 0.22° grids, respectively, to estimate climate conditions under certain climate scenarios. Twelve meteorological variables simulated by two regional climate models, RCA4 and RegCM4, were used as independent variables in the ordinary least squares regression, with observed streamflow as the dependent variable. The variability of streamflow was then modelled with 5-6 meteorological variables and watershed characteristics such as area and elevation. Following the regression analysis of 31 stream gauging stations' data, the stations were subjected to a clustering analysis, which grouped them into two clusters in terms of their hydrometeorological properties. Two streamflow equations were found for the two clusters of stream gauging stations for every domain and every regional climate model, which increased the efficiency of streamflow estimation by 10-15% for all the models. This study underlines the importance of the homogeneity of a region in estimating streamflow, not only in terms of geographical location but also in terms of the meteorological characteristics of that region.
Keywords: hydrology, streamflow estimation, climate change, hydrologic modeling, HBV, hydropower
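The cluster-then-regress idea can be sketched in a few lines of pure Python: stations are split into two groups by a hydrometeorological attribute, and a separate least-squares line is fitted per group. The station data, the single predictor, and the threshold split standing in for the hierarchical clustering are all invented for illustration:

```python
def ols_fit(xs, ys):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical stations: (elevation_m, precipitation_mm, streamflow_m3s).
stations = [
    (250, 400, 12.0), (300, 450, 14.1), (280, 420, 12.9),    # lowland cluster
    (1200, 800, 30.5), (1350, 900, 34.2), (1300, 850, 32.1)  # highland cluster
]

# Stand-in for the clustering step: split by elevation.
low = [s for s in stations if s[0] < 700]
high = [s for s in stations if s[0] >= 700]

# One streamflow equation per cluster, precipitation as the sole predictor.
eq_low = ols_fit([s[1] for s in low], [s[2] for s in low])
eq_high = ols_fit([s[1] for s in high], [s[2] for s in high])
```

Fitting per cluster rather than pooling all stations is exactly what lets each regional equation track its own hydrometeorological regime.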
Procedia PDF Downloads 129
534 Retrospective Analysis Demonstrates No Difference in Percutaneous Native Renal Biopsy Adequacy Between Nephrologists and Radiologists in University Hospital Crosshouse
Authors: Nicole Harley, Mahmoud Eid, Abdurahman Tarmal, Vishal Dey
Abstract:
Histological sampling plays an integral role in the diagnosis of renal diseases. Percutaneous native renal biopsy is typically performed under ultrasound guidance, with the service usually provided by nephrologists; in some centres, radiologists also perform renal biopsies. Previous comparative studies have demonstrated non-inferiority of percutaneous native renal biopsies performed by nephrologists compared with radiologists. We sought to compare biopsy adequacy between nephrologists and radiologists at University Hospital Crosshouse. The online system SERPR (Scottish Electronic Renal Patient Record) contains information on patients who have undergone renal biopsies. An online search was performed to acquire a list of all patients who underwent renal biopsy between 2013 and 2020 at University Hospital Crosshouse; 355 native renal biopsies were performed in total across this 7-year period. A retrospective analysis was performed on these cases, with records and reports assessed for the total number of glomeruli obtained per biopsy, whether the number of glomeruli obtained was adequate for diagnosis as per an internationally agreed standard, and whether a histological diagnosis was achieved. Nephrologists performed 43.9% of native renal biopsies (n=156) and radiologists performed 56.1% (n=199). The mean number of glomeruli obtained was 17.16+/-10.31 for nephrologists and 18.38+/-10.55 for radiologists; a t-test demonstrated no statistically significant difference between the specialties (p-value 0.277). Native renal biopsies must obtain at least 8 glomeruli to be diagnostic as per internationally agreed criteria. Nephrologists met this criterion in 88.5% of native renal biopsies (n=138) and radiologists in 89.5% (n=178).
T-test and chi-squared analysis demonstrated no statistically significant difference between the specialties (p-values 0.663 and 0.922, respectively). Biopsies performed by nephrologists yielded diagnostic tissue in 91.0% (n=142) of samples; biopsies performed by radiologists yielded diagnostic tissue in 92.4% (n=184). T-test and chi-squared analysis again demonstrated no statistically significant difference between the specialties (p-values 0.625 and 0.889, respectively). This project demonstrates that at University Hospital Crosshouse there is no statistical difference between radiologists and nephrologists in terms of glomeruli acquisition or samples achieving a histological diagnosis. Given the non-inferiority between specialties demonstrated by previous studies and this project, this evidence could support the restructuring of services to allow more renal biopsies to be performed by renal services and the reallocation of radiology department resources.
Keywords: biopsy, medical imaging, nephrology, radiology
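The adequacy comparison can be reproduced in outline with a hand-rolled 2x2 chi-squared test on the reported counts (138/156 adequate for nephrologists, 178/199 for radiologists). This sketch omits the Yates continuity correction, so its p-value will not match the abstract's figure exactly; it only illustrates the mechanics of the test:

```python
import math

def chi2_2x2(a, n1, b, n2):
    """Chi-squared test of independence (1 d.o.f., no continuity correction)
    comparing two proportions a/n1 and b/n2. Returns (statistic, p-value)."""
    table = [[a, n1 - a], [b, n2 - b]]
    total = n1 + n2
    col = [a + b, (n1 - a) + (n2 - b)]
    row = [n1, n2]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    # For 1 d.o.f., the chi-squared variable is the square of a standard
    # normal deviate, so the two-sided p-value follows from erfc.
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

chi2, p = chi2_2x2(138, 156, 178, 199)  # adequacy counts from the abstract
```

With these counts the p-value lands well above 0.05, consistent with the study's conclusion of no significant difference.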
Procedia PDF Downloads 81
533 A Green Optically Active Hydrogen and Oxygen Generation System Employing Terrestrial and Extra-Terrestrial Ultraviolet Solar Irradiance
Authors: H. Shahid
Abstract:
Due to ozone layer depletion on Earth, incoming ultraviolet (UV) radiation is recorded at very high index levels, such as 25 in southern Peru (13.5° S, 3360 m a.s.l.). The planning of human habitation on Mars, where UV radiation is also quite high, is likewise under discussion. UV exposure is a health hazard and is avoided with UV filters. On the other hand, artificial UV sources are used for water thermolysis to generate hydrogen and oxygen, which are later used as fuels. This paper presents the utility of employing UVA (315-400 nm) and UVB (280-315 nm) radiation from the solar spectrum to design and implement an optically active hydrogen and oxygen generation system via thermolysis of desalinated seawater. The proposed system finds its utility on Earth and could in the future be deployed on Mars (UVB). In this system, using Fresnel lens arrays as an optical filter and via active tracking, the ultraviolet light from the sun is concentrated and then allowed to fall on two sub-systems. The first sub-system generates electrical energy using UV-based tandem photovoltaic cells such as GaAs/GaInP/GaInAs/GaInAsP; the second elevates the temperature of the water to lower the electric potential required to electrolyze it. An empirical analysis was performed at 30 atm, and the electrical potential was observed to be the main factor controlling the rate of production of hydrogen and oxygen and hence the operating point (Q-point) of the proposed system. The hydrogen production rate of a commercial system in static mode (650°C, 0.6 V) is taken as a reference. A solid oxide electrolyzer cell (SOEC) is used in the proposed (UV) system for hydrogen and oxygen production. To achieve the same amount of hydrogen as the reference system, with a minimum chamber operating temperature of 850°C in static mode, the corresponding required electrical potential is calculated as 0.3 V.
In practice, however, the hydrogen production rate was observed to be lower than that of the reference system at 850°C and 0.3 V. It was shown empirically that raising the electrical potential to 0.45 V increases the production rate to the level of the reference system. Therefore, 850°C and 0.45 V are assigned as the Q-point of the proposed system, which is actively stabilized via proportional-integral-derivative (PID) controllers that adjust the axial position of the lens arrays for both subsystems. The controllers maintain the chamber at 850°C (the minimum operating temperature) and 0.45 V, the Q-point, to realize the same hydrogen production rate as the reference system.
Keywords: hydrogen, oxygen, thermolysis, ultraviolet
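The role of the PID loop (here reduced to PI action) in holding the chamber at its Q-point can be sketched with a toy first-order thermal plant; the plant model, gains, and time constant below are illustrative assumptions, not measured parameters of the proposed system:

```python
def simulate_pid(setpoint=850.0, t_end=200.0, dt=0.1,
                 kp=2.0, ki=0.5, tau=5.0, ambient=25.0):
    """PI control of a first-order thermal plant
    dT/dt = (u - (T - ambient)) / tau, where the controller output u
    stands in for the concentrated-UV heat input set by the lens position."""
    temp = ambient
    integral = 0.0
    for _ in range(int(t_end / dt)):
        error = setpoint - temp
        integral += error * dt
        u = kp * error + ki * integral             # PI controller output
        temp += (u - (temp - ambient)) / tau * dt  # explicit Euler plant update
    return temp

final_temp = simulate_pid()
```

The integral term is what removes the steady-state offset: it winds up until the heat input exactly balances the losses at 850°C.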
Procedia PDF Downloads 133
532 Exploration of Cone Foam Breaker Behavior Using Computational Fluid Dynamics
Authors: G. St-Pierre-Lemieux, E. Askari Mahvelati, D. Groleau, P. Proulx
Abstract:
Mathematical modeling has become an important tool for the study of foam behavior. Computational Fluid Dynamics (CFD) can be used to investigate the behavior of foam around foam breakers to better understand the mechanisms leading to the 'destruction' of foam. The focus of this investigation is the simple cone foam breaker, whose performance has been identified in numerous studies. While the optimal pumping angle is known from the literature, the contributions of pressure drop, shearing, and centrifugal forces to foam syneresis are subject to speculation. This work provides a screening of those factors against changes in cone angle and foam rheology. The CFD simulation was made with the open-source OpenFOAM toolkit on a full three-dimensional model discretized using hexahedral cells. The geometry was generated using a Python script and then meshed with blockMesh. The OpenFOAM Volume of Fluid (VOF) method was used (interFoam) to obtain a detailed description of the interfacial forces, and the k-omega SST model was used to calculate the turbulence fields. The cone configuration allows the use of a rotating-wall boundary condition. In each case, a pair of immiscible fluids, foam/air or water/air, was used. The foam was modeled as a shear-thinning (Herschel-Bulkley) fluid. The results were compared to our measurements and to results found in the literature, first by computing the pumping rate of the cone, and second by the liquid break-up at the exit of the cone. A 3-D printed version of the cones submerged in foam (shaving cream or soap solution) and water, at speeds varying between 400 RPM and 1500 RPM, was also used to validate the modeling results by calculating the torque exerted on the shaft. While most of the literature focuses on cone behavior in Newtonian fluids, this work explores its behavior in a shear-thinning fluid, which better reflects the apparent rheology of foam.
These simulations shed new light on the cone's behavior within the foam and allow the computation of the shearing, pressure, and velocity fields of the fluid, enabling a better evaluation of the efficiency of the cones as foam breakers. This study contributes to clarifying, at least in part, the mechanisms behind foam breaker performance using modern CFD techniques.
Keywords: bioreactor, CFD, foam breaker, foam mitigation, OpenFOAM
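The shear-thinning Herschel-Bulkley rheology used for the foam can be sketched as an apparent-viscosity function with a crude low-shear regularization; the yield stress, consistency index, and flow index values here are placeholders, not the fitted foam parameters:

```python
def herschel_bulkley_viscosity(shear_rate, tau0=10.0, k=2.0, n=0.4,
                               gamma_min=1e-3):
    """Apparent viscosity eta = tau0/gamma + K * gamma^(n-1) for a
    Herschel-Bulkley fluid (yield stress tau0, consistency K, flow index n),
    with the shear rate floored at gamma_min to avoid the singularity as
    gamma -> 0 (a crude stand-in for a proper regularization)."""
    gamma = max(shear_rate, gamma_min)
    return tau0 / gamma + k * gamma ** (n - 1.0)

eta_low = herschel_bulkley_viscosity(0.1)     # slowly sheared bulk foam
eta_high = herschel_bulkley_viscosity(100.0)  # high-shear region near the cone
```

With n < 1 the apparent viscosity falls steeply with shear rate, which is why the near-wall region of the spinning cone behaves so differently from the quiescent foam, something a Newtonian model cannot capture.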
Procedia PDF Downloads 203
531 Harnessing Environmental DNA to Assess the Environmental Sustainability of Commercial Shellfish Aquaculture in the Pacific Northwest United States
Authors: James Kralj
Abstract:
Commercial shellfish aquaculture makes significant contributions to the economy and culture of the Pacific Northwest United States. The industry faces intense pressure to minimize environmental impacts as a result of federal policies such as the Magnuson-Stevens Fishery Conservation and Management Act and the Endangered Species Act, which demand the protection of essential fish habitat and declare several salmon species endangered. Consequently, numerous projects related to the protection and rehabilitation of eelgrass beds, a crucial ecosystem for countless fish species, have been proposed at both state and federal levels. Eelgrass beds and commercial shellfish farms occupy the same physical space, and understanding the effects of shellfish aquaculture on eelgrass ecosystems has therefore become a top ecological and economic priority of both government and industry. This study evaluates the organismal communities that eelgrass and oyster aquaculture habitats support. Water samples were collected from Willapa Bay, Washington; Tillamook Bay, Oregon; Humboldt Bay, California; and Samish Bay, Washington to compare species diversity in eelgrass beds, oyster aquaculture plots, and the boundary edges between these two habitats. Diversity was assessed using a novel technique: environmental DNA (eDNA). All organisms constantly shed small pieces of DNA into their surrounding environment through the loss of skin, hair, tissues, and waste. In the marine environment, this DNA becomes suspended in the water column, allowing it to be easily collected. Once extracted and sequenced, this eDNA can be used to paint a picture of all the organisms that live in a particular habitat, making it a powerful technology for environmental monitoring. Industry professionals and government officials should consider these findings to better inform future policies regulating eelgrass beds and oyster aquaculture. Furthermore, the information collected in this study may be used to improve the environmental sustainability of commercial shellfish aquaculture while simultaneously enhancing its growth and profitability in the face of ever-changing political and ecological landscapes.
Keywords: aquaculture, environmental DNA, shellfish, sustainability
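Once the eDNA reads are assigned to taxa, comparisons between habitats typically reduce to diversity indices computed from per-taxon counts. A minimal Shannon-index sketch, with made-up read counts for the three habitat types (the taxa and numbers are purely illustrative, not this study's data), might look like:

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxon proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical eDNA read counts per detected taxon in each habitat type.
eelgrass = [120, 80, 60, 40, 20]
oyster_plot = [200, 50, 30, 10]
boundary = [100, 90, 70, 50, 30, 20]

diversity = {name: shannon_index(counts) for name, counts in
             [("eelgrass", eelgrass), ("oyster", oyster_plot),
              ("boundary", boundary)]}
```

An evenly spread community of S taxa attains the maximum value ln(S), so higher H' signals both richness and evenness in the sampled habitat.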
Procedia PDF Downloads 246
530 Antecedents and Impacts of Human Capital Flight in Sub-Saharan Africa with Specific Reference to the Higher Education Sector: A Conceptual Model
Authors: Zelalem B. Gurmessa, Ignatius W. Ferreira, Henry F. Wissink
Abstract:
The aim of this paper is to critically examine the factors contributing to academic brain drain in Sub-Saharan Africa, with specific reference to the higher education sector. Africa in general, and Sub-Saharan African (SSA) countries in particular, are experiencing an exodus of highly trained, qualified, and competent human resources to other developing and developed countries, thereby threatening the overall development of the region and impeding both public and private service delivery in the nation states. The region is currently in a dire situation in terms of health care services, education, science, and technology. The contribution of SSA countries to science, technology, and innovation is relatively minimal owing to the migration of skilled professionals driven by both push and pull factors. The phenomenon calls for international, trans-boundary, regional, national, and institutional interventions to curb the exodus. Based on secondary data and a review of the literature, the article conceptualizes the antecedents and impacts of human capital flight, or brain drain, in SSA countries from a higher education perspective. To this end, the article explores the magnitude, causes, and impacts of brain drain in the region. Despite the lack of consistent data on the magnitude of academic brain drain in the region, a critical analysis of the existing sources shows that pay disparities between developing and developed countries, the lack of enabling working conditions in source countries, security fears due to political turmoil or unrest, and the greater opportunities for development in receiving countries are major factors contributing to academic brain drain in the region. This hampers the socio-economic, technological, and political development of the region. The paper also recommends that further research be undertaken on the magnitude, causes, characteristics, and impact of brain drain on the sustainability and competitiveness of SSA higher education institutions.
Keywords: brain drain, higher education, sub-Saharan Africa, sustainable development
Procedia PDF Downloads 258
529 A Rapid Prototyping Tool for Suspended Biofilm Growth Media
Authors: Erifyli Tsagkari, Stephanie Connelly, Zhaowei Liu, Andrew McBride, William Sloan
Abstract:
Biofilms play an essential role in treating water in biofiltration systems. Biofilm morphology and function are inextricably linked to the hydrodynamics of flow through a filter, and yet engineers rarely explicitly engineer this interaction. We develop a system that links computer simulation and 3-D printing to rapidly prototype filter media and optimize biofilm function, under the hypothesis that biofilm function is intimately linked to the flow passing through the filter. A computational model is developed that numerically solves the incompressible, time-dependent Navier-Stokes equations coupled to a model for biofilm growth and function. The model is embedded in an optimization algorithm that allows the model domain to adapt until criteria on biofilm functioning are met. This is applied to optimize the shape of filter media in a simple flow channel to promote biofilm formation. The computer code links directly to a 3-D printer, which allows us to prototype the design rapidly. Its validity is tested in flow visualization experiments and by microscopy. As proof of concept, the code was constrained to explore a small range of potential filter media, where the medium acts as an obstacle in the flow that sheds a von Karman vortex street, found to enhance the deposition of bacteria on surfaces downstream. The flow visualization and microscopy in the 3-D printed realization of the flow channel validated the predictions of the model and hence its potential as a design tool. Overall, it is shown that the combination of our computational model and 3-D printing can be used effectively as a design tool to prototype filter media that optimize biofilm formation.
Keywords: biofilm, biofilter, computational model, von Karman vortices, 3-D printing
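The adapt-until-criteria-met loop can be caricatured as a one-parameter search: for each candidate obstacle diameter, estimate the vortex shedding frequency from the Strouhal relation f = St·U/D and score a mock deposition objective, keeping the best candidate. The objective function and its preferred frequency are invented for illustration; the real model solves the coupled Navier-Stokes and biofilm equations at each step:

```python
def shedding_frequency(velocity, diameter, strouhal=0.2):
    """Von Karman shedding frequency from the Strouhal relation f = St*U/D
    (St ~ 0.2 for a bluff body over a wide Reynolds-number range)."""
    return strouhal * velocity / diameter

def deposition_score(freq, f_opt=5.0):
    """Mock objective: bacterial deposition assumed to peak at a preferred
    shedding frequency f_opt (purely illustrative)."""
    return 1.0 / (1.0 + (freq - f_opt) ** 2)

def optimize_diameter(velocity=0.5, d_min=0.005, d_max=0.05, steps=200):
    """Grid search over obstacle diameter, mimicking the adaptive design loop."""
    best_d, best_score = d_min, -1.0
    for i in range(steps + 1):
        d = d_min + (d_max - d_min) * i / steps
        score = deposition_score(shedding_frequency(velocity, d))
        if score > best_score:
            best_d, best_score = d, score
    return best_d, best_score

best_d, best_score = optimize_diameter()
```

In this toy setting the loop recovers the diameter whose shedding frequency matches the assumed deposition optimum (D = St·U/f_opt = 0.02 m), which is the same closed-loop design logic the paper applies with the full CFD model in place of the mock score.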
Procedia PDF Downloads 142
528 Modelling of a Biomechanical Vertebral System for Seat Ejection in Aircraft Using a Lumped Mass Approach
Authors: R. Unnikrishnan, K. Shankar
Abstract:
In high-speed fighter aircraft, seat ejection is designed mainly for the safety of the pilot in case of an emergency. Strong windblast due to the high flight velocity is one main difficulty in clearing the tail of the aircraft, and the excessive G-forces generated immobilize the pilot during escape. In most cases, seats are ejected from the aircraft by explosives or by rocket motors attached to the bottom of the seat. Ejection forces act primarily in the vertical direction, with the objective of attaining the maximum possible velocity in a specified period of time. The safe ejection parameters are studied to estimate the critical time of ejection for various geometries and flight velocities. An equivalent analytical two-dimensional biomechanical model of the human spine, consisting of vertebrae and intervertebral discs, has been built with a lumped mass approach. The 24 vertebrae of the cervical, thoracic, and lumbar regions, together with the head mass and the pelvis, are modelled as 26 rigid structures, and the intervertebral discs as 25 flexible joint structures. The rigid structures are modelled as mass elements and the flexible joints as spring and damper elements. Motion is restricted to the mid-sagittal plane, forming a 26-degree-of-freedom system. The equations of motion are derived for the translational movement of the spinal column. An ejection force with a linearly increasing acceleration profile is applied as vertical base excitation to the pelvis, and the dynamic vibrational response of each vertebra is estimated in the time domain.
Keywords: biomechanical model, lumped mass, seat ejection, vibrational response
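A scaled-down version of the lumped-parameter chain (three masses instead of 26, with placeholder mass, stiffness, and damping values) can be written directly from the equations of motion; the semi-implicit Euler march below illustrates the time-domain response to a linearly increasing base acceleration:

```python
def simulate_chain(n=3, m=1.0, k=100.0, c=1.0, dt=0.001, steps=1000, jerk=50.0):
    """Mass-spring-damper chain excited through its base (the 'pelvis').
    The base acceleration ramps linearly in time: a_base(t) = jerk * t.
    Returns the final base displacement and the mass displacements."""
    x = [0.0] * n          # absolute displacements of the lumped masses
    v = [0.0] * n
    xb, vb = 0.0, 0.0      # prescribed base displacement and velocity
    t = 0.0
    for _ in range(steps):
        below = [xb] + x[:-1]
        vbelow = [vb] + v[:-1]
        f = [0.0] * n
        for i in range(n):
            # spring/damper connecting to the element below
            f[i] += k * (below[i] - x[i]) + c * (vbelow[i] - v[i])
            # spring/damper connecting to the element above (none at the top)
            if i + 1 < n:
                f[i] += k * (x[i + 1] - x[i]) + c * (v[i + 1] - v[i])
        for i in range(n):               # semi-implicit Euler update
            v[i] += f[i] / m * dt
            x[i] += v[i] * dt
        vb += jerk * t * dt              # integrate the prescribed base motion
        xb += vb * dt
        t += dt
    return xb, x

xb, x = simulate_chain()
```

Each mass lags the base through its connecting spring and damper, so the transient forces in each "disc" element fall directly out of the relative displacements, which is the quantity of interest for injury criteria.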
Procedia PDF Downloads 231
527 Optimal Power Distribution and Power Trading Control among Loads in a Smart Grid Operated Industry
Authors: Vivek Upadhayay, Siddharth Deshmukh
Abstract:
In recent years, the utilization of renewable energy sources has increased markedly because of growing global warming concerns. Organizations these days often operate small-scale microgrids or smart grids. Power optimization and optimal load tripping are possible in a smart-grid-based industry. In any plant or industry, loads can be divided into different categories based on their importance to the plant and their power requirement pattern over the working days. Dividing loads into such categories and providing a different power management algorithm to each category can reduce the power cost and help balance the stability and reliability of the power supply. An objective function is defined, subject to a variable that we seek to minimize. Constraint equations are formed from the difference between the power usage pattern of the present day and the same day of the previous week. With the objectives of minimal load tripping and optimal power distribution, the proposed formulation is a multi-objective optimization problem. Through normalization of each objective function, the multi-objective optimization is transformed into a single-objective optimization. As a result, we obtain the optimized values of the power required by each load for the present day from the past values of the required power for the same day of the previous week; this amounts to demand-response scheduling of power. These minimized values are then distributed to each load through an algorithm that optimizes the power distribution in greater depth. If the stored power exceeds the power requirement, a profit can be made by selling the excess power to the main grid.
Keywords: power flow optimization, power trading enhancement, smart grid, multi-objective optimization
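The normalization step that collapses the multi-objective problem into a single objective can be sketched as a weighted sum of min-max-normalized objectives. The candidate schedules, the two objectives (load tripping count and power cost), and the equal weights are all illustrative stand-ins:

```python
def normalize(values):
    """Min-max normalize a list of objective values onto [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def scalarize(objectives, weights):
    """Collapse several objective lists into one combined score per candidate
    by normalizing each objective and taking a weighted sum."""
    normed = [normalize(obj) for obj in objectives]
    n = len(objectives[0])
    return [sum(w * col[i] for w, col in zip(weights, normed)) for i in range(n)]

# Hypothetical candidate schedules scored on (loads tripped, power cost).
tripping = [4, 1, 3, 0]
cost = [100.0, 140.0, 110.0, 180.0]

scores = scalarize([tripping, cost], weights=[0.5, 0.5])
best = scores.index(min(scores))  # candidate minimizing the combined objective
```

Note that normalization matters: without it the cost objective, being numerically much larger, would dominate the tripping objective regardless of the chosen weights.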
Procedia PDF Downloads 525
526 A View from Inside: Case Study of Social Economy Actors in Croatia
Authors: Drazen Simlesa, Jelena Pudjak, Anita Tonkovic Busljeta
Abstract:
Regarding the social economy (SE), Croatia is generally seen as an ex-communist country with a good tradition but poor performance in the second half of the 20th century, owing to political control of the business sector, which in the transition period (1990-1999) turned into neglect at the public administration (policy) level. Today, the social economy in Croatia is trying to catch up with other EU states on all important levels of the SE sector: the legislative and institutional framework, financial infrastructure, education and capacity building, and visibility. All four are integral parts of the Strategy for the Development of Social Entrepreneurship in the Republic of Croatia for the period 2015-2020. Within the iPRESENT project, funded by the Croatian Science Foundation, we have mapped social economy actors, and after many years there is a clear and up-to-date social economy database. At ICSE 2016 we will present the main outcomes and results of this process. In the second year of the project we conducted field research across Croatia, carrying out 19 focus groups with the most influential, innovative, and inspirational social economy actors. The interview questions were divided into four themes: laws on the social economy and public policies; the definition/ideology of the social economy and cooperation on the SE scene; the level of democracy and working conditions; and motivation and the existence of intrinsic values. The data gathered through the focus group interviews were analysed with qualitative data analysis software (Atlas.ti). The major findings to be presented are as follows. Social economy actors are mostly dissatisfied with the legislative and institutional framework in Croatia and consider it unsupportive and confusing. Social economy actors consider the SE to be in line with the WISE model and a tool for community development.
The SE actors that are more active express satisfaction with cooperation among SE actors and other partners and stakeholders, while those in more spatially isolated conditions express a need for more cooperation and networking. Social economy actors praised the democratic atmosphere in their organisations and the fair working conditions. Finally, they expressed high motivation to continue working in the social economy and are dedicated to the concept, including even those who at the beginning were interested only in getting a quick job. This means that we can detect intrinsic values among employees of social economy organisations. This research enabled us to describe, for the first time in Croatia, the view from the inside: the attitudes and opinions of employees of social economy organisations.
Keywords: employees, focus groups, mapping, social economy
Procedia PDF Downloads 253
525 Multi-Objective Optimization of the Thermal-Hydraulic Behavior of a Sodium Fast Reactor with a Gas Power Conversion System and a Loss of Off-Site Power Simulation
Authors: Avent Grange, Frederic Bertrand, Jean-Baptiste Droin, Amandine Marrel, Jean-Henry Ferrasse, Olivier Boutin
Abstract:
CEA and its industrial partners are designing a gas Power Conversion System (PCS) based on a Brayton cycle for the ASTRID sodium-cooled fast reactor. Investigations of the control and regulation requirements for operating this PCS during operating, incidental, and accidental transients are necessary to adapt core heat removal. To this end, we developed a methodology to optimize the thermal-hydraulic behavior of the reactor during normal operations, incidents, and accidents. The methodology consists of a multi-objective optimization for a specific sequence, whose aim is to increase component lifetime by simultaneously reducing several thermal stresses and to bring the reactor into a stable state, while complying with safety and operating constraints. Operating, incidental, and accidental sequences use specific regulations to control the thermal-hydraulic behavior of the reactor, each defined by a setpoint, a controller, and an actuator. In the multi-objective problem, the parameters used in the optimization are the setpoints and the settings of the controllers associated with the regulations included in the sequence. In this way, the methodology allows designers to define an optimized, sequence-specific control strategy for the plant and hence to adapt PCS piloting at its best. The multi-objective optimization is performed by evolutionary algorithms coupled to surrogate models built on variables computed by the thermal-hydraulic system code CATHARE2. The methodology is applied to a loss of off-site power sequence. Three variables are controlled: the sodium outlet temperature of the sodium-gas heat exchanger, the turbomachine rotational speed, and the water flow through the heat sink. These regulations are chosen in order to minimize the thermal stresses on the gas-gas heat exchanger, on the sodium-gas heat exchanger, and on the vessel. The main results of this work are optimal setpoints for the three regulations.
Moreover, the Proportional-Integral-Derivative (PID) controller settings are considered, and efficient actuators for the controls are chosen on the basis of sensitivity analysis results. Finally, the optimized regulation system and the reactor control procedure provided by the optimization process are verified through a direct CATHARE2 calculation.
Keywords: gas power conversion system, loss of off-site power, multi-objective optimization, regulation, sodium fast reactor, surrogate model
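The outer loop, an evolutionary search over setpoints evaluated on a cheap surrogate instead of a full CATHARE2 run, can be caricatured with a (1+1) evolution strategy on a mock two-setpoint stress surrogate. The quadratic surrogate, its optimum, and the setpoint scales are invented for illustration, not ASTRID design values:

```python
import random

def surrogate_stress(setpoints):
    """Mock surrogate: combined thermal stress as a function of a sodium
    outlet temperature setpoint and a water-flow setpoint, with an assumed
    minimum at (530.0, 0.8) (purely illustrative numbers)."""
    t_na, flow = setpoints
    return (t_na - 530.0) ** 2 + 10.0 * (flow - 0.8) ** 2

def one_plus_one_es(f, start, sigmas, iters=2000, seed=0):
    """(1+1) evolution strategy: mutate the parent with per-coordinate
    Gaussian noise, keep the better of parent/child, and slowly shrink
    the mutation steps."""
    rng = random.Random(seed)
    best = list(start)
    best_f = f(best)
    for _ in range(iters):
        child = [x + rng.gauss(0.0, s) for x, s in zip(best, sigmas)]
        child_f = f(child)
        if child_f < best_f:
            best, best_f = child, child_f
        sigmas = [s * 0.999 for s in sigmas]
    return best, best_f

best, best_f = one_plus_one_es(surrogate_stress, [560.0, 0.5], sigmas=[5.0, 0.05])
```

Because each evaluation hits the surrogate rather than the system code, thousands of candidate setpoint combinations can be screened at negligible cost before the few retained candidates are re-verified with a direct calculation.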
Procedia PDF Downloads 308524 Study on Adding Story and Seismic Strengthening of Old Masonry Buildings
Authors: Youlu Huang, Huanjun Jiang
Abstract:
A large number of old masonry buildings built in the last century still remain in cities, raising problems of poor safety, obsolescence, and non-habitability. In recent years, many old buildings have been reconstructed by renovating façades, strengthening, and adding floors. However, most projects only provide a solution for a single problem, and it is difficult to comprehensively solve the problems of poor safety and lack of building functions. Therefore, a comprehensive functional renovation program was put forward: adding a reinforced concrete frame story at the bottom by integrally lifting the building and then strengthening it. Based on field measurements and the YJK calculation software, the seismic performance of an actual three-story masonry structure in Shanghai was assessed. The results show that the material strength of the masonry is low, and the bearing capacity of some masonry walls does not meet the code requirements. An elastoplastic time history analysis of the structure was carried out using the SAP2000 software. The results show that under the 7-degree rare earthquake, the structure reaches the 'serious damage' performance level. Based on the code requirements for the stiffness ratio of the bottom frame (the lateral stiffness ratio of the transition masonry story to the frame story), the bottom frame story was designed. The integral lifting process of the masonry building is introduced based on many engineering examples. Two reinforcement methods for the bottom frame structure were proposed: a steel-reinforced mesh mortar surface layer (SRMM) and base isolators. Time history analyses of the two kinds of structures under the frequent, fortification, and rare earthquakes were conducted with SAP2000.
For the bottom frame structure, the results show that the seismic response of the masonry floors is significantly reduced by both reinforcement methods compared to the unreinforced masonry structure. Previous earthquake disasters indicated that the bottom frame is vulnerable to serious damage under a strong earthquake. The analysis results show that under the rare earthquake, the inter-story displacement angle of the bottom frame floor meets the 1/100 limit value of the seismic code. The inter-story drift of the masonry floors of the base-isolated structure under different levels of earthquakes is similar to that of the structure with SRMM, while the base-isolated scheme better protects the bottom frame. Both reinforcement methods can significantly improve the seismic performance of the bottom frame structure.Keywords: old buildings, adding story, seismic strengthening, seismic performance
Procedia PDF Downloads 121523 Historical Development of Bagh-e Dasht in Herat, Afghanistan: A Comprehensive Field Survey of Physical and Social Aspects
Authors: Khojesta Kawish, Tetsuya Ando, Sayed Abdul Basir Samimi
Abstract:
Bagh-e Dasht area is situated in the northern part of Herat, an old city in western Afghanistan located on the Silk Road, which received a strong influence from Persian culture. Initially, the Bagh-e Dasht area was developed with gardens and palaces near the Joy-e Injil canal during the Timurid Empire in the 15th century. It is assumed that Bagh-e Dasht became a settlement in the 16th century during the Safavid Empire. The oldest area is the southern part around the canal bank, which is characterized by Dalans, sun-dried brick arcades above which houses are often constructed. Traditional houses in this area are built with domical vault roofs constructed with sun-dried bricks. Bagh-e Dasht is one of the best-preserved settlements of traditional houses in Herat. This study examines the transformation of the Bagh-e Dasht area with a focus on Dalans, where traditional houses with domical vault roofs have been well preserved until today. The aim of the study is to examine the extent of physical changes to the area as well as changes to houses and the community. This research paper contains original results that have not previously been published in the architectural history literature. The roof types of houses in the area were investigated by examining high-resolution satellite images. The boundary of each building and space was determined by both a field survey and aerial photographs of the study area. A comprehensive field survey was then conducted to examine each space and building in the area. In addition, a questionnaire was distributed to the residents of the Dalan houses, and interviews were conducted with the Wakil (chief) of the area, a local historian, residents, and traditional builders. The study finds that the oldest part of the Bagh-e Dasht area, the south, contains both Dalans and domical vault roof houses. The next oldest part, the north, has only domical vault roof houses. The rest of the area has only houses with modernized flat roofs.
This observation provides an insight into the process of historical development in the Bagh-e Dasht area.Keywords: Afghanistan, Bagh-e Dasht, Dalan, domical vault, Herat, over path house, traditional house
Procedia PDF Downloads 133522 Living at Density: Resident Perceptions in Auckland, New Zealand
Authors: Errol J. Haarhoff
Abstract:
Housing in New Zealand, particularly in Auckland, is dominated by low-density suburbs. Over the past 20 years, housing intensification policies aimed at curbing outward low-density sprawl and concentrating development within an urban boundary have been implemented. This requires greater deployment of attached housing typologies such as apartments, duplexes and terrace housing. There has been a strong market response and uptake of higher density development, with the share of building approvals received by the Auckland Council for attached housing units increasing from around 15 percent in 2012/13 to 54 percent in 2017/18. A key question about intensification and strong market uptake in a city where lower density has been the norm is whether higher density neighborhoods will deliver the necessary housing satisfaction. This paper reports the findings of a questionnaire survey and focus group discussions probing resident perceptions of living at higher density in relation to their dwellings, the neighborhood and their sense of community. The findings reveal strong overall housing satisfaction, including key aspects such as privacy, noise and living in close proximity to neighbors. However, when residents are differentiated in terms of length of tenure, age or whether they are bringing up children, greater variation in satisfaction is detected. For example, residents in the 65-plus age cohort express much higher levels of satisfaction than the 18-44 year cohorts, who are more likely to be bringing up children. This suggests greater design sensitivity is needed to better accommodate the range of household types. Those who have lived in the area longer express greater satisfaction than those with shorter tenure, indicating time for adaptation to living at higher density. The findings strongly underpin the instrumental role that public amenities play in overall housing satisfaction and in the emergence of a strong sense of community.
This underscores the necessity for appropriate investment in the public amenities often lacking in market-led higher density housing development. We conclude with an evaluation of the PPP model and its role in delivering housing satisfaction. The findings should be of interest to cities, housing developers and built environment professionals pursuing housing policies promoting intensification and higher density.Keywords: medium density, housing satisfaction, neighborhoods, sense of community
Procedia PDF Downloads 137521 Probabilistic Analysis of Bearing Capacity of Isolated Footing using Monte Carlo Simulation
Authors: Sameer Jung Karki, Gokhan Saygili
Abstract:
The allowable bearing capacity of foundation systems is determined by applying a factor of safety to the ultimate bearing capacity. Conventional ultimate bearing capacity calculation routines are based on deterministic input parameters, where the nonuniformity and inhomogeneity of soil and site properties are not accounted for; probability calculus and statistical analysis are therefore not directly applied in conventional foundation engineering practice. Instead, it is assumed that the factor of safety, typically as high as 3.0, incorporates the uncertainty of the input parameters. This factor of safety is estimated on subjective judgement rather than objective facts and is therefore an ambiguous term. Hence, a probabilistic analysis of the bearing capacity of an isolated footing on a clayey soil was carried out using the Monte Carlo simulation method, and the simulated model was compared with the traditional deterministic model. The bearing capacity of the soil was found to be higher for the simulated model than for the deterministic model, which was verified by a sensitivity analysis: as the number of simulations increased, there was a significant percentage increase in bearing capacity compared with the deterministic value. The bearing capacity values obtained by simulation were found to follow a normal distribution. Using the traditional factor of safety of 3, the allowable bearing capacity had a lower probability (0.03717) of occurring in the field than the higher probability (0.15866) obtained using the simulation-derived factor of safety of 1.5. This means the traditional factor of safety gives a bearing capacity that is less likely to occur, or be available, in the field. This shows the subjective nature of the factor of safety, and hence a probabilistic method is suggested to address the variability of the input parameters in bearing capacity equations.Keywords: bearing capacity, factor of safety, isolated footing, Monte Carlo simulation
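The Monte Carlo approach described above can be sketched for an undrained clay (phi = 0), where the ultimate bearing capacity q_ult = c_u·N_c + gamma·D_f is sampled over a distribution of undrained shear strength. The soil statistics below (mean c_u of 50 kPa with a 20% coefficient of variation, unit weight 18 kN/m3, depth 1.5 m) are illustrative assumptions, not values from the study:

```python
import random
import statistics

N_C = 5.14  # bearing capacity factor for undrained clay (phi = 0)

def ultimate_bearing_capacity(cu, gamma, depth):
    # q_ult = cu * Nc + gamma * Df (shape/depth corrections omitted)
    return cu * N_C + gamma * depth

random.seed(42)
# Hypothetical soil statistics: undrained shear strength with mean 50 kPa
# and 20 % coefficient of variation (standard deviation 10 kPa).
samples = [ultimate_bearing_capacity(random.gauss(50.0, 10.0), 18.0, 1.5)
           for _ in range(10_000)]

mean_q = statistics.mean(samples)      # close to the deterministic value
std_q = statistics.stdev(samples)      # spread induced by the cu variability
deterministic_q = ultimate_bearing_capacity(50.0, 18.0, 1.5)  # 284 kPa

# Probability that the sampled capacity falls below the allowable capacity
# implied by a deterministic factor of safety of 3.
fs = 3.0
p_below = sum(q < deterministic_q / fs for q in samples) / len(samples)
```

The histogram of `samples` is approximately normal (the capacity is linear in the normally distributed c_u), which mirrors the normal distribution reported for the simulated bearing capacities.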
Procedia PDF Downloads 187520 Internet Protocol Television: A Research Study of Undergraduate Students Analyze the Effects
Authors: Sabri Serkan Gulluoglu
Abstract:
The study is aimed at examining the effects of internet marketing with IPTV on consumers. Internet marketing with IPTV is emerging as an integral part of business strategies in today's technologically advanced world, and business activities all over the world are influenced by the emergence of this modern marketing tool. As the population of Internet and online users increases, new research issues have arisen concerning the demographics and psychographics of the online user and the opportunities for a product or service. In recent years, we have seen a tendency of various services converging to ubiquitous Internet Protocol based networks. Besides traditional Internet applications such as web browsing, email, and file transfer, new applications have been developed to replace old communication networks, and IPTV is one of these solutions. In the future, we expect a single network, the IP network, to provide services that are carried by different networks today. To identify important effects of a video-based technology market website on the internet, we administered a questionnaire to university students. Recent research shows that in Turkey, people aged 20 to 24 use the internet when buying electronic devices such as cell phones and computers. The questionnaire contains ten categorized questions to evaluate the effects of IPTV on shopping. Thirty students were selected to fill in the questionnaire after watching an IPTV channel video for 10 minutes. The sample IPTV channel was 'buy.com', which looks like an e-commerce site with an integrated IPTV channel. The questionnaire for the survey was constructed using the Likert scale, a bipolar scaling method used to measure either positive or negative response to a statement (Likert), which is commonly used in surveys.
Following the Likert scale, respondents are asked to indicate their degree of agreement with a statement, or any kind of subjective or objective evaluation of the statement; traditionally, a five-point scale is used under this methodology. For this study, the five-point scale was also used, and the respondents were asked to express their opinions about each statement by picking one of the five options: 'Strongly disagree', 'Disagree', 'Neither agree nor disagree', 'Agree' and 'Strongly agree'. These options were rated from 1 to 5, respectively. On the basis of the data gathered from the questionnaire, results are drawn in figures and graphical representations that demonstrate the outcomes of the research clearly.Keywords: IPTV, internet marketing, online, e-commerce, video based technology
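Scoring such a five-point Likert questionnaire reduces to mapping each response to its 1-5 rating and aggregating. A minimal sketch, with hypothetical answers from five respondents to one statement:

```python
# Five-point Likert scale used in the survey, mapped to scores 1-5.
LIKERT = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

# Hypothetical answers from five respondents to one statement.
responses = ["Agree", "Strongly agree", "Neither agree nor disagree",
             "Agree", "Disagree"]

scores = [LIKERT[r] for r in responses]
mean_score = sum(scores) / len(scores)  # 3.6 for this sample
```

Repeating this per statement across all thirty respondents yields the per-question means that the figures and graphs summarize.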
Procedia PDF Downloads 240519 Affective Transparency in Compound Word Processing
Authors: Jordan Gallant
Abstract:
In the compound word processing literature, much attention has been paid to the relationship between a compound's denotational meaning and that of its morphological whole-word constituents, referred to as 'semantic transparency'. However, the parallel relationship between a compound's connotation and that of its constituents has not been addressed at all. For instance, while a compound like 'painkiller' might be semantically transparent, it is not 'affectively transparent': both constituents have primarily negative connotations, while the whole compound has a positive one. This paper investigates the role of affective transparency in compound processing using two methodologies commonly employed in this field: a lexical decision task and a typing task. The critical stimuli used were 112 English bi-constituent compounds that differed in terms of the affective transparency of their constituents. Of these, 36 stimuli contained constituents with connotations similar to the compound (e.g., 'dreamland'), 36 contained constituents with more positive connotations (e.g. 'bedpan'), and 36 contained constituents with more negative connotations (e.g. 'painkiller'). The connotations of whole-word constituents and compounds were operationalized via valence ratings taken from an off-line ratings database. In Experiment 1, compound stimuli and matched non-word controls were presented visually to participants, who were asked to indicate whether each was a real word of English. Response times and accuracy were recorded. In Experiment 2, participants typed compound stimuli presented to them visually. Individual keystroke response times and typing accuracy were recorded. The results of both experiments provided positive evidence that compound processing is influenced by affective transparency.
In Experiment 1, compounds in which both constituents had more negative connotations than the compound itself were responded to significantly more slowly than compounds in which the constituents had similar or more positive connotations. Typed responses from Experiment 2 showed that inter-keystroke intervals at the morphological constituent boundary were significantly longer when the connotation of the head constituent was either more positive or more negative than that of the compound. The interpretation of this finding is discussed in the context of previous compound typing research. Taken together, these findings suggest that affective transparency plays a role in the recognition, storage, and production of English compound words. This study provides a promising first step in a new direction for research on compound words.Keywords: compound processing, semantic transparency, typed production, valence
Procedia PDF Downloads 127518 Fluidised Bed Gasification of Multiple Agricultural Biomass-Derived Briquettes
Authors: Rukayya Ibrahim Muazu, Aiduan Li Borrion, Julia A. Stegemann
Abstract:
Biomass briquette gasification is regarded as a promising route for efficient briquette use in the generation of energy, fuels and other useful chemicals; however, previous research has focused on briquette gasification in fixed bed gasifiers such as updraft and downdraft gasifiers. Fluidised bed gasifiers have the potential to be effectively sized for medium or large scale operation. This study investigated the use of fuel briquettes produced from blends of rice husk and corn cob biomass residues in a bubbling fluidised bed gasifier. The study adopted a combination of numerical equations and the Aspen Plus simulation software to predict the product gas (syngas) composition based on briquette density and biomass composition (blend ratio of rice husks to corn cobs). The Aspen Plus model was based on an experimentally validated model from the literature. The results, based on a briquette size of 32 mm diameter and a relaxed density range of 500 to 650 kg/m3, indicated that the fluidisation air required in the gasifier increased with briquette density, and the fluidisation air was shown to be the controlling factor compared with the actual air required for gasification of the biomass briquettes. The mass flow rate of CO2 in the predicted syngas composition increased with air flow rate, while CO production decreased and H2 was almost constant. The H2/CO ratio for the various blends of rice husks and corn cobs did not change significantly at the design process air, but a significant difference of 1.0 in the H2/CO ratio was observed at higher air flow rates between the 10/90 and 90/10 blend ratios of rice husks to corn cobs. This implies the need for further understanding of the effects of biomass variability and hydrodynamic parameters on syngas composition in biomass briquette gasification.Keywords: aspen plus, briquettes, fluidised bed, gasification, syngas
Procedia PDF Downloads 457517 Numerical Modeling of Air Shock Wave Generated by Explosive Detonation and Dynamic Response of Structures
Authors: Michał Lidner, Zbigniew Szcześniak
Abstract:
The ability to estimate blast load overpressure properly plays an important role in the safe design of buildings. The issue of blast loading on structural elements has been explored for many years; however, in many literature reports the shock wave overpressure is estimated with a simplified triangular or exponential distribution in time. This introduces errors when comparing the real and numerical reactions of elements. Nonetheless, it is possible to approximate the real blast load overpressure as a function of time more closely. The paper presents a method of numerical analysis of the phenomenon of air shock wave propagation. It uses the Finite Volume Method and takes into account energy losses due to heat transfer, relative to the adiabatic process assumption. A system of three equations (conservation of mass, momentum and energy) describes the flow of a volume of gaseous medium in an area remote from building compartments that could inhibit the movement of gas. For validation, three cases of shock wave flow were analyzed: a free field explosion, an explosion inside a rigid steel tube (the 1D case) and an explosion inside a rigid cube (the 3D case). The results of the numerical analysis were compared with literature reports; values of impulse, pressure, and its duration were studied. Overall, good convergence of the numerical results with experiments was achieved, and the most important parameters were well reflected. Additionally, analyses of the dynamic response of one of the considered structural elements were made.Keywords: adiabatic process, air shock wave, explosive, finite volume method
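The finite volume treatment of the three conservation laws can be illustrated on the classical 1D Sod shock tube. The sketch below is a simplified stand-in for the actual solver: it uses a first-order Rusanov flux rather than the paper's predictor-corrector scheme, and it omits the heat-loss term (i.e., it is purely adiabatic):

```python
# First-order finite volume solver for the 1D Euler equations (Sod shock
# tube): conservation of mass, momentum and energy with a Rusanov flux.
GAMMA = 1.4  # ratio of specific heats for air

def flux(U):
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return [mom, mom * u + p, (E + p) * u]

def max_speed(U):
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return abs(u) + (GAMMA * p / rho) ** 0.5  # |u| + sound speed

def rusanov(UL, UR):
    # Central flux plus dissipation scaled by the fastest local wave speed.
    FL, FR = flux(UL), flux(UR)
    s = max(max_speed(UL), max_speed(UR))
    return [0.5 * (fl + fr) - 0.5 * s * (ur - ul)
            for fl, fr, ul, ur in zip(FL, FR, UL, UR)]

def solve_sod(n=200, t_end=0.2):
    dx = 1.0 / n
    # Left state: rho = 1, p = 1; right state: rho = 0.125, p = 0.1 (at rest).
    cells = [[1.0, 0.0, 1.0 / (GAMMA - 1.0)] if i < n // 2
             else [0.125, 0.0, 0.1 / (GAMMA - 1.0)] for i in range(n)]
    t = 0.0
    while t < t_end:
        dt = min(0.4 * dx / max(max_speed(U) for U in cells), t_end - t)
        fluxes = [rusanov(cells[i], cells[i + 1]) for i in range(n - 1)]
        new = [row[:] for row in cells]
        for i in range(1, n - 1):  # boundary cells held fixed
            for k in range(3):
                new[i][k] -= dt / dx * (fluxes[i][k] - fluxes[i - 1][k])
        cells = new
        t += dt
    return cells
```

Running `solve_sod()` reproduces the expected wave pattern: a rarefaction moving left, and a contact discontinuity and shock moving right, with overpressure and impulse recoverable from the cell pressures over time.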
Procedia PDF Downloads 192516 When the Lights Go Down in the Delivery Room: Lessons From a Ransomware Attack
Authors: Rinat Gabbay-Benziv, Merav Ben-Natan, Ariel Roguin, Benyamine Abbou, Anna Ofir, Adi Klein, Dikla Dahan-Shriki, Mordechai Hallak, Boris Kessel, Mickey Dudkiewicz
Abstract:
Introduction: Over recent decades, technology has become integral to healthcare, with electronic health records and advanced medical equipment now standard. However, this reliance has made healthcare systems increasingly vulnerable to ransomware attacks. On October 13, 2021, Hillel Yaffe Medical Center experienced a severe ransomware attack that disrupted all IT systems, including electronic health records, laboratory services, and staff communications. The attack, carried out by the group DeepBlueMagic, utilized advanced encryption to lock the hospital's systems and demanded a ransom. This incident caused significant operational and patient care challenges, particularly impacting the obstetrics department. Objective: The objective is to describe the challenges facing the obstetric division following a cyberattack and discuss ways of preparing for and overcoming another one. Methods: A retrospective descriptive study was conducted in a mid-sized medical center. Division activities, including the number of deliveries, cesarean sections, emergency room visits, admissions, maternal-fetal medicine department occupancy, and ambulatory encounters, from 2 weeks before the attack to 8 weeks following it (a total of 11 weeks), were compared with the retrospective period in 2019 (pre-COVID-19). In addition, we present the challenges and adaptation measures taken at the division and hospital levels leading up to the resumption of full division activity. Results: On the day of the cyberattack, critical decisions were made. The media announced the event, calling on patients not to come to our hospital. Also, all elective activities other than cesarean deliveries were stopped. The number of deliveries, admissions, and both emergency room and ambulatory clinic visits decreased by 5%–10% overall for 11 weeks, reflecting the decrease in division activity. 
Nevertheless, at all stations, sufficient activity and adaptation measures ensured that patient safety, decision-making, and patient workflow were maintained. Conclusions: The risk of ransomware cyberattacks is growing. Healthcare systems at all levels should recognize this threat and have protocols for dealing with such attacks once they occur.Keywords: ransomware attack, healthcare cybersecurity, obstetrics challenges, IT system disruption
Procedia PDF Downloads 24515 Development of Novel Amphiphilic Block Copolymer of Renewable ε-Decalactone for Drug Delivery Application
Authors: Deepak Kakde, Steve Howdle, Derek Irvine, Cameron Alexander
Abstract:
Poor aqueous solubility is one of the major obstacles in the formulation development of many drugs: around 70% of drugs are poorly soluble in aqueous media. In the last few decades, micelles have emerged as one of the major tools for the solubilization of hydrophobic drugs. Micelles are nanosized structures (10-100nm) obtained by self-assembly of amphiphilic molecules in water. The hydrophobic part of the micelle forms a core surrounded by a hydrophilic outer shell called the corona. These core-shell structures have been used as drug delivery vehicles for many years, although their utility has been limited by the lack of sustainable materials. In the present study, a novel methoxy poly(ethylene glycol)-b-poly(ε-decalactone) (mPEG-b-PεDL) copolymer was synthesized by ring opening polymerization (ROP) of renewable ε-decalactone (ε-DL) monomers on a methoxy poly(ethylene glycol) (mPEG) initiator, using 1,5,7-triazabicyclo[4.4.0]dec-5-ene (TBD) as an organocatalyst. All reactions were conducted in bulk to avoid the use of toxic organic solvents. The copolymer was characterized by nuclear magnetic resonance spectroscopy (NMR), gel permeation chromatography (GPC) and differential scanning calorimetry (DSC). The mPEG-b-PεDL block copolymer micelles containing indomethacin (IND) were prepared by the nanoprecipitation method and evaluated as a drug delivery vehicle. The size of the micelles was less than 40nm with a narrow polydispersity. A TEM image showed a uniform distribution of spherical micelles defined by a clear surface boundary. The indomethacin loading was 7.4% for the copolymer with a molecular weight of 13000 and a drug/polymer weight ratio of 4/50; a higher drug/polymer ratio decreased the drug loading. The drug release study in PBS (pH 7.4) showed a sustained release of drug over a period of 24 h.
In conclusion, we have developed a new sustainable polymeric material for IND delivery by combining a green synthetic approach with the use of a renewable monomer for the sustainable development of polymeric nanomedicine.Keywords: copolymer, ε-decalactone, indomethacin, micelles
Procedia PDF Downloads 295514 Free Will and Compatibilism in Decision Theory: A Solution to Newcomb’s Paradox
Authors: Sally Heyeon Hwang
Abstract:
Within decision theory, there are normative principles that dictate how one should act in addition to empirical theories of actual behavior. As a normative guide to one’s actual behavior, evidential or causal decision-theoretic equations allow one to identify outcomes with maximal utility values. The choice that each person makes, however, will, of course, differ according to varying assignments of weight and probability values. Regarding these different choices, it remains a subject of considerable philosophical controversy whether individual subjects have the capacity to exercise free will with respect to the assignment of probabilities, or whether instead the assignment is in some way constrained. A version of this question is given a precise form in Richard Jeffrey’s assumption that free will is necessary for Newcomb’s paradox to count as a decision problem. This paper will argue, against Jeffrey, that decision theory does not require the assumption of libertarian freedom. One of the hallmarks of decision-making is its application across a wide variety of contexts; the implications of a background assumption of free will is similarly varied. One constant across the contexts of decision is that there are always at least two levels of choice for a given agent, depending on the degree of prior constraint. Within the context of Newcomb’s problem, when the predictor is attempting to guess the choice the agent will make, he or she is analyzing the determined aspects of the agent such as past characteristics, experiences, and knowledge. On the other hand, as David Lewis’ backtracking argument concerning the relationship between past and present events brings to light, there are similarly varied ways in which the past can actually be dependent on the present. One implication of this argument is that even in deterministic settings, an agent can have more free will than it may seem. 
This paper will thus argue against the view that a stable background assumption of free will or determinism in decision theory is necessary, arguing instead for a compatibilist decision theory yielding a novel treatment of Newcomb’s problem.Keywords: decision theory, compatibilism, free will, Newcomb’s problem
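The divergence between the evidential and causal recommendations in Newcomb's problem can be made concrete with the standard payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and a hypothetical predictor accuracy of 0.99:

```python
# Standard Newcomb payoffs: $1,000,000 in the opaque box if one-boxing was
# predicted; $1,000 always in the transparent box. The 0.99 predictor
# accuracy is a hypothetical value for illustration.
M, K = 1_000_000, 1_000
ACCURACY = 0.99

# Evidential decision theory: condition the box contents on the act itself.
eu_one_box_edt = ACCURACY * M
eu_two_box_edt = (1 - ACCURACY) * (M + K) + ACCURACY * K

# Causal decision theory: the prediction is already fixed; for any prior
# probability p_full that the opaque box is full, two-boxing simply adds K.
def eu_cdt(p_full, take_two):
    return p_full * M + (K if take_two else 0)
```

Here EDT recommends one-boxing (expected value 990,000 versus roughly 11,000), while for every value of `p_full` two-boxing beats one-boxing by exactly K under CDT, which is the tension the paper's compatibilist treatment addresses.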
Procedia PDF Downloads 321513 Method of Complex Estimation of Text Perusal and Indicators of Reading Quality in Different Types of Commercials
Authors: Victor N. Anisimov, Lyubov A. Boyko, Yazgul R. Almukhametova, Natalia V. Galkina, Alexander V. Latanov
Abstract:
Modern commercials presented on billboards, TV and the Internet contain a lot of information about the product or service in text form. However, this information cannot always be perceived and understood by consumers. Typical sociological focus group studies often cannot reveal important features of how the information read in text messages is interpreted and understood. In addition, there is no reliable method to determine the degree of understanding of the information contained in a text: the mere fact of viewing a text does not mean that the consumer has perceived and understood its meaning. At the same time, tools based on marketing analysis allow only indirect estimation of the process of reading and understanding a text. Therefore, the aim of this work is to develop a valid method of recording objective indicators in real time for assessing the fact of reading and the degree of text comprehension. Psychophysiological parameters recorded during reading can form the basis for this objective method. We studied the relationship between multimodal psychophysiological parameters and the process of text comprehension during reading using correlation analysis. We used eye-tracking technology to record eye movement parameters to estimate visual attention, electroencephalography (EEG) to assess cognitive load, and polygraphic indicators (skin-galvanic reaction, SGR) that reflect the emotional state of the respondent during reading. We revealed reliable interrelations between perceiving the information and the dynamics of psychophysiological parameters during reading the text in commercials. Eye movement parameters reflected the difficulties arising in respondents while perceiving ambiguous parts of the text. EEG dynamics in the alpha band were related to the cumulative effect of cognitive load. SGR dynamics were related to the emotional state of the respondent and to the meaning of the text and the type of commercial.
EEG and polygraph parameters together also reflected the mental difficulties of respondents in understanding text and showed significant differences in cases of low and high text comprehension. We also revealed differences in psychophysiological parameters for different type of commercials (static vs. video, financial vs. cinema vs. pharmaceutics vs. mobile communication, etc.). Conclusions: Our methodology allows to perform multimodal evaluation of text perusal and the quality of text reading in commercials. In general, our results indicate the possibility of designing an integral model to estimate the comprehension of reading the commercial text in percent scale based on all noticed markers.Keywords: reading, commercials, eye movements, EEG, polygraphic indicators
Procedia PDF Downloads 166512 Evaluation of Different Food Baits by Using Kill Traps for the Control of Lesser Bandicoot Rat (Bandicota bengalensis) in Field Crops of Pothwar Plateau, Pakistan
Authors: Nadeem Munawar, Iftikhar Hussain, Tariq Mahmood
Abstract:
The lesser bandicoot rat (Bandicota bengalensis) is widely distributed and a serious agricultural pest in Pakistan. It is well adapted to the rice-wheat-sugarcane cropping systems of Punjab, Sindh and Khyber Pakhtunkhwa and the wheat-groundnut cropping system of the Pothwar area, thus inflicting heavy losses on these crops. The comparative efficacies of four food baits (onion, guava, potato and peanut butter smeared on bread/chapatti) were tested in multiple feeding tests for kill trapping of this rat species in the Pothwar Plateau between October 2013 and July 2014, at the sowing, tillering, flowering and maturity stages of the wheat, groundnut and millet crops. The results revealed that guava was the most preferred bait compared to the other three, presumably due to its particular taste and smell. Guava scored the highest trapping success of 16.94 ± 1.42 percent, followed by peanut butter, potato, and onion with trapping successes of 10.52 ± 1.30, 7.82 ± 1.21 and 4.5 ± 1.10 percent, respectively. Across crop stages and seasons, the highest trapping success was achieved at the maturity stages of the crops, presumably due to higher surface activity of the rat because of favorable climatic conditions, good shelter, and food abundance. Moreover, the maturity stage of the wheat crop coincided with the spring breeding season, and the maturity stages of millet and groundnut matched the monsoon/autumn breeding peak of the lesser bandicoot rat in the Pothwar area. The preference order among the four baits tested was guava > peanut butter > potato > onion. The study recommends that farmers periodically carry out rodent trapping at the beginning of each crop season and during the non-breeding seasons of this rodent pest, when the populations are low in numbers and restricted to crop boundary vegetation, particularly during very hot and cold months.Keywords: Bandicota bengalensis, efficacy, food baits, Pothwar
Procedia PDF Downloads 268511 Non-Destructive Static Damage Detection of Structures Using Genetic Algorithm
Authors: Amir Abbas Fatemi, Zahra Tabrizian, Kabir Sadeghi
Abstract:
To find the location and severity of damage in a structure, changes in its static and dynamic characteristics can be used. Non-destructive techniques are common, economical, and reliable ways to detect global or local damage in structures. This paper presents a non-destructive method for structural damage detection and assessment using a genetic algorithm (GA) and static data. A set of static forces is applied at some degrees of freedom (DOFs), and the static responses (displacements) are measured at another set of DOFs. An analytical model of the truss structure is developed based on the available specifications and the properties derived from the static data. Damage reduces the stiffness of the structure, so the method identifies damage from changes in the structural stiffness parameters. The changes in static response caused by structural damage are used to form a set of simultaneous equations. Genetic algorithms are powerful tools for solving large optimization problems; here, the optimization minimizes an objective function involving the difference between the static load vectors of the damaged and healthy structures. Several damage scenarios are defined (single and multiple). Static damage identification methods have many advantages, but some difficulties still exist, so it is important to verify that the best identification result is obtained, which indicates the method is reliable. The strategy is applied to a plane truss. Numerical results demonstrate the ability of the method to detect damage in the given structures, and the results for multiple damage scenarios are also efficient. Even the presence of noise in the measurements does not reduce the accuracy of the damage detection for these structures. Keywords: damage detection, finite element method, static data, non-destructive, genetic algorithm
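The idea of minimizing a static-response residual with a GA can be illustrated on a toy system. The sketch below is not the paper's plane truss or its exact objective function: it identifies per-element stiffness-reduction factors of a hypothetical three-spring series "structure" from measured static displacements, using a simple elitist GA with blend crossover and Gaussian mutation. All numerical values and the model itself are assumptions for illustration.

```python
import random

# Hypothetical 1-D model: three springs in series, axial force F at the free
# end. Displacement at node i is F * sum(1/k_j) over elements 1..i.
F = 1000.0                          # N, applied static force (assumed)
K_HEALTHY = [2.0e5, 2.0e5, 2.0e5]   # N/m, healthy element stiffnesses (assumed)

def displacements(damage):
    """Static nodal displacements when element j has stiffness k_j*(1 - d_j)."""
    u, acc = [], 0.0
    for k, d in zip(K_HEALTHY, damage):
        acc += F / (k * (1.0 - d))
        u.append(acc)
    return u

TRUE_DAMAGE = [0.0, 0.30, 0.0]      # 30 % stiffness loss in element 2
MEASURED = displacements(TRUE_DAMAGE)

def fitness(damage):
    """Sum of squared differences between measured and predicted displacements."""
    return sum((m - p) ** 2 for m, p in zip(MEASURED, displacements(damage)))

def ga(pop_size=60, gens=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 0.9) for _ in K_HEALTHY] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 4]              # elitism: keep fittest quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]   # blend crossover
            for i in range(len(child)):                     # Gaussian mutation
                if rng.random() < 0.3:
                    child[i] = min(0.9, max(0.0, child[i] + rng.gauss(0.0, 0.05)))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = ga()   # expected to localize the damage near element 2
```

The GA needs no gradient of the residual, which is what makes this formulation attractive when the stiffness-to-response map is only available through a structural model.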
Procedia PDF Downloads 237510 Vibration Control of a Horizontally Supported Rotor System by Using a Radial Active Magnetic Bearing
Authors: Vishnu A., Ashesh Saha
Abstract:
The operation of high-speed rotating machinery in industry is accompanied by rotor vibrations due to many factors. One of the primary instability mechanisms in a rotor system is the centrifugal force induced by the eccentricity of the center of mass from the center of rotation. These unwanted vibrations may lead to catastrophic fatigue failure, so there is a need to control them. In this work, control of rotor vibrations using a 4-pole Radial Active Magnetic Bearing (RAMB) as an actuator is analysed. A continuous rotor system model is considered for the analysis. Several important factors, such as the gyroscopic effect and the rotary inertia of the shaft and disc, are incorporated into this model. The large deflection of the shaft and the restriction of axial motion of the shaft at the bearings result in nonlinearities in the governing equation of the system. The rotor system is modeled such that the system dynamics can be related to the geometric and material properties of the shaft and disc. The mathematical model of the rotor system is developed by incorporating the control forces generated by the RAMB. A simple PD controller is used for the attenuation of system vibrations. Analytical expressions for the amplitude and phase equations are derived using the Method of Multiple Scales (MMS). The analytical results are verified against numerical results obtained using an ODE solver built into MATLAB. The control force is found to be effective in attenuating the system vibrations. The multi-valued solutions leading to the jump phenomenon are also eliminated with a proper choice of control gains. Most interestingly, the shape of the backbone curves can also be altered for certain values of the control parameters. Keywords: rotor dynamics, continuous rotor system model, active magnetic bearing, PD controller, method of multiple scales, backbone curve
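The effect of a PD control force on unbalance response can be sketched on a much simpler system than the paper's continuous rotor model. Below, a single-DOF Jeffcott-style rotor driven near resonance by an eccentricity force stands in for the shaft-disc system, and a PD term stands in for the RAMB actuator; all parameter values are illustrative assumptions, and the nonlinearities treated by the MMS analysis are omitted.

```python
import math

# Single-DOF surrogate: M x'' + C x' + K x = F cos(w t) - (kp x + kd x')
M, C, K = 1.0, 0.5, 100.0   # kg, N*s/m, N/m (assumed)
ECC_FORCE = 5.0             # N, unbalance force amplitude (assumed)
OMEGA = 10.0                # rad/s, spin speed at resonance sqrt(K/M)

def simulate(kp=0.0, kd=0.0, dt=1e-3, t_end=30.0):
    """RK4 integration; returns the peak steady-state displacement (t > 20 s)."""
    def acc(t, x, v):
        return (ECC_FORCE * math.cos(OMEGA * t)
                - kp * x - kd * v - C * v - K * x) / M
    x, v, t, peak = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        k1x, k1v = v, acc(t, x, v)
        k2x, k2v = v + 0.5*dt*k1v, acc(t + 0.5*dt, x + 0.5*dt*k1x, v + 0.5*dt*k1v)
        k3x, k3v = v + 0.5*dt*k2v, acc(t + 0.5*dt, x + 0.5*dt*k2x, v + 0.5*dt*k2v)
        k4x, k4v = v + dt*k3v, acc(t + dt, x + dt*k3x, v + dt*k3v)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6.0
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
        t += dt
        if t > 20.0:            # transients decayed; record steady amplitude
            peak = max(peak, abs(x))
    return peak

uncontrolled = simulate()                     # near-resonant response
controlled = simulate(kp=200.0, kd=10.0)      # PD force via the actuator
```

The proportional gain detunes the closed-loop natural frequency away from the spin speed while the derivative gain adds damping, which is why the controlled amplitude drops sharply; gain choices of this kind are also what remove the multi-valued (jump) branches in the full nonlinear analysis.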
Procedia PDF Downloads 79509 [Keynote Talk]: Three Dimensional Finite Element Analysis of Functionally Graded Radiation Shielding Nanoengineered Sandwich Composites
Authors: Nasim Abuali Galehdari, Thomas J. Ryan, Ajit D. Kelkar
Abstract:
In recent years, nanotechnology has played an important role in the design of efficient radiation-shielding polymeric composites. It is well known that high loadings of nanomaterials with radiation-absorption properties can enhance the radiation-attenuation efficiency of shielding structures. However, due to difficulties in dispersing nanomaterials into polymer matrices, higher loading percentages of nanoparticles in the polymer matrix have been limited. Therefore, the objective of the present work is to provide a methodology to fabricate and then characterize functionally graded radiation-shielding structures that provide efficient radiation absorption along with good structural integrity. Sandwich structures composed of Ultra High Molecular Weight Polyethylene (UHMWPE) fabric as face sheets and a functionally graded epoxy nanocomposite as the core material were fabricated. A method to fabricate a functionally graded core panel with a controllable gradient dispersion of nanoparticles is discussed. To optimize the design of the functionally graded sandwich composites and to analyze the stress distribution through the sandwich composite thickness, the finite element method was used. The sandwich panels were discretized using three-dimensional 8-noded brick elements. Classical laminate analysis, in conjunction with simplified micromechanics equations, was used to obtain the properties of the face sheets. The presented finite element model provides insight into the deformation and damage mechanics of functionally graded sandwich composites from the structural point of view. Keywords: nanotechnology, functionally graded material, radiation shielding, sandwich composites, finite element method
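How a graded core enters a through-thickness stiffness calculation can be sketched in a few lines. The example below is not the authors' FE model or their micromechanics equations: it assumes a power-law gradient of nanoparticle volume fraction through the core, a simple Voigt rule of mixtures for the local modulus, and integrates the per-unit-width flexural rigidity D = ∫ E(z) z² dz over the sandwich thickness. All material values are placeholders.

```python
# Assumed sandwich: UHMWPE-fabric face sheets over a graded epoxy core.
E_FACE = 25.0e9       # Pa, face-sheet modulus (assumed)
E_EPOXY = 3.0e9       # Pa, neat epoxy
E_PARTICLE = 50.0e9   # Pa, radiation-absorbing nanoparticle (assumed)
T_FACE, T_CORE = 0.001, 0.010   # m, face and core thicknesses (assumed)
N_GRADE = 2.0         # power-law exponent of the through-thickness gradient

def core_modulus(z_core):
    """Voigt rule of mixtures; z_core measured from the bottom of the core."""
    vf = 0.3 * (z_core / T_CORE) ** N_GRADE   # graded particle volume fraction
    return vf * E_PARTICLE + (1.0 - vf) * E_EPOXY

def bending_stiffness(n=2000):
    """Per-unit-width flexural rigidity D = integral of E(z)*z^2 dz (midpoint rule)."""
    h = T_CORE + 2.0 * T_FACE
    dz = h / n
    D = 0.0
    for i in range(n):
        z = -h / 2.0 + (i + 0.5) * dz          # z measured from the midplane
        if abs(z) > T_CORE / 2.0:
            E = E_FACE                          # face-sheet region
        else:
            E = core_modulus(z + T_CORE / 2.0)  # graded core region
        D += E * z * z * dz
    return D

D = bending_stiffness()   # N*m per unit width
```

In the real model the same through-thickness integration is what the 8-noded brick discretization resolves, with the added benefit of capturing local stress gradients at the face-core interfaces.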
Procedia PDF Downloads 469508 Sensitivity Analysis of the Thermal Properties in Early Age Modeling of Mass Concrete
Authors: Farzad Danaei, Yilmaz Akkaya
Abstract:
In many civil engineering applications, especially the construction of large concrete structures, the early-age behavior of concrete has proven to be a crucial problem. The uneven rise in temperature within the concrete in these constructions is the fundamental issue for quality control, so developing accurate and fast temperature-prediction models is essential. The thermal properties of concrete fluctuate over time as it hardens, but accounting for all of these fluctuations makes numerical models more complex, and experimental measurement of the thermal properties under laboratory conditions cannot accurately predict their variation at site conditions. Therefore, the specific heat capacity and the thermal conductivity coefficient are two variables treated as constants in many previously recommended models. The proposed equations demonstrate that these two quantities decrease linearly as the cement hydrates, and their values are related to the degree of hydration. Using numerical sensitivity analysis, this study examines the effects of varying the thermal conductivity and specific heat capacity on the maximum temperature and on the time it takes the concrete to reach that temperature, and the results are compared to models that take fixed values for these two thermal properties. The study covers 7 different concrete mix designs with varying amounts of supplementary cementitious materials (fly ash and ground granulated blast furnace slag). It is concluded that the maximum temperature does not change as a result of assuming a constant conductivity coefficient, but a variable specific heat capacity must be taken into account; the variable specific heat capacity can also have a considerable effect on the time at which the concrete's central node reaches its maximum value. Also, the usage of GGBFS has more influence than fly ash. Keywords: early-age concrete, mass concrete, specific heat capacity, thermal conductivity coefficient
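The sensitivity of the peak temperature to a hydration-dependent specific heat can be illustrated with a deliberately reduced model. The sketch below is not the paper's field model: it uses a lumped semi-adiabatic energy balance with an exponential hydration law, and compares a constant (laboratory) specific heat against one that decreases linearly with the degree of hydration, the trend the proposed equations describe. The hydration law, the loss coefficient, and all material values are illustrative assumptions.

```python
import math

RHO = 2400.0            # kg/m^3, concrete density (assumed)
Q_INF = 122.5e6         # J/m^3, total hydration heat (350 kg/m^3 cement x 350 kJ/kg, assumed)
TAU = 36.0 * 3600.0     # s, hydration time constant (assumed)
H_LOSS = 2.0e-6         # 1/s, lumped surface-loss coefficient (assumed)
T_AMB = 20.0            # deg C

def cp_variable(alpha):
    """Specific heat decreasing linearly with degree of hydration (assumed slope)."""
    return 1100.0 - 200.0 * alpha   # J/(kg*K)

def peak_temperature(cp_func, dt=60.0, t_end=14 * 24 * 3600.0):
    """Lumped balance: rho*cp(alpha)*dT/dt = Q_inf*dalpha/dt - surface loss."""
    T, t = T_AMB, 0.0
    peak_T, peak_t = T_AMB, 0.0
    while t < t_end:
        alpha_rate = math.exp(-t / TAU) / TAU          # d(alpha)/dt
        alpha = 1.0 - math.exp(-t / TAU)               # degree of hydration
        cp = cp_func(alpha)
        T += dt * (Q_INF * alpha_rate / (RHO * cp) - H_LOSS * (T - T_AMB))
        t += dt
        if T > peak_T:
            peak_T, peak_t = T, t
    return peak_T, peak_t

peak_const, _ = peak_temperature(lambda a: 1100.0)   # lab value held fixed
peak_var, _ = peak_temperature(cp_variable)          # hydration-dependent cp
```

Because the variable specific heat drops as hydration proceeds, the later portion of the heat release produces a larger temperature rise per joule, so the constant-cp model underestimates the peak; this is the mechanism behind the study's conclusion that cp, unlike the conductivity, cannot be treated as constant.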
Procedia PDF Downloads 77