Search results for: Volterra integral equations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2596


436 Effects of the Air Supply Outlets Geometry on Human Comfort inside Living Rooms: CFD vs. ADPI

Authors: Taher M. Abou-deif, Esmail M. El-Bialy, Essam E. Khalil

Abstract:

The paper is devoted to numerically investigating the influence of air supply outlet geometry on human comfort inside living rooms. A computational fluid dynamics model is developed to examine the air flow characteristics of a room with different supply air diffusers. The work focuses on air flow patterns and thermal behavior in a room with a small number of occupants. As an input to the full-scale 3-D room model, a 2-D air supply diffuser model that supplies the direction and magnitude of air flow into the room is developed. The effect of air distribution on thermal comfort parameters was investigated by varying the air supply diffuser type, angle and velocity; the locations and number of supply diffusers were also investigated. The pre-processor Gambit is used to create the geometric model with parametric features. The commercially available simulation software “Fluent 6.3” is used to solve the differential equations governing the conservation of mass, the three momentum components and energy in the computation of the air flow distribution. Turbulence effects of the flow are represented by a well-developed two-equation turbulence model; in this work, the standard k-ε turbulence model, one of the most widespread turbulence models for industrial applications, was utilized. The basic parameters used for numerical predictions of indoor air distribution and thermal comfort are air dry bulb temperature, air velocity, relative humidity and turbulence parameters. The thermal comfort predictions in this work were based on the ADPI (Air Diffusion Performance Index), the PMV (Predicted Mean Vote) model and the PPD (Percentage of People Dissatisfied) model; the PMV and PPD were estimated using Fanger's model.
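The ADPI figure of merit used above has a simple point-wise definition; as an illustration, here is a minimal sketch of the ADPI calculation from sampled temperature/velocity data, using the ASHRAE effective draft temperature window. The sample values below are hypothetical, not from the study.

```python
def effective_draft_temperature(t_local, t_mean, v_local):
    """ASHRAE effective draft temperature (deg C) at a sampled point."""
    return (t_local - t_mean) - 8.0 * (v_local - 0.15)

def adpi(points, t_mean):
    """ADPI = percentage of sampled points inside the comfort window:
    -1.7 < EDT < 1.1 deg C and local air speed < 0.35 m/s."""
    ok = 0
    for t_local, v_local in points:
        edt = effective_draft_temperature(t_local, t_mean, v_local)
        if -1.7 < edt < 1.1 and v_local < 0.35:
            ok += 1
    return 100.0 * ok / len(points)

# Hypothetical CFD samples: (local temperature deg C, local speed m/s)
samples = [(24.8, 0.12), (25.3, 0.20), (23.1, 0.40), (26.9, 0.10)]
print(adpi(samples, t_mean=25.0))  # 50.0: two of four points comfortable
```

In a real post-processing pass, the sample list would come from probe points distributed over the occupied zone of the CFD solution.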

Keywords: thermal comfort, Fanger's model, ADPI, energy efficiency

Procedia PDF Downloads 410
435 Magnetized Cellulose Nanofiber Extracted from Natural Resources for the Application of Hexavalent Chromium Removal Using the Adsorption Method

Authors: Kebede Gamo Sebehanie, Olu Emmanuel Femi, Alberto Velázquez Del Rosario, Abubeker Yimam Ali, Gudeta Jafo Muleta

Abstract:

Water pollution is one of the most serious worldwide issues today. Among water pollutants, heavy metals are becoming a concern to the environment and human health due to their non-biodegradability and bioaccumulation. In this study, a magnetite-cellulose nanocomposite derived from renewable resources is employed for hexavalent chromium elimination by adsorption. Magnetite nanoparticles were synthesized directly from iron ore using solvent extraction and co-precipitation techniques. Cellulose nanofiber was extracted from sugarcane bagasse using the alkaline treatment and acid hydrolysis method. Before and after the adsorption process, the MNPs-CNF composites were characterized using X-ray diffraction (XRD), scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), vibrating sample magnetometry (VSM) and thermogravimetric analysis (TGA). The impacts of several parameters, such as pH, contact time, initial pollutant concentration and adsorbent dose, on adsorption efficiency and capacity were examined. The adsorption kinetics and isotherms of Cr(VI) were also studied. The highest removal was obtained at pH 3, and it took 80 minutes to establish adsorption equilibrium. The Langmuir and Freundlich isotherm models were used, and the experimental data fit well with the Langmuir model, which gave a maximum adsorption capacity of 8.27 mg/g. The kinetic study of the adsorption process using pseudo-first-order and pseudo-second-order equations revealed that the pseudo-second-order equation was better suited for representing the adsorption kinetic data. Based on the findings, pure MNPs and MNPs-CNF nanocomposites could be used as effective adsorbents for the removal of Cr(VI) from wastewater.
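For context on how Langmuir parameters such as the reported 8.27 mg/g capacity are typically obtained, here is a minimal sketch fitting the linearized Langmuir isotherm Ce/qe = Ce/qmax + 1/(KL·qmax) by ordinary least squares. The equilibrium data below are synthetic, generated from the abstract's reported capacity rather than taken from the study.

```python
def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def langmuir_fit(ce, qe):
    """Fit Ce/qe = Ce/qmax + 1/(KL*qmax); returns (qmax, KL)."""
    slope, intercept = linfit(ce, [c / q for c, q in zip(ce, qe)])
    qmax = 1.0 / slope
    kl = slope / intercept
    return qmax, kl

# Synthetic equilibrium data from qmax = 8.27 mg/g, KL = 0.5 L/mg
qmax0, kl0 = 8.27, 0.5
ce = [1.0, 2.0, 5.0, 10.0, 20.0]
qe = [qmax0 * kl0 * c / (1.0 + kl0 * c) for c in ce]
print(langmuir_fit(ce, qe))  # recovers (8.27, 0.5) up to rounding
```

The pseudo-second-order kinetic fit mentioned in the abstract is done the same way, with t/qt regressed against t.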

Keywords: magnetite-cellulose nanocomposite, hexavalent chromium, adsorption, sugarcane bagasse

Procedia PDF Downloads 130
434 Effect of Wheat Germ Agglutinin- and Lactoferrin-Grafted Catanionic Solid Lipid Nanoparticles on Targeting Delivery of Etoposide to Glioblastoma Multiforme

Authors: Yung-Chih Kuo, I-Hsin Wang

Abstract:

Catanionic solid lipid nanoparticles (CASLNs) with surface wheat germ agglutinin (WGA) and lactoferrin (Lf) were formulated for entrapping and releasing etoposide (ETP), crossing the blood–brain barrier (BBB), and inhibiting the growth of glioblastoma multiforme (GBM). Microemulsified ETP-CASLNs were modified with WGA and Lf for permeating a cultured monolayer of human brain-microvascular endothelial cells (HBMECs) regulated by human astrocytes and for treating malignant U87MG cells. Experimental evidence revealed that an increase in the concentration of catanionic surfactant from 5 μM to 7.5 μM reduced the particle size. When the concentration of catanionic surfactant increased from 7.5 μM to 12.5 μM, the particle size increased, yielding a minimal diameter of WGA-Lf-ETP-CASLNs at 7.5 μM of catanionic surfactant. An increase in the weight percentage of BW from 25% to 75% enlarged WGA-Lf-ETP-CASLNs. In addition, an increase in the concentration of catanionic surfactant from 5 to 15 μM increased the absolute value of the zeta potential of WGA-Lf-ETP-CASLNs. It was intriguing that the increment of the charge as a function of the concentration of catanionic surfactant was approximately linear. WGA-Lf-ETP-CASLNs revealed an integral structure with a smooth particle contour, displayed a lighter exterior layer of catanionic surfactant, WGA, and Lf, and showed a rigid interior region of solid lipids. A variation in the concentration of catanionic surfactant between 5 μM and 15 μM yielded a maximal encapsulation efficiency of ETP at 7.5 μM of catanionic surfactant. An increase in the concentration of Lf/WGA decreased the grafting efficiency of Lf/WGA. Also, an increase in the weight percentage of ETP decreased its encapsulation efficiency. Moreover, the release rate of ETP from WGA-Lf-ETP-CASLNs decreased with increasing concentration of catanionic surfactant, and WGA-Lf-ETP-CASLNs at 12.5 μM of catanionic surfactant exhibited a sustained release profile.
The order of HBMEC viability was ETP-CASLNs ≅ Lf-ETP-CASLNs ≅ WGA-Lf-ETP-CASLNs > ETP. The variation in the transendothelial electrical resistance (TEER) and the permeability of propidium iodide (PI) was negligible when the concentration of Lf increased. Furthermore, an increase in the concentration of WGA from 0.2 to 0.6 mg/mL did not significantly alter the TEER and permeability of PI. When the concentration of Lf increased from 2.5 to 7.5 μg/mL and the concentration of WGA increased from 2.5 to 5 μg/mL, the enhancement in the permeability of ETP was minor. However, 10 μg/mL of Lf promoted the permeability of ETP using Lf-ETP-CASLNs, and 5 and 10 μg/mL of WGA considerably improved the permeability of ETP using WGA-Lf-ETP-CASLNs. The order of efficacy in inhibiting U87MG cells was WGA-Lf-ETP-CASLNs > Lf-ETP-CASLNs > ETP-CASLNs > ETP. As a result, WGA-Lf-ETP-CASLNs reduced the TEER, enhanced the permeability of PI, induced only minor cytotoxicity to HBMECs, increased the permeability of ETP across the BBB, and improved the antiproliferative efficacy against U87MG cells. The grafting of WGA and Lf is crucial to controlling the medicinal properties of ETP-CASLNs, and WGA-Lf-ETP-CASLNs can be promising colloidal carriers in GBM management.

Keywords: catanionic solid lipid nanoparticle, etoposide, glioblastoma multiforme, lactoferrin, wheat germ agglutinin

Procedia PDF Downloads 237
433 Improvement of the Performance of Supersonic Nozzles at High Temperature: Minimum Length Nozzle Type

Authors: W. Hamaidia, T. Zebbiche

Abstract:

This paper presents the design of axisymmetric supersonic nozzles that accelerate the flow to a desired exit Mach number while having a small weight and, at the same time, delivering a high thrust. The nozzle considered gives a parallel and uniform flow at the exit section. The nozzle is divided into subsonic and supersonic regions. The supersonic portion is independent of the upstream conditions of the sonic line, while the subsonic portion is used to deliver a sonic flow at the throat. In this case the nozzle gives a uniform and parallel flow at the exit section; it is called the minimum length nozzle. The study is done at high temperature, below the dissociation threshold of the molecules, in order to improve the aerodynamic performance. Our aim is to improve the performance both by increasing the exit Mach number and the thrust coefficient and by reducing the nozzle's mass. The variation of the specific heats with temperature is considered. The design is made by the method of characteristics. The finite difference method with a predictor-corrector algorithm is used for the numerical resolution of the resulting nonlinear algebraic equations. The application is for air. All the obtained results depend on three parameters: the exit Mach number, the stagnation temperature and the mesh chosen in the characteristics. A numerical simulation of the nozzle with the Computational Fluid Dynamics code FASTRAN was performed to determine and confirm the necessary design parameters.
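As background for the method of characteristics design, the key ingredient is the Prandtl-Meyer function. Here is a minimal calorically perfect-gas sketch; the study itself uses temperature-dependent specific heats, which this baseline deliberately ignores.

```python
import math

def prandtl_meyer(mach, gamma=1.4):
    """Prandtl-Meyer function nu(M) in degrees, calorically perfect gas."""
    g = (gamma + 1.0) / (gamma - 1.0)
    nu = math.sqrt(g) * math.atan(math.sqrt((mach ** 2 - 1.0) / g)) \
         - math.atan(math.sqrt(mach ** 2 - 1.0))
    return math.degrees(nu)

# For a planar minimum length nozzle, the initial wall expansion angle
# at the throat is nu(Me)/2 (the axisymmetric case needs the full
# characteristics computation). For an exit Mach number of 2.0:
me = 2.0
print(prandtl_meyer(me), prandtl_meyer(me) / 2.0)
```

Making the gas calorically imperfect, as in the abstract, replaces this closed form by a quadrature over the temperature-dependent specific heat.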

Keywords: supersonic flow, axisymmetric minimum length nozzle, high temperature, method of characteristics, calorically imperfect gas, finite difference method, thrust coefficient, mass of the nozzle, specific heat at constant pressure, air, error

Procedia PDF Downloads 151
432 Discourse Analysis: Where Cognition Meets Communication

Authors: Iryna Biskub

Abstract:

The interdisciplinary approach to modern linguistic studies is exemplified by the merging of various research methods, which sometimes causes complications in verifying research results. This methodological confusion can be resolved by creating new techniques of linguistic analysis that combine several scientific paradigms. Modern linguistics has developed productive and efficient methods for investigating cognitive and communicative phenomena, of which language is the central issue. In the field of discourse studies, one of the best examples of research methods is Critical Discourse Analysis (CDA). CDA can be viewed both as a method of investigation and as a critical multidisciplinary perspective. In CDA the position of the scholar is crucial, as it exemplifies his or her social and political convictions. The generally accepted approach to obtaining scientifically reliable results is to use a special well-defined scientific method for researching particular types of language phenomena: cognitive methods are applied to the exploration of cognitive aspects of language, whereas communicative methods are thought to be relevant only for investigating the communicative nature of language. In recent decades discourse as a sociocultural phenomenon has been the focus of careful linguistic research. The very concept of discourse represents an integral unity of the cognitive and communicative aspects of human verbal activity. Since a human being is never able to discriminate between the cognitive and communicative planes of discourse communication, it does not make much sense to apply cognitive and communicative methods of research in isolation. It is possible to modify the classical CDA procedure by mapping human cognitive procedures onto the strategic communicative planning of discourse communication. The analysis of the electronic petition 'Block Donald J Trump from UK entry. The signatories believe Donald J Trump should be banned from UK entry' (584,459 signatures) and the parliamentary debates on it demonstrated the ability to map cognitive and communicative levels in the following way: the strategy of discourse modeling (communicative level) overlaps with the extraction of semantic macrostructures (cognitive level); the strategy of discourse management overlaps with the analysis of local meanings in discourse communication; and the strategy of cognitive monitoring of the discourse overlaps with the formation of attitudes and ideologies at the cognitive level. Thus, the experimental data have shown that it is possible to develop a new complex methodology of discourse analysis, where cognition would meet communication, both metaphorically and literally. The same approach may prove productive for the creation of computational models of human-computer interaction, where the automatic generation of a particular type of discourse could be based on the rules of strategic planning involving the cognitive models of CDA.

Keywords: cognition, communication, discourse, strategy

Procedia PDF Downloads 255
431 Numerical Investigation of Hydrodynamic and Parietal Heat Transfer to a Bingham Fluid Agitated in a Vessel by a Helical Ribbon Impeller

Authors: Mounir Baccar, Amel Gammoudi, Abdelhak Ayadi

Abstract:

The efficient mixing of highly viscous fluids is required in many industries, such as food, polymer or paint production. Homogenization is a challenging operation for this type of fluid, since mixing is carried out at low Reynolds numbers to limit the power required by the impellers. In particular, close-clearance impellers, mainly helical ribbons, are chosen for highly viscous fluids agitated in the laminar regime, which are commonly heated through the vessel wall. Indeed, such impellers produce high shear strains close to the vessel wall, which disrupts the thermal boundary layer and ensures homogenization of the bulk volume by axial and radial vortices. The hydrodynamic and thermal behavior of Newtonian fluids in vessels agitated by helical ribbon impellers has been studied by many researchers. However, few researchers have numerically investigated the agitation of yield stress fluids by means of helical ribbon impellers. This paper aims to study the effect of double helical ribbon (DHR) stirrers on both the hydrodynamic and the thermal behavior of yield stress fluids treated in a cylindrical vessel by means of a numerical simulation approach. For this purpose, the continuity, momentum and thermal equations were solved with a 3D finite volume technique. The effect of the Oldroyd (Od) and Reynolds (Re) numbers on the power (Po) and Nusselt (Nu) numbers for the mentioned stirrer type has been studied. Also, the velocity and thermal fields, the dissipation function and the apparent viscosity are presented in different (r-z) and (r-θ) planes.
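Yield stress (Bingham) behavior is usually handled in finite volume solvers through a regularized apparent viscosity. The abstract does not say which regularization was used, so the sketch below uses the common Papanastasiou form purely as an illustration, with hypothetical material parameters.

```python
import math

def apparent_viscosity(shear_rate, tau_y, mu_p, m=1000.0):
    """Papanastasiou-regularized apparent viscosity of a Bingham fluid:
    mu_app = mu_p + tau_y * (1 - exp(-m * gdot)) / gdot,
    which stays finite as the shear rate gdot tends to zero."""
    if shear_rate == 0.0:
        return mu_p + tau_y * m  # analytic limit of the expression
    return mu_p + tau_y * (1.0 - math.exp(-m * shear_rate)) / shear_rate

# Plastic viscosity 1 Pa.s, yield stress 10 Pa: the apparent viscosity
# is huge in nearly unyielded zones and tends to mu_p at high shear.
for gdot in (0.01, 1.0, 100.0):
    print(gdot, apparent_viscosity(gdot, tau_y=10.0, mu_p=1.0))
```

In a simulation of the stirred vessel, this function would be evaluated cell by cell from the local strain-rate magnitude, which is how the abstract's "apparent viscosity" fields are obtained.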

Keywords: Bingham fluid, hydrodynamic and thermal behavior, helical ribbon, mixing, numerical modelling

Procedia PDF Downloads 306
430 Retrospective Analysis Demonstrates No Difference in Percutaneous Native Renal Biopsy Adequacy Between Nephrologists and Radiologists in University Hospital Crosshouse

Authors: Nicole Harley, Mahmoud Eid, Abdurahman Tarmal, Vishal Dey

Abstract:

Histological sampling plays an integral role in the diagnostic process of renal diseases. Percutaneous native renal biopsy is typically performed under ultrasound guidance, with this service usually being provided by nephrologists. In some centers, radiologists also perform renal biopsies. Previous comparative studies have demonstrated non-inferiority between the outcomes of percutaneous native renal biopsies performed by nephrologists and those performed by radiologists. We sought to compare biopsy adequacy between nephrologists and radiologists at University Hospital Crosshouse. The online system SERPR (Scottish Electronic Renal Patient Record) contains information pertaining to patients who have undergone renal biopsies. An online search was performed to acquire a list of all patients who underwent renal biopsy between 2013 and 2020 at University Hospital Crosshouse. 355 native renal biopsies were performed in total across this period. A retrospective analysis was performed on these cases, with records and reports assessed for: the total number of glomeruli obtained per biopsy; whether the number of glomeruli obtained was adequate for diagnosis, as per an internationally agreed standard; and whether a histological diagnosis was achieved. Nephrologists performed 43.9% of native renal biopsies (n=156) and radiologists performed 56.1% (n=199). The mean number of glomeruli obtained by nephrologists was 17.16 ± 10.31; by radiologists, 18.38 ± 10.55. A t-test demonstrated no statistically significant difference between the specialties (p-value 0.277). Native renal biopsies are required to obtain at least 8 glomeruli to be diagnostic as per internationally agreed criteria. Nephrologists met these criteria in 88.5% of native renal biopsies (n=138) and radiologists met them in 89.5% (n=178).
T-test and chi-squared analyses demonstrated no statistically significant difference between the specialties (p-values 0.663 and 0.922, respectively). Biopsies performed by nephrologists yielded diagnostic tissue in 91.0% (n=142) of sampling; biopsies performed by radiologists, in 92.4% (n=184). T-test and chi-squared analyses again demonstrated no statistically significant difference between the specialties (p-values 0.625 and 0.889, respectively). This project demonstrates that at University Hospital Crosshouse there is no statistical difference between radiologists and nephrologists in terms of glomeruli acquisition or samples achieving a histological diagnosis. Given the non-inferiority between specialties demonstrated by previous studies and this project, this evidence could support the restructuring of services to allow more renal biopsies to be performed by renal services and allow reallocation of radiology department resources.
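As an illustration of the kind of 2×2 comparison reported above (adequacy counts by specialty), here is a minimal sketch using the standard chi-squared statistic for a 2×2 table and the 1-degree-of-freedom tail probability. The counts come from the abstract, but the exact p-values reported there depend on the test variant used (e.g. continuity correction), so this sketch only reproduces the qualitative conclusion.

```python
import math

def chi2_2x2(a, b, c, d):
    """Chi-squared test (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (statistic, p) with 1 degree of freedom."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(stat / 2.0))  # survival function of chi2(1)
    return stat, p

# Adequate vs inadequate biopsies: nephrologists 138/156, radiologists 178/199
stat, p = chi2_2x2(138, 156 - 138, 178, 199 - 178)
print(round(stat, 3), round(p, 3))  # p > 0.05: no significant difference
```

The chi2(1) tail is expressed via `erfc` because a chi-squared variable with one degree of freedom is the square of a standard normal.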

Keywords: biopsy, medical imaging, nephrology, radiology

Procedia PDF Downloads 84
429 Regression Analysis in Estimating Stream-Flow and the Effect of Hierarchical Clustering Analysis: A Case Study in Euphrates-Tigris Basin

Authors: Goksel Ezgi Guzey, Bihrat Onoz

Abstract:

The scarcity of streamflow gauging stations and the increasing effects of global warming make the design of water management systems very difficult. This study is a significant contribution to assessing regional regression models for estimating streamflow. In this study, simulated meteorological data were related to the observed streamflow data from 1971 to 2020 for 33 stream gauging stations of the Euphrates-Tigris Basin. Ordinary least squares regression was used to predict flow for 2020-2100 with the simulated meteorological data. The CORDEX-EURO and CORDEX-MENA domains were used with 0.11° and 0.22° grids, respectively, to estimate climate conditions under certain climate scenarios. Twelve meteorological variables simulated by two regional climate models, RCA4 and RegCM4, were used as independent variables in the ordinary least squares regression, where the observed streamflow was the dependent variable. The variability of streamflow was then calculated with 5-6 meteorological variables and watershed characteristics such as area and elevation prior to the application. Following the regression analysis of 31 stream gauging stations' data, the stations were subjected to a clustering analysis, which grouped them into two clusters in terms of their hydrometeorological properties. Two streamflow equations were found for the two clusters of stream gauging stations for every domain and every regional climate model, which increased the efficiency of streamflow estimation by 10-15% for all the models. This study underlines the importance of the homogeneity of a region in estimating streamflow, not only in terms of geographical location but also in terms of the meteorological characteristics of that region.
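A minimal sketch of the regression step, reduced to one predictor for clarity: the study regresses streamflow on several simulated meteorological variables plus watershed characteristics, and fits one equation per station cluster. The data below are hypothetical.

```python
def ols_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical cluster of stations: simulated precipitation (mm) vs
# observed streamflow (m3/s); the real study uses ~5-6 meteorological
# variables plus watershed area and elevation as predictors.
precip = [40.0, 55.0, 70.0, 90.0, 120.0]
flow = [12.0, 18.0, 22.0, 30.0, 41.0]
a, b = ols_fit(precip, flow)
print(a, b)            # fitted regional regression coefficients
print(a * 100.0 + b)   # predicted flow for a simulated 100 mm scenario
```

Grouping stations by hierarchical clustering before fitting, as the abstract describes, simply means running a fit like this once per cluster instead of once per basin.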

Keywords: hydrology, streamflow estimation, climate change, hydrologic modeling, HBV, hydropower

Procedia PDF Downloads 129
428 A Green Optically Active Hydrogen and Oxygen Generation System Employing Terrestrial and Extra-Terrestrial Ultraviolet Solar Irradiance

Authors: H. Shahid

Abstract:

Due to ozone layer depletion on Earth, incoming ultraviolet (UV) radiation is recorded at high index levels, such as 25 in southern Peru (13.5° S, 3360 m a.s.l.). The planning of human habitation on Mars, where UV radiation is quite high, is also under discussion. Exposure to UV is a health hazard and is avoided by UV filters. On the other hand, artificial UV sources are in use for water thermolysis to generate hydrogen and oxygen, which are later used as fuels. This paper presents the utility of employing UVA (315-400 nm) and UVB (280-315 nm) electromagnetic radiation from the solar spectrum to design and implement an optically active hydrogen and oxygen generation system via thermolysis of desalinated seawater. The proposed system finds its utility on Earth and can in the future be deployed on Mars (UVB). In this system, using Fresnel lens arrays as an optical filter and via active tracking, the ultraviolet light from the sun is concentrated and then allowed to fall on two sub-systems of the proposed system. The first sub-system generates electrical energy using UV-based tandem photovoltaic cells such as GaAs/GaInP/GaInAs/GaInAsP, and the second elevates the temperature of the water to lower the electric potential required to electrolyze it. An empirical analysis is performed at 30 atm, and the electrical potential is observed to be the main controlling factor for the rate of production of hydrogen and oxygen and hence the operating point (Q-point) of the proposed system. The hydrogen production rate of a commercial system in static mode (650°C, 0.6 V) is taken as a reference. A solid oxide electrolyzer cell (SOEC) is used in the proposed (UV) system for the hydrogen and oxygen production. To achieve the same amount of hydrogen as in the reference system, with a minimum chamber operating temperature of 850°C in static mode, the corresponding required electrical potential is calculated as 0.3 V. Practically, however, the hydrogen production rate is observed to be lower than that of the reference system at 850°C and 0.3 V. It has been shown empirically that the hydrogen production can be enhanced by raising the electrical potential to 0.45 V, which increases the production rate to the same level as that of the reference system. Therefore, 850°C and 0.45 V are assigned as the Q-point of the proposed system, which is actively stabilized via proportional-integral-derivative controllers that adjust the axial position of the lens arrays for both subsystems. The functionality of the controllers is based on holding the chamber at 850°C (the minimum operating temperature) and 0.45 V, the Q-point, to realize the same hydrogen production rate as the reference system.
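For background on why heating the water lowers the required electrical potential: the reversible cell voltage is E = ΔG/(2F), and ΔG for splitting steam shrinks as temperature rises because part of the energy is supplied as heat. The sketch below uses rough literature ΔG values for H₂O(g), not figures from the abstract, whose 0.3-0.45 V operating points are specific to its combined UV/thermal system.

```python
# Reversible electrolysis voltage E = dG / (n * F), with n = 2 electrons
# transferred per H2 molecule.
F = 96485.0  # Faraday constant, C/mol

# Approximate Gibbs free energy of formation of H2O(g), J/mol, at a few
# temperatures (rough literature values, for illustration only).
dG = {298: 228600.0, 1000: 192600.0, 1123: 186300.0}  # 1123 K ~ 850 deg C

for T in sorted(dG):
    print(T, "K:", round(dG[T] / (2.0 * F), 3), "V")  # falls as T rises
```

The trend, roughly 1.18 V at room temperature down to below 1 V near 850 °C, is what makes high-temperature solid oxide electrolysis attractive when concentrated solar heat is available.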

Keywords: hydrogen, oxygen, thermolysis, ultraviolet

Procedia PDF Downloads 133
427 Dynamics and Advection in a Vortex Parquet on the Plane

Authors: Filimonova Alexandra

Abstract:

Inviscid incompressible fluid flows are considered. The object of the study is a vortex parquet: a structure consisting of distributed vortex spots of different directions, occupying the entire plane. The main attention is paid to the advection of passive particles in the corresponding velocity field. The dynamics of the vortex structures is considered in a rectangular region under the assumption that periodic boundary conditions are imposed on the stream function. The numerical algorithms are based on the solution of the initial-boundary value problem for the nonstationary Euler equations in terms of vorticity and stream function. For this, the spectral-vortex meshless method is used. It is based on the approximation of the stream function by a truncated Fourier series and the approximation of the vorticity field by the least-squares method from its values at marker particles. A vortex configuration consisting of four vortex patches is investigated. Results of a numerical study of the dynamics and interaction of the structure are presented. The influence of the patch radius and the relative position of positively and negatively directed patches on the processes of interaction and mixing is studied. The obtained results correspond to the following possible scenarios: the initial configuration does not change over time; the initial configuration forms a new structure, which is maintained for longer times; or the initial configuration returns to its initial state after a certain period of time. The processes of mass transfer of vorticity by liquid particles on the plane were calculated and analyzed. The results of a numerical analysis of the particle dynamics and trajectories on the entire plane and the field of local Lyapunov exponents are presented.
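The advection step can be illustrated independently of the Euler solver. Here is a minimal sketch integrating a passive tracer with classical RK4 through a steady, doubly periodic cellular field, used only as a stand-in for the actual computed vortex-parquet velocity field.

```python
import math

def velocity(x, y):
    """Steady, divergence-free cellular field u = dpsi/dy, v = -dpsi/dx
    for psi = sin(x)*sin(y): a periodic array of counter-rotating cells."""
    return math.sin(x) * math.cos(y), -math.cos(x) * math.sin(y)

def advect(x, y, dt, steps):
    """Classical 4th-order Runge-Kutta advection of a passive particle."""
    for _ in range(steps):
        k1 = velocity(x, y)
        k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
        k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
        k4 = velocity(x + dt * k3[0], y + dt * k3[1])
        x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        y += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return x, y

# In a steady flow, psi is conserved along trajectories, so a particle
# seeded inside one cell stays on a closed streamline.
x0, y0 = 1.0, 0.5
x1, y1 = advect(x0, y0, dt=0.01, steps=5000)
print(math.sin(x0) * math.sin(y0), math.sin(x1) * math.sin(y1))
```

In the unsteady parquet of the abstract the velocity field changes each step, streamlines and trajectories no longer coincide, and diagnostics such as local Lyapunov exponents are computed from ensembles of such trajectories.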

Keywords: ideal fluid, meshless methods, vortex structures in liquids, vortex parquet

Procedia PDF Downloads 64
426 A Rapid Prototyping Tool for Suspended Biofilm Growth Media

Authors: Erifyli Tsagkari, Stephanie Connelly, Zhaowei Liu, Andrew McBride, William Sloan

Abstract:

Biofilms play an essential role in treating water in biofiltration systems. Biofilm morphology and function are inextricably linked to the hydrodynamics of flow through a filter, and yet engineers rarely explicitly engineer this interaction. We develop a system that links computer simulation and 3-D printing to rapidly prototype filter media that optimize biofilm function, under the hypothesis that biofilm function is intimately linked to the flow passing through the filter. A computational model is developed that numerically solves the incompressible time-dependent Navier-Stokes equations coupled to a model for biofilm growth and function. The model is embedded in an optimization algorithm that allows the model domain to adapt until criteria on biofilm functioning are met. This is applied to optimize the shape of filter media in a simple flow channel so as to promote biofilm formation. The computer code links directly to a 3-D printer, which allows us to prototype the design rapidly. Its validity is tested in flow visualization experiments and by microscopy. As proof of concept, the code was constrained to explore a small range of potential filter media, in which the medium acts as an obstacle in the flow that sheds a von Karman vortex street; this was found to enhance the deposition of bacteria on surfaces downstream. The flow visualization and microscopy in the 3-D printed realization of the flow channel validated the predictions of the model and hence its potential as a design tool. Overall, it is shown that the combination of our computational model and 3-D printing can be used effectively as a design tool to prototype filter media that optimize biofilm formation.
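The adapt-until-criteria-met loop can be sketched as a generic shape search. The biofilm score below is a hypothetical placeholder standing in for the coupled Navier-Stokes/growth simulation, which is far too heavy to reproduce here; only the accept-if-improved structure of the optimization is illustrated.

```python
import random

def biofilm_score(radius):
    """Placeholder for the coupled CFD/biofilm evaluation: in the real
    system this would solve the flow and growth model and return a
    measure of biofilm formation for an obstacle of the given radius.
    A hypothetical smooth function with an interior optimum stands in."""
    return -(radius - 0.3) ** 2

def optimize_medium(radius=0.1, step=0.05, iters=200, seed=0):
    """Random-search shape adaptation: perturb the design parameter and
    keep the perturbation whenever the biofilm criterion improves."""
    rng = random.Random(seed)
    best = biofilm_score(radius)
    for _ in range(iters):
        cand = max(0.01, radius + rng.uniform(-step, step))
        s = biofilm_score(cand)
        if s > best:
            radius, best = cand, s
    return radius  # geometry that would be sent to the 3-D printer

print(optimize_medium())  # converges near the stand-in optimum of 0.3
```

With the placeholder replaced by the actual solver, the returned geometry parameter is exactly what the abstract's pipeline exports to the printer for flow visualization and microscopy validation.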

Keywords: biofilm, biofilter, computational model, von Karman vortices, 3-D printing

Procedia PDF Downloads 143
425 Modelling of a Biomechanical Vertebral System for Seat Ejection in Aircraft Using a Lumped Mass Approach

Authors: R. Unnikrishnan, K. Shankar

Abstract:

In high-speed fighter aircraft, seat ejection is designed mainly for the safety of the pilot in case of an emergency. Strong windblast due to the high flight velocity is one main difficulty in clearing the tail of the aircraft, and the excessive G-forces generated can immobilize the pilot and prevent escape. In most cases, seats are ejected out of the aircraft by explosives or by rocket motors attached to the bottom of the seat. Ejection forces act primarily in the vertical direction, with the objective of attaining the maximum possible velocity in a specified period of time. The safe ejection parameters are studied to estimate the critical time of ejection for various flight geometries and velocities. An equivalent analytical 2-dimensional biomechanical model of the human spine, consisting of vertebrae and intervertebral discs, has been built with a lumped mass approach. The 24 vertebrae of the cervical, thoracic and lumbar regions, together with the head mass and the pelvis, are modelled as 26 rigid structures, and the intervertebral discs are treated as 25 flexible joints. The rigid structures are modelled as mass elements and the flexible joints as spring and damper elements. Motion is restricted to the mid-sagittal plane, forming a 26-degree-of-freedom system. The equations of motion are derived for the translational movement of the spinal column. An ejection force with a linearly increasing acceleration profile is applied as vertical base excitation to the pelvis, and the dynamic vibrational response of each vertebra is estimated in the time domain.
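The lumped mass approach can be sketched with a short chain of masses joined by spring-damper joints and driven by a ramped base acceleration. The 3-mass chain and its uniform parameters below are illustrative stand-ins for the 26-body, vertebra-specific model; gravity is neglected.

```python
def simulate_chain(n, m, k, c, dt, steps, base_accel):
    """Semi-implicit Euler integration of a vertical chain of n lumped
    masses linked by spring-damper joints, driven from below (the
    pelvis) by a prescribed base acceleration profile."""
    x = [0.0] * n          # mass displacements
    v = [0.0] * n          # mass velocities
    xb, vb = 0.0, 0.0      # base displacement and velocity
    history = []
    for s in range(steps):
        vb += base_accel(s * dt) * dt
        xb += vb * dt
        acc = [0.0] * n
        for i in range(n):
            lo_x = x[i - 1] if i > 0 else xb  # element below mass i
            lo_v = v[i - 1] if i > 0 else vb
            f = k * (lo_x - x[i]) + c * (lo_v - v[i])
            if i + 1 < n:  # reaction from the element above
                f += k * (x[i + 1] - x[i]) + c * (v[i + 1] - v[i])
            acc[i] = f / m
        for i in range(n):
            v[i] += acc[i] * dt
            x[i] += v[i] * dt
        history.append(x[-1])  # displacement of the top mass (head)
    return history

# Linearly increasing ejection acceleration (ramp), as in the abstract
head = simulate_chain(n=3, m=5.0, k=2.0e4, c=200.0, dt=1e-4, steps=5000,
                      base_accel=lambda t: 1000.0 * t)
print(head[-1])  # head displacement after 0.5 s of ramped thrust
```

The head trails the base by the quasi-static spring compressions, which is the simplest version of the spinal-lag effect the full 26-DOF model quantifies vertebra by vertebra.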

Keywords: biomechanical model, lumped mass, seat ejection, vibrational response

Procedia PDF Downloads 231
424 A View from inside: Case Study of Social Economy Actors in Croatia

Authors: Drazen Simlesa, Jelena Pudjak, Anita Tonkovic Busljeta

Abstract:

Regarding the social economy (SE), Croatia is generally considered an ex-communist country with a good tradition but poor performance in the second half of the 20th century because of political control of the business sector, which in the transition period (1990-1999) became a problem of neglect at the public administration (policy) level. Today, the social economy in Croatia is trying to catch up with other EU states on all important levels of the SE sector: the legislative and institutional framework, financial infrastructure, education and capacity building, and visibility. All four are integral parts of the Strategy for the Development of Social Entrepreneurship in the Republic of Croatia for the period 2015-2020. Within the iPRESENT project, funded by the Croatian Science Foundation, we have mapped social economy actors, and after many years there is a clear and up-to-date social economy database. At the ICSE 2016 we will present the main outcomes and results of this process. In the second year of the project we conducted field research across Croatia, carrying out 19 focus groups with the most influential, innovative and inspirational social economy actors. We divided the interview questions into four themes: laws on the social economy and public policies; the definition/ideology of the social economy and cooperation on the SE scene; the level of democracy and working conditions; and motivation and the existence of intrinsic values. The data gathered through the focus group interviews have been analysed via qualitative data analysis software (Atlas.ti). The major findings that will be presented at ICSA 2016 are the following. Social economy actors are mostly unsatisfied with the legislative and institutional framework in Croatia and consider it unsupportive and confusing. Social economy actors consider the SE to be in line with the WISE model and a tool for community development.
The SE actors that are more active express satisfaction with cooperation among SE actors and other partners and stakeholders, while those in more spatially isolated conditions express a need for more cooperation and networking. Social economy actors praised the democratic atmosphere in their organisations and the fair working conditions. Finally, they expressed high motivation to continue working in the social economy and are dedicated to the concept, including even those that were initially interested only in getting a quick job. This means we can detect intrinsic values among employees in social economy organisations. This research enabled us to describe, for the first time in Croatia, the view from the inside: the attitudes and opinions of employees of social economy organisations.

Keywords: employees, focus groups, mapping, social economy

Procedia PDF Downloads 254
423 Optimal Power Distribution and Power Trading Control among Loads in a Smart Grid Operated Industry

Authors: Vivek Upadhayay, Siddharth Deshmukh

Abstract:

In recent years, the utilization of renewable energy sources has increased greatly, largely because of growing concerns over global warming. Organizations these days commonly operate small-scale microgrids or smart grids. Power optimization and optimal load tripping are possible in a smart grid based industry. In any plant or industry, loads can be divided into different categories based on their importance to the plant and their power requirement pattern over the working days. Dividing loads into such categories and applying a separate power management algorithm to each category can reduce the power cost and help balance the stability and reliability of the supply. An objective function is defined over the variables to be minimized. Constraint equations are formed from the difference between the power usage pattern of the present day and that of the same day of the previous week. With the objectives of minimal load tripping and optimal power distribution, the proposed formulation is a multi-objective optimization problem. Through normalization of each objective function, the multi-objective optimization is transformed into a single-objective optimization. As a result, optimized values of the power required by each load for the present day are obtained from the recorded values for the same day of the previous week; this is essentially demand-response scheduling of power. These minimized values are then distributed to each load through an algorithm that optimizes the power distribution at a greater depth. When stored power exceeds the power requirement, a profit can be made by selling the excess power to the main grid.
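The normalization step that collapses the multiple objectives into a single scalar can be sketched as follows; the objective names, bounds, and weights here are hypothetical placeholders, not values from the study:

```python
# Sketch: normalizing two objectives (tripped load, distribution deviation)
# into one scalar via a weighted sum. The worst/best bounds and the
# weights below are hypothetical, for illustration only.

def normalize(value, worst, best):
    """Map an objective value onto [0, 1], where 0 is best and 1 is worst."""
    return (value - best) / (worst - best)

def combined_objective(tripped_load_kw, deviation_kw, w_trip=0.6, w_dev=0.4):
    # Hypothetical worst/best bounds for each objective.
    f1 = normalize(tripped_load_kw, worst=100.0, best=0.0)
    f2 = normalize(deviation_kw, worst=50.0, best=0.0)
    return w_trip * f1 + w_dev * f2

print(combined_objective(20.0, 10.0))  # single scalar to minimize
```

Once both objectives share the [0, 1] scale, any single-objective solver can be applied to the weighted sum.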

Keywords: power flow optimization, power trading enhancement, smart grid, multi-objective optimization

Procedia PDF Downloads 525
422 Multi-Objective Optimization of the Thermal-Hydraulic Behavior for a Sodium Fast Reactor with a Gas Power Conversion System and a Loss of Off-Site Power Simulation

Authors: Avent Grange, Frederic Bertrand, Jean-Baptiste Droin, Amandine Marrel, Jean-Henry Ferrasse, Olivier Boutin

Abstract:

CEA and its industrial partners are designing a gas Power Conversion System (PCS) based on a Brayton cycle for the ASTRID Sodium-cooled Fast Reactor. Investigations of the control and regulation requirements to operate this PCS during operating, incidental and accidental transients are necessary to adapt core heat removal. To this aim, we developed a methodology to optimize the thermal-hydraulic behavior of the reactor during normal operations, incidents and accidents. This methodology consists of a multi-objective optimization for a specific sequence, whose aim is to increase component lifetime by simultaneously reducing several thermal stresses and to bring the reactor into a stable state. Furthermore, the multi-objective optimization complies with safety and operating constraints. Operating, incidental and accidental sequences use specific regulations to control the thermal-hydraulic behavior of the reactor, each of which is defined by a setpoint, a controller and an actuator. In the multi-objective problem, the parameters used to solve the optimization are the setpoints and the settings of the controllers associated with the regulations included in the sequence. In this way, the methodology allows designers to define an optimized control strategy specific to the studied sequence and hence to adapt PCS piloting at its best. The multi-objective optimization is performed by evolutionary algorithms coupled to surrogate models built on variables computed by the thermal-hydraulic system code CATHARE2. The methodology is applied to a loss of off-site power sequence. Three variables are controlled: the sodium outlet temperature of the sodium-gas heat exchanger, the turbomachine rotational speed and the water flow through the heat sink. These regulations are chosen in order to minimize thermal stresses on the gas-gas heat exchanger, on the sodium-gas heat exchanger and on the vessel. The main results of this work are optimal setpoints for the three regulations.
Moreover, Proportional-Integral-Derivative (PID) controller settings are considered, and efficient actuators for the controls are chosen on the basis of sensitivity analysis results. Finally, the optimized regulation system and the reactor control procedure provided by the optimization process are verified through a direct CATHARE2 calculation.
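The outer optimization loop can be illustrated with a deliberately simplified sketch: a (1+1)-style evolutionary search over a single regulation setpoint, evaluated on a cheap stand-in surrogate rather than CATHARE2 or the actual surrogate models of the study. All numbers are invented for illustration:

```python
import random

# Minimal sketch of the optimization loop: an evolutionary search over one
# regulation setpoint, evaluated on a cheap surrogate instead of the
# CATHARE2 system code. The quadratic surrogate below is a stand-in, not
# a real thermal-hydraulic model; bounds and step sizes are invented.

def surrogate_thermal_stress(setpoint):
    # Hypothetical surrogate: stress is minimal at setpoint = 350.0.
    return (setpoint - 350.0) ** 2 + 10.0

def evolve(lo=300.0, hi=400.0, generations=200, seed=1):
    rng = random.Random(seed)
    best = rng.uniform(lo, hi)
    best_cost = surrogate_thermal_stress(best)
    for _ in range(generations):
        cand = min(hi, max(lo, best + rng.gauss(0.0, 5.0)))  # mutate
        cost = surrogate_thermal_stress(cand)
        if cost < best_cost:            # (1+1)-ES style selection
            best, best_cost = cand, cost
    return best, best_cost

best, cost = evolve()
print(best, cost)
```

In the real methodology the expensive system-code runs are replaced by trained surrogate models, which is what makes this many-evaluation loop affordable.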

Keywords: gas power conversion system, loss of off-site power, multi-objective optimization, regulation, sodium fast reactor, surrogate model

Procedia PDF Downloads 309
421 Modeling of Drug Distribution in the Human Vitreous

Authors: Judith Stein, Elfriede Friedmann

Abstract:

The injection of a drug into the vitreous body for the treatment of retinal diseases like wet age-related macular degeneration (AMD) is among the most common medical interventions worldwide. We develop mathematical models for drug transport in the vitreous body of a human eye to analyse the impact of different rheological models of the vitreous on drug distribution. In addition to the convection-diffusion equation characterizing the drug spreading, we use porous media modeling for the healthy vitreous with its dense collagen network and include the steady permeating flow of the aqueous humor, described by Darcy's law and driven by a pressure drop. The vitreous body in a healthy human eye behaves like a viscoelastic gel through the collagen fibers suspended in a network of hyaluronic acid and acts as a drug depot for the treatment of retinal diseases. In a completely liquefied vitreous, we couple the drug diffusion with the classical Navier-Stokes flow equations. We prove the global existence and uniqueness of the weak solution of the developed initial-boundary value problem describing the drug distribution in the healthy vitreous, considering the permeating aqueous humor flow, in a realistic three-dimensional setting. In particular, for the drug diffusion equation, results from the literature are extended from homogeneous Dirichlet boundary conditions to our mixed boundary conditions that describe the eye, using Galerkin's method together with the Cauchy-Schwarz inequality and the trace theorem. Because there is only a small effective drug concentration range and higher concentrations may be toxic, the ability to model the drug transport could improve therapy by accounting for individual patient differences and give a better understanding of the physiological and pathological processes in the vitreous.
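As a rough illustration of the transport model's core, the following sketch integrates a 1-D explicit finite-difference analogue of the convection-diffusion equation c_t + v c_x = D c_xx; the diffusivity, permeation velocity, and grid are assumed values, not parameters of the actual 3-D eye model:

```python
# Illustrative 1-D explicit finite-difference scheme for the
# convection-diffusion equation c_t + v c_x = D c_xx, a drastically
# simplified analogue of the 3-D drug transport model. All parameter
# values (D, v, grid) are hypothetical.

N, L = 101, 1.0              # grid points, domain length
dx = L / (N - 1)
D, v = 1e-3, 5e-4            # diffusivity, permeation velocity (assumed)
dt = 0.4 * dx * dx / D       # explicit time step, within stability limit
c = [0.0] * N
c[N // 2] = 1.0              # initial drug bolus at the injection site

for _ in range(500):
    new = c[:]
    for i in range(1, N - 1):
        diff = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2
        conv = -v * (c[i + 1] - c[i - 1]) / (2 * dx)
        new[i] = c[i] + dt * (diff + conv)
    c = new                   # Dirichlet c = 0 kept at both boundaries

print(max(c))                 # peak concentration decays as the drug spreads
```

The peak decays while the total drug mass is nearly conserved until the plume reaches the absorbing boundaries, mirroring the depot behavior described above.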

Keywords: coupled PDE systems, drug diffusion, mixed boundary conditions, vitreous body

Procedia PDF Downloads 137
420 Study on Adding Story and Seismic Strengthening of Old Masonry Buildings

Authors: Youlu Huang, Huanjun Jiang

Abstract:

A large number of old masonry buildings built in the last century still remain in cities, raising problems of poor safety, obsolescence, and non-habitability. In recent years, many old buildings have been reconstructed by renovating façades, strengthening, and adding floors. However, most projects address only a single problem, and it is difficult to comprehensively solve both poor safety and the lack of building functions. Therefore, a comprehensive functional renovation program was put forward: adding a reinforced concrete frame story at the bottom by integrally lifting the building, and then strengthening the building. Based on field measurements and the YJK calculation software, the seismic performance of an actual three-story masonry structure in Shanghai was identified. The results show that the material strength of the masonry is low, and the bearing capacity of some masonry walls does not meet the code requirements. An elastoplastic time history analysis of the structure was carried out using SAP2000 software. The results show that under the rare earthquake of intensity 7, the structure reaches the 'serious damage' performance level. Based on the code requirements for the stiffness ratio of the bottom frame (the lateral stiffness ratio of the transition masonry story to the frame story), the bottom frame story was designed. The integral lifting process of the masonry building is introduced with reference to many engineering examples. Strengthening methods for the bottom frame structure using a steel-reinforced mesh mortar surface layer (SRMM) and base isolators, respectively, were proposed. Time history analyses of the two kinds of structures under the frequent, fortification, and rare earthquakes were conducted in SAP2000.
For the bottom frame structure, the results show that the seismic response of the masonry floors is significantly reduced by both strengthening methods compared to the original masonry structure. Previous earthquake disasters indicate that the bottom frame is vulnerable to serious damage under a strong earthquake. The analysis results show that under the rare earthquake, the inter-story displacement angle of the bottom frame floor meets the 1/100 limit value of the seismic code. The inter-story drift of the masonry floors of the base-isolated structure under different levels of earthquake is similar to that of the structure with SRMM, while the base-isolated scheme better protects the bottom frame. Both strengthening methods can significantly improve the seismic performance of the bottom frame structure.

Keywords: old buildings, adding story, seismic strengthening, seismic performance

Procedia PDF Downloads 123
419 On the Solution of Boundary Value Problems Blended with Hybrid Block Methods

Authors: Kizito Ugochukwu Nwajeri

Abstract:

This paper explores the application of hybrid block methods for solving boundary value problems (BVPs), which are prevalent in various fields such as science, engineering, and applied mathematics. Traditional numerical approaches, such as finite difference and shooting methods, often encounter challenges related to stability and convergence, particularly in the context of complex and nonlinear BVPs. To address these challenges, we propose a hybrid block method that integrates features from both single-step and multi-step techniques. This method allows for the simultaneous computation of multiple solution points while maintaining high accuracy. Specifically, we employ a combination of polynomial interpolation and collocation strategies to derive a system of equations that captures the behavior of the solution across the entire domain. By directly incorporating boundary conditions into the formulation, we enhance the stability and convergence properties of the numerical solution. Furthermore, we introduce an adaptive step-size mechanism to optimize performance based on the local behavior of the solution. This adjustment allows the method to respond effectively to variations in solution behavior, improving both accuracy and computational efficiency. Numerical tests on a variety of boundary value problems demonstrate the effectiveness of the hybrid block methods. These tests showcase significant improvements in accuracy and computational efficiency compared to conventional methods, indicating that our approach is robust and versatile. The results suggest that this hybrid block method is suitable for a wide range of applications in real-world problems, offering a promising alternative to existing numerical techniques.
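For orientation, the following sketch shows the kind of conventional finite-difference scheme the hybrid block method is compared against (it is not the hybrid block method itself): it solves the linear test problem y'' = 6x, y(0) = 0, y(1) = 1, whose exact solution is y = x^3, via a tridiagonal (Thomas) solve:

```python
# Baseline sketch: classical second-order finite differences for the BVP
# y'' = 6x, y(0) = 0, y(1) = 1, with exact solution y = x**3. This is a
# conventional method of the kind the hybrid block approach is compared
# against, not the hybrid block method itself.

def solve_bvp_fd(n=50):
    h = 1.0 / n
    # Interior unknowns y_1..y_{n-1}; tridiagonal system from
    # (y[i-1] - 2*y[i] + y[i+1]) / h**2 = 6 * x_i.
    a = [1.0] * (n - 1)                 # sub-diagonal
    b = [-2.0] * (n - 1)                # main diagonal
    cdiag = [1.0] * (n - 1)             # super-diagonal
    d = [6.0 * (i * h) * h * h for i in range(1, n)]
    d[-1] -= 1.0                        # apply boundary value y(1) = 1
    # Thomas algorithm: forward sweep, then back substitution.
    for i in range(1, n - 1):
        m = a[i] / b[i - 1]
        b[i] -= m * cdiag[i - 1]
        d[i] -= m * d[i - 1]
    y = [0.0] * (n - 1)
    y[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        y[i] = (d[i] - cdiag[i] * y[i + 1]) / b[i]
    return [0.0] + y + [1.0]

y = solve_bvp_fd()
print(y[25])   # node at x = 0.5; exact value is 0.5**3 = 0.125
```

For this cubic test problem the central difference of y'' is exact, so the scheme reproduces the exact solution at the nodes; on stiffer nonlinear BVPs, this is where conventional schemes run into the stability and convergence issues the paper discusses.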

Keywords: hybrid block methods, boundary value problem, polynomial interpolation, adaptive step-size control, collocation methods

Procedia PDF Downloads 36
418 Probabilistic Analysis of Bearing Capacity of Isolated Footing using Monte Carlo Simulation

Authors: Sameer Jung Karki, Gokhan Saygili

Abstract:

The allowable bearing capacity of foundation systems is determined by applying a factor of safety to the ultimate bearing capacity. Conventional ultimate bearing capacity calculation routines are based on deterministic input parameters, in which the nonuniformity and inhomogeneity of soil and site properties are not accounted for; hence, probability calculus and statistical analysis are not directly applied in foundation engineering. It is assumed that the factor of safety, typically as high as 3.0, incorporates the uncertainty of the input parameters, but this factor of safety is estimated from subjective judgement rather than objective facts and is an ambiguous term. Hence, a probabilistic analysis of the bearing capacity of an isolated footing on a clayey soil was carried out using the Monte Carlo simulation method. The simulated model was compared with the traditional discrete model, and the bearing capacity of the soil was found to be higher for the simulated model than for the discrete model. This was verified through a sensitivity analysis: as the number of simulations was increased, there was a significant percentage increase of the bearing capacity compared with the discrete bearing capacity. The bearing capacity values obtained by simulation were found to follow a normal distribution. Using the traditional factor of safety of 3, the allowable bearing capacity had a lower probability (0.03717) of being available in the field, compared to a higher probability (0.15866) when using the simulation-derived factor of safety of 1.5. This means the traditional factor of safety yields a bearing capacity that is less likely to be available in the field. This shows the subjective nature of the factor of safety; hence a probabilistic method is suggested to address the variability of the input parameters in bearing capacity equations.
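The Monte Carlo procedure can be sketched as follows for the phi = 0 (undrained clay) case, q_ult = N_c c_u + gamma D with N_c = 5.14; the mean strength, coefficient of variation, and footing geometry are hypothetical inputs, not the study's data:

```python
import random
import statistics

# Sketch: Monte Carlo estimate of ultimate bearing capacity for a footing
# on clay, treating undrained shear strength c_u as a random variable.
# Uses the phi = 0 form q_ult = N_c * c_u + gamma * D with N_c = 5.14;
# the mean, coefficient of variation, and geometry below are hypothetical.

def simulate(n=20000, seed=42):
    rng = random.Random(seed)
    cu_mean, cu_cov = 50.0, 0.20        # kPa, coefficient of variation
    gamma, depth = 18.0, 1.5            # kN/m3, m (surcharge term)
    samples = []
    for _ in range(n):
        cu = rng.gauss(cu_mean, cu_cov * cu_mean)
        samples.append(5.14 * cu + gamma * depth)
    return samples

q = simulate()
mean_q = statistics.mean(q)
std_q = statistics.stdev(q)
# Probability that the capacity falls below the allowable value q_ult / FS:
fs = 3.0
p_below = sum(1 for x in q if x < mean_q / fs) / len(q)
print(mean_q, std_q, p_below)
```

From the sampled distribution one can read off the probability that a chosen allowable capacity is actually available, which is the quantity the abstract compares for the two factors of safety.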

Keywords: bearing capacity, factor of safety, isolated footing, Monte Carlo simulation

Procedia PDF Downloads 187
417 Internet Protocol Television: A Research Study with Undergraduate Students to Analyze the Effects

Authors: Sabri Serkan Gulluoglu

Abstract:

The study is aimed at examining the effects of Internet marketing with IPTV on consumers. Internet marketing with IPTV is emerging as an integral part of business strategies in today's technologically advanced world, and business activities all over the world are influenced by the emergence of this modern marketing tool. As the population of Internet and online users increases, new research issues have arisen concerning the demographics and psychographics of the online user and the opportunities for a product or service. In recent years, we have seen a tendency of various services converging to ubiquitous Internet Protocol based networks. Besides traditional Internet applications such as web browsing, email, and file transfer, new applications have been developed to replace old communication networks. IPTV is one of these solutions. In the future, we expect a single network, the IP network, to provide services that are carried by different networks today. To identify important effects of a video-based e-commerce website, we applied a questionnaire to university students. Recent research shows that in Turkey, people aged 20 to 24 use the Internet when they buy electronic devices such as cell phones and computers. The questionnaire contains ten categorized questions to evaluate the effects of IPTV on shopping. Thirty students were selected to fill in the questionnaire after watching an IPTV channel video for 10 minutes. The sample IPTV channel was "buy.com", which resembles an e-commerce site with an integrated IPTV channel. The questionnaire for the survey is constructed using the Likert scale, a bipolar scaling method used to measure positive or negative responses to a statement (Likert, R.) that is commonly used in surveys.
Following the Likert scale, "the respondents are asked to indicate their degree of agreement with the statement or any kind of subjective or objective evaluation of the statement. Traditionally a five-point scale is used under this methodology". For this study the five-point scale was also used, and the respondents were asked to express their opinions about each statement by picking an answer from five options: "Strongly disagree, Disagree, Neither agree nor disagree, Agree and Strongly agree". These responses were rated from 1 to 5 (Strongly disagree, Disagree, Neither disagree nor agree, Agree, Strongly agree). On the basis of the data gathered from the questionnaire, results are drawn and presented as figures and graphs that demonstrate the outcomes of the research clearly.

Keywords: IPTV, internet marketing, online, e-commerce, video-based technology

Procedia PDF Downloads 241
416 Fluidised Bed Gasification of Multiple Agricultural Biomass-Derived Briquettes

Authors: Rukayya Ibrahim Muazu, Aiduan Li Borrion, Julia A. Stegemann

Abstract:

Biomass briquette gasification is regarded as a promising route for efficient briquette use in the generation of energy, fuels, and other useful chemicals; however, previous research has focused on briquette gasification in fixed-bed gasifiers such as updraft and downdraft gasifiers. Fluidised bed gasifiers have the potential to be effectively sized for medium or large scales. This study investigated the use of fuel briquettes produced from blends of rice husk and corn cob biomass residues in a bubbling fluidised bed gasifier. The study adopted a combination of numerical equations and the Aspen Plus simulation software to predict the product gas (syngas) composition based on briquette density and biomass composition (the blend ratio of rice husks to corn cobs). The Aspen Plus model was based on an experimentally validated model from the literature. The results, based on a briquette size of 32 mm diameter and a relaxed density range of 500 to 650 kg/m3, indicated that the fluidisation air required in the gasifier increased with briquette density, and the fluidisation air proved to be the controlling factor compared with the actual air required for gasification of the biomass briquettes. The mass flow rate of CO2 in the predicted syngas composition increased with the air flow rate, while CO production decreased and H2 was almost constant. The H2/CO ratio for the various blends of rice husks and corn cobs did not change significantly at the design process air flow, but a significant difference of 1.0 in the H2/CO ratio was observed at higher air flow rates between the 10/90 and 90/10 blend ratios of rice husks to corn cobs. This implies the need for further understanding of the effects of biomass variability and hydrodynamic parameters on syngas composition in biomass briquette gasification.
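One hydrodynamic quantity behind the fluidisation air requirement is the minimum fluidisation velocity, which can be estimated from the Wen and Yu (1966) correlation; the sketch below uses illustrative particle and gas properties, not the study's briquette data:

```python
import math

# Sketch: estimating minimum fluidisation velocity for a bubbling bed
# using the Wen and Yu (1966) correlation,
#   Re_mf = sqrt(33.7**2 + 0.0408 * Ar) - 33.7.
# The particle and gas properties below are illustrative placeholders.

def u_mf(d_p, rho_p, rho_g=1.2, mu_g=1.8e-5, g=9.81):
    """Minimum fluidisation velocity [m/s] from the Wen-Yu correlation."""
    ar = rho_g * (rho_p - rho_g) * g * d_p**3 / mu_g**2   # Archimedes number
    re_mf = math.sqrt(33.7**2 + 0.0408 * ar) - 33.7
    return re_mf * mu_g / (rho_g * d_p)

# Hypothetical bed material (e.g. sand supporting the briquette fuel):
print(u_mf(d_p=500e-6, rho_p=2600.0))
```

Correlations of this kind feed the hydrodynamic side of the model; denser briquettes or coarser bed material push the required fluidisation air upward, consistent with the trend reported above.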

Keywords: aspen plus, briquettes, fluidised bed, gasification, syngas

Procedia PDF Downloads 458
415 Numerical Modeling of Air Shock Wave Generated by Explosive Detonation and Dynamic Response of Structures

Authors: Michał Lidner, Zbigniew Szcześniak

Abstract:

The ability to properly estimate blast load overpressure plays an important role in the safety design of buildings. The problem of blast loading on structural elements has been explored for many years. However, in many literature reports the shock wave overpressure is estimated with a simplified triangular or exponential distribution in time, which introduces errors when comparing the real and numerical responses of elements. Nonetheless, it is possible to better approximate the real blast load overpressure as a function of time. The paper presents a method for the numerical analysis of air shock wave propagation. It uses the Finite Volume Method and takes into account energy losses due to heat transfer relative to the adiabatic process assumption. A system of three equations (conservation of mass, momentum, and energy) describes the flow of a volume of gaseous medium in regions remote from building compartments that could inhibit the movement of gas. For validation, three cases of shock wave flow were analyzed: a free-field explosion, an explosion inside a non-deformable steel tube (the 1D case), and an explosion inside a non-deformable cube (the 3D case). The results of the numerical analysis were compared with literature reports; values of impulse, pressure, and pressure duration were studied. Overall, good convergence of the numerical results with experiments was achieved, and the most important parameters were well reflected. Additionally, the dynamic response of one of the considered structural elements was analysed.
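The finite-volume idea of updating conserved quantities from face fluxes can be illustrated in 1-D with a first-order upwind scheme for linear advection, standing in for the full Euler system of the blast model; the grid and wave speed are illustrative:

```python
# Sketch of the finite-volume idea in 1-D: a conserved quantity is updated
# cell by cell from fluxes at the cell faces. A linear advection equation
# with a first-order upwind flux stands in for the full Euler equations of
# the blast wave model; the wave speed and grid are illustrative.

N, dx, a = 200, 0.01, 1.0          # cells, cell size, wave speed
dt = 0.5 * dx / a                   # CFL-stable time step
u = [1.0 if 40 <= i < 60 else 0.0 for i in range(N)]  # square pulse

total_before = sum(u) * dx
for _ in range(100):
    flux = [a * u[i - 1] for i in range(N)]   # upwind face flux (periodic)
    u = [u[i] - dt / dx * (a * u[i] - flux[i]) for i in range(N)]
total_after = sum(u) * dx

print(total_before, total_after)   # conservation: totals match
```

Because each face flux leaves one cell and enters its neighbour, the totals telescope and the scheme conserves the transported quantity exactly, which is the property that makes finite-volume methods attractive for shock problems.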

Keywords: adiabatic process, air shock wave, explosive, finite volume method

Procedia PDF Downloads 193
414 When the Lights Go Down in the Delivery Room: Lessons From a Ransomware Attack

Authors: Rinat Gabbay-Benziv, Merav Ben-Natan, Ariel Roguin, Benyamine Abbou, Anna Ofir, Adi Klein, Dikla Dahan-Shriki, Mordechai Hallak, Boris Kessel, Mickey Dudkiewicz

Abstract:

Introduction: Over recent decades, technology has become integral to healthcare, with electronic health records and advanced medical equipment now standard. However, this reliance has made healthcare systems increasingly vulnerable to ransomware attacks. On October 13, 2021, Hillel Yaffe Medical Center experienced a severe ransomware attack that disrupted all IT systems, including electronic health records, laboratory services, and staff communications. The attack, carried out by the group DeepBlueMagic, utilized advanced encryption to lock the hospital's systems and demanded a ransom. This incident caused significant operational and patient care challenges, particularly impacting the obstetrics department. Objective: The objective is to describe the challenges facing the obstetric division following a cyberattack and discuss ways of preparing for and overcoming another one. Methods: A retrospective descriptive study was conducted in a mid-sized medical center. Division activities, including the number of deliveries, cesarean sections, emergency room visits, admissions, maternal-fetal medicine department occupancy, and ambulatory encounters, from 2 weeks before the attack to 8 weeks following it (a total of 11 weeks), were compared with the retrospective period in 2019 (pre-COVID-19). In addition, we present the challenges and adaptation measures taken at the division and hospital levels leading up to the resumption of full division activity. Results: On the day of the cyberattack, critical decisions were made. The media announced the event, calling on patients not to come to our hospital. Also, all elective activities other than cesarean deliveries were stopped. The number of deliveries, admissions, and both emergency room and ambulatory clinic visits decreased by 5%–10% overall for 11 weeks, reflecting the decrease in division activity. 
Nevertheless, at all stations there were sufficient adaptation measures in place to ensure patient safety, sound decision-making, and proper patient workflow. Conclusions: The risk of ransomware cyberattacks is growing. Healthcare systems at all levels should recognize this threat and have protocols for dealing with such attacks once they occur.

Keywords: ransomware attack, healthcare cybersecurity, obstetrics challenges, IT system disruption

Procedia PDF Downloads 28
413 Free Will and Compatibilism in Decision Theory: A Solution to Newcomb’s Paradox

Authors: Sally Heyeon Hwang

Abstract:

Within decision theory, there are normative principles that dictate how one should act, in addition to empirical theories of actual behavior. As a normative guide to one's actual behavior, evidential or causal decision-theoretic equations allow one to identify outcomes with maximal utility values. The choice that each person makes will, of course, differ according to varying assignments of weight and probability values. Regarding these different choices, it remains a subject of considerable philosophical controversy whether individual subjects have the capacity to exercise free will with respect to the assignment of probabilities, or whether instead the assignment is in some way constrained. A version of this question is given a precise form in Richard Jeffrey's assumption that free will is necessary for Newcomb's paradox to count as a decision problem. This paper will argue, against Jeffrey, that decision theory does not require the assumption of libertarian freedom. One of the hallmarks of decision-making is its application across a wide variety of contexts; the implications of a background assumption of free will are similarly varied. One constant across decision contexts is that there are always at least two levels of choice for a given agent, depending on the degree of prior constraint. Within the context of Newcomb's problem, when the predictor is attempting to guess the choice the agent will make, he or she is analyzing the determined aspects of the agent, such as past characteristics, experiences, and knowledge. On the other hand, as David Lewis' backtracking argument concerning the relationship between past and present events brings to light, there are similarly varied ways in which the past can actually be dependent on the present. One implication of this argument is that even in deterministic settings, an agent can have more free will than it may seem.
This paper will thus argue against the view that a stable background assumption of free will or determinism in decision theory is necessary, arguing instead for a compatibilist decision theory yielding a novel treatment of Newcomb’s problem.
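The decision-theoretic stakes can be made concrete with the standard evidential expected-utility calculation for Newcomb's problem; the payoffs are the conventional ones and the predictor accuracies are illustrative:

```python
# Sketch: evidential expected utilities in Newcomb's problem for a
# predictor of accuracy p. Conventional payoffs: the opaque box holds
# 1,000,000 if the predictor foresaw one-boxing; the transparent box
# always holds 1,000.

def eu_one_box(p):
    return p * 1_000_000 + (1 - p) * 0

def eu_two_box(p):
    return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

for p in (0.5, 0.9, 0.99):
    print(p, eu_one_box(p), eu_two_box(p))
```

For a sufficiently accurate predictor, one-boxing dominates on the evidential calculation even though two-boxing causally dominates; that tension is precisely what makes the predictor's treatment of the agent's "determined aspects" philosophically loaded.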

Keywords: decision theory, compatibilism, free will, Newcomb’s problem

Procedia PDF Downloads 322
412 Method of Complex Estimation of Text Perusal and Indicators of Reading Quality in Different Types of Commercials

Authors: Victor N. Anisimov, Lyubov A. Boyko, Yazgul R. Almukhametova, Natalia V. Galkina, Alexander V. Latanov

Abstract:

Modern commercials presented on billboards, on TV, and on the Internet contain a lot of information about the product or service in text form. However, this information cannot always be perceived and understood by consumers. Typical sociological focus group studies often cannot reveal important features of how information read in text messages is interpreted and understood. In addition, there is no reliable method to determine the degree of understanding of the information contained in a text: the mere fact of viewing a text does not mean that the consumer has perceived and understood its meaning. At the same time, tools based on marketing analysis allow only indirect estimation of the process of reading and understanding a text. Therefore, the aim of this work is to develop a valid method of recording objective indicators in real time for assessing the fact of reading and the degree of text comprehension. Psychophysiological parameters recorded during text reading can form the basis for such an objective method. We studied the relationship between multimodal psychophysiological parameters and the process of text comprehension during reading using correlation analysis. We used eye-tracking technology to record eye movement parameters to estimate visual attention, electroencephalography (EEG) to assess cognitive load, and polygraphic indicators (skin-galvanic reaction, SGR) that reflect the emotional state of the respondent during text reading. We revealed reliable interrelations between perceiving the information and the dynamics of psychophysiological parameters during reading the text in commercials. Eye movement parameters reflected the difficulties arising in respondents when perceiving ambiguous parts of the text. EEG dynamics in the alpha band were related to the cumulative effect of cognitive load. SGR dynamics were related to the emotional state of the respondent and to the meaning of the text and the type of commercial.
EEG and polygraph parameters together also reflected the mental difficulties of respondents in understanding the text and showed significant differences between cases of low and high text comprehension. We also revealed differences in psychophysiological parameters for different types of commercials (static vs. video; financial vs. cinema vs. pharmaceutics vs. mobile communication, etc.). Conclusions: Our methodology allows multimodal evaluation of text perusal and of the quality of text reading in commercials. In general, our results indicate the possibility of designing an integral model that estimates comprehension of the commercial text on a percentage scale based on all observed markers.
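The correlation analysis step can be sketched with a plain Pearson coefficient relating a psychophysiological signal to a comprehension score across respondents; the data below are invented for illustration:

```python
import math

# Sketch: the correlation analysis step, relating a psychophysiological
# signal (e.g. mean fixation duration) to a comprehension score across
# respondents via the Pearson coefficient. The data below are invented.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

fixation_ms = [210, 250, 230, 300, 280, 260]    # hypothetical eye-tracking data
comprehension = [0.9, 0.7, 0.8, 0.4, 0.5, 0.6]  # hypothetical test scores
print(pearson_r(fixation_ms, comprehension))     # strong negative correlation
```

In this invented example longer fixations go with lower comprehension, the kind of reliable interrelation the method looks for across eye-movement, EEG, and SGR channels.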

Keywords: reading, commercials, eye movements, EEG, polygraphic indicators

Procedia PDF Downloads 166
411 Non-Destructive Static Damage Detection of Structures Using Genetic Algorithm

Authors: Amir Abbas Fatemi, Zahra Tabrizian, Kabir Sadeghi

Abstract:

To find the location and severity of damage that occurs in a structure, changes in its dynamic and static characteristics can be used. Non-destructive techniques are common, economical, and reliable means of detecting global or local damage in structures. This paper presents a non-destructive method for structural damage detection and assessment using a genetic algorithm (GA) and static data. A set of static forces is applied to some degrees of freedom (DOFs), and the static responses (displacements) are measured at another set of DOFs. An analytical model of the truss structure is developed based on the available specification and the properties derived from the static data. Damage in a structure changes its stiffness, so the method determines damage from changes in the structural stiffness parameters. Changes in the static response caused by structural damage are used to form a set of simultaneous equations. Genetic algorithms are powerful tools for solving large optimization problems. The optimization minimizes an objective function involving the difference between the static load vectors of the damaged and healthy structures. Several scenarios are defined for damage detection (single and multiple damage). Static damage identification methods have many advantages, but some difficulties still exist, so it is important to achieve the best damage identification; obtaining the best result indicates that the method is reliable. The strategy is applied to a plane truss. Numerical results demonstrate the ability of this method to detect damage in the given structures, and the figures show that damage detection in multiple-damage scenarios is also efficient. Even the presence of noise in the measurements does not substantially reduce the accuracy of the damage detection method for these structures.
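A toy version of the identification loop can be sketched as follows: a genetic algorithm recovers damage factors for two springs in series from synthetic "measured" static displacements. The model and all numbers are illustrative, far simpler than the plane truss of the paper:

```python
import random

# Toy sketch of GA-based damage identification: two springs in series
# carry a static load P; damage reduces each stiffness to K * (1 - d).
# A genetic algorithm searches for the damage factors (d1, d2) that
# reproduce the synthetic "measured" displacements. All numbers are
# illustrative, not from the paper's truss model.

K, P = 1000.0, 10.0
TRUE_D = (0.3, 0.0)                      # synthetic damage state

def displacements(d1, d2):
    u1 = P / (K * (1 - d1))
    return (u1, u1 + P / (K * (1 - d2)))

MEASURED = displacements(*TRUE_D)

def fitness(ind):
    u = displacements(*ind)
    return sum((a - b) ** 2 for a, b in zip(u, MEASURED))

def ga(pop_size=40, gens=100, seed=3):
    rng = random.Random(seed)
    pop = [(rng.random() * 0.8, rng.random() * 0.8) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]     # keep the fitter half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = tuple(
                min(0.8, max(0.0, (x + y) / 2 + rng.gauss(0.0, 0.02)))
                for x, y in zip(a, b))   # averaging crossover + mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
print(best)   # should approach the true damage state (0.3, 0.0)
```

The real method replaces the two-spring model with the truss stiffness matrix and the objective built from the static load vectors, but the search loop has the same shape.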

Keywords: damage detection, finite element method, static data, non-destructive, genetic algorithm

Procedia PDF Downloads 237
410 Vibration Control of a Horizontally Supported Rotor System by Using a Radial Active Magnetic Bearing

Authors: Vishnu A., Ashesh Saha

Abstract:

The operation of high-speed rotating machinery in industries is accompanied by rotor vibrations due to many factors. One of the primary instability mechanisms in a rotor system is the centrifugal force induced by the eccentricity of the center of mass away from the center of rotation. These unwanted vibrations may lead to catastrophic fatigue failure, so there is a need to control them. In this work, the control of rotor vibrations using a 4-pole Radial Active Magnetic Bearing (RAMB) as an actuator is analysed. A continuous rotor system model is considered for the analysis. Several important factors, like the gyroscopic effect and the rotary inertia of the shaft and disc, are incorporated into this model. The large deflection of the shaft and the restriction of the axial motion of the shaft at the bearings result in nonlinearities in the governing equation of the system. The rotor system is modeled in such a way that the system dynamics can be related to the geometric and material properties of the shaft and disc. The mathematical model of the rotor system is developed by incorporating the control forces generated by the RAMB. A simple PD controller is used for the attenuation of system vibrations. Analytical expressions for the amplitude and phase equations are derived using the Method of Multiple Scales (MMS). The analytical results are verified with numerical results obtained using an ODE solver built into MATLAB. The control force is found to be effective in attenuating the system vibrations. The multi-valued solutions leading to the jump phenomenon are also eliminated with a proper choice of control gains. Most interestingly, the shape of the backbone curves can also be altered for certain values of the control parameters.
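The effect of PD control on a resonantly forced system can be illustrated on a single-mode stand-in for the rotor (not the paper's continuous rotor model); gains and parameters are invented:

```python
import math

# Minimal sketch of PD vibration control on a single-mode stand-in for
# the rotor: m x'' + c x' + k x = F0 cos(w t) - (kp * x + kd * x'),
# integrated with a small-step semi-implicit scheme. All gains and
# parameters are illustrative, not from the paper's continuous model.

def steady_amplitude(kp, kd, steps=200000, dt=1e-4):
    m, c, k = 1.0, 0.5, 100.0
    f0, w = 5.0, 10.0                  # forcing at resonance (w_n = 10)
    x, v = 0.0, 0.0
    peak = 0.0
    for n in range(steps):
        t = n * dt
        a = (f0 * math.cos(w * t) - c * v - k * x - kp * x - kd * v) / m
        v += a * dt
        x += v * dt                    # semi-implicit Euler update
        if t > 15.0:                   # ignore the start-up transient
            peak = max(peak, abs(x))
    return peak

uncontrolled = steady_amplitude(0.0, 0.0)
controlled = steady_amplitude(50.0, 5.0)
print(uncontrolled, controlled)
```

The proportional term stiffens (detunes) the system while the derivative term adds damping, so the controlled steady-state amplitude drops well below the uncontrolled resonant response, which is the qualitative effect of the RAMB control force.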

Keywords: rotor dynamics, continuous rotor system model, active magnetic bearing, PD controller, method of multiple scales, backbone curve

Procedia PDF Downloads 80
409 [Keynote Talk]: Three Dimensional Finite Element Analysis of Functionally Graded Radiation Shielding Nanoengineered Sandwich Composites

Authors: Nasim Abuali Galehdari, Thomas J. Ryan, Ajit D. Kelkar

Abstract:

In recent years, nanotechnology has played an important role in the design of efficient radiation-shielding polymeric composites. It is well known that high loading of nanomaterials with radiation absorption properties can enhance the radiation attenuation efficiency of shielding structures. However, due to difficulties in dispersing nanomaterials into polymer matrices, higher loading percentages of nanoparticles in the polymer matrix have been limited. Therefore, the objective of the present work is to provide a methodology to fabricate and then characterize functionally graded radiation shielding structures, which can provide efficient radiation absorption along with good structural integrity. Sandwich structures composed of Ultra High Molecular Weight Polyethylene (UHMWPE) fabric as face sheets and a functionally graded epoxy nanocomposite as the core material were fabricated. A method to fabricate a functionally graded core panel with a controllable gradient dispersion of nanoparticles is discussed. In order to optimize the design of the functionally graded sandwich composites and to analyze the stress distribution throughout the sandwich composite thickness, a finite element method was used. The sandwich panels were discretized using three-dimensional 8-noded brick elements. Classical laminate analysis in conjunction with simplified micromechanics equations was used to obtain the properties of the face sheets. The presented finite element model would provide insight into the deformation and damage mechanics of functionally graded sandwich composites from a structural point of view.
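The controllable gradient dispersion idea can be sketched in a few lines: a power-law variation of nanoparticle volume fraction through the core thickness, with a simple rule-of-mixtures estimate of the local modulus. The constituent moduli, peak loading, and grading exponent below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative constituent properties (not the paper's materials)
E_epoxy, E_particle = 3.0e9, 1.0e12   # Pa
vf_max, n = 0.10, 2.0                 # peak volume fraction, grading exponent

def graded_modulus(z, h):
    """Local core modulus at through-thickness coordinate z in [0, h]."""
    vf = vf_max * (z / h) ** n        # particle loading rises toward one face
    return (1.0 - vf) * E_epoxy + vf * E_particle  # Voigt rule of mixtures

h = 10e-3                             # core thickness [m]
for z in np.linspace(0.0, h, 5):
    print(f"z = {z * 1e3:4.1f} mm  ->  E = {graded_modulus(z, h):.3e} Pa")
```

In a finite element model of the kind described, each layer of brick elements through the core would simply be assigned the modulus evaluated at its mid-plane coordinate.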

Keywords: nanotechnology, functionally graded material, radiation shielding, sandwich composites, finite element method

Procedia PDF Downloads 469
408 Sensitivity Analysis of the Thermal Properties in Early Age Modeling of Mass Concrete

Authors: Farzad Danaei, Yilmaz Akkaya

Abstract:

In many civil engineering applications, especially in the construction of large concrete structures, the early-age behavior of concrete has proven to be a crucial problem. The uneven rise in temperature within the concrete in these constructions is the fundamental issue for quality control. Therefore, developing accurate and fast temperature prediction models is essential. The thermal properties of concrete fluctuate over time as it hardens, but taking all of these fluctuations into account makes numerical models more complex. Experimental measurement of the thermal properties under laboratory conditions also cannot accurately predict the variation of these properties under site conditions. Therefore, the specific heat capacity and the heat conductivity coefficient are two variables that are treated as constants in many previously recommended models. The proposed equations demonstrate that these two quantities decrease linearly as the cement hydrates, and that their values are related to the degree of hydration. The effects of changing the thermal conductivity and specific heat capacity values on the maximum temperature, and on the time it takes for the concrete to reach that temperature, are examined in this study using numerical sensitivity analysis, and the results are compared to models that take a fixed value for these two thermal properties. The study covers 7 different concrete mix designs with varying amounts of supplementary cementitious materials (fly ash and ground granulated blast furnace slag). It is concluded that the maximum temperature does not change as a result of holding the conductivity coefficient constant, but the variable specific heat capacity must be taken into account; the variable specific heat capacity can also have a considerable effect on the time at which the concrete's central node reaches its maximum temperature. The usage of GGBFS has a greater influence than fly ash.
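The hydration-dependent properties can be sketched with a 1-D explicit finite-difference model in which conductivity and specific heat vary linearly with the degree of hydration, as the abstract proposes. All material values and the first-order hydration kinetics below are illustrative placeholders, not the study's mix designs.

```python
import numpy as np

L, nx = 1.0, 51                  # slab thickness [m], grid points
dx = L / (nx - 1)
rho, Q_tot = 2400.0, 3.5e8       # density [kg/m^3], total hydration heat [J/m^3]
k0, k1 = 2.5, 1.8                # conductivity: initial -> fully hydrated [W/mK]
c0, c1 = 1100.0, 900.0           # specific heat: initial -> fully hydrated [J/kgK]

def run(hours=100.0, dt=10.0, variable_props=True):
    T = np.full(nx, 20.0)        # initial temperature field [degC]
    alpha = 0.0                  # degree of hydration in [0, 1]
    for _ in range(int(hours * 3600 / dt)):
        dadt = 2e-5 * (1.0 - alpha)          # first-order hydration kinetics
        if variable_props:
            k = k0 + (k1 - k0) * alpha       # linear change with hydration
            c = c0 + (c1 - c0) * alpha
        else:
            k, c = k0, c0                    # fixed-property comparison model
        lap = np.zeros(nx)
        lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
        T = T + dt * (k * lap + Q_tot * dadt) / (rho * c)
        T[0] = T[-1] = 20.0                  # isothermal formwork faces
        alpha = min(1.0, alpha + dadt * dt)
    return T[nx // 2]            # temperature at the central node

print(run(variable_props=True), run(variable_props=False))
```

Since both properties fall as hydration proceeds, the variable-property model stores and conducts less heat per degree and predicts a hotter central node than the fixed-property one, which is the kind of discrepancy the sensitivity analysis quantifies.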

Keywords: early-age concrete, mass concrete, specific heat capacity, thermal conductivity coefficient

Procedia PDF Downloads 79
407 Design and Optimization of a Small Hydraulic Propeller Turbine

Authors: Dario Barsi, Marina Ubaldi, Pietro Zunino, Robert Fink

Abstract:

A design and optimization procedure is proposed and developed to provide the geometry of a high-efficiency compact hydraulic propeller turbine for low heads. For the preliminary design of the machine, classic design criteria are used, based on statistical correlations for the definition of the fundamental geometric parameters and the blade shapes. These relationships are based on the fundamental design parameters (i.e., specific speed, flow coefficient, work coefficient) in order to provide a simple yet reliable procedure. Particular attention is paid, from the initial steps, to the correct conformation of the meridional channel and the correct arrangement of the blade rows. The preliminary geometry thus obtained is used as the starting point for the hydrodynamic optimization procedure, carried out using CFD calculation software coupled with a genetic algorithm that generates and updates a large database of turbine geometries. The optimization process is performed with a commercial solver for the Reynolds-averaged Navier-Stokes (RANS) equations that exploits the axisymmetric geometry of the machine. The geometries generated within the database are then calculated in order to determine the corresponding overall performance. In order to speed up the optimization, an artificial neural network (ANN) is employed to approximate the objective function. The procedure was applied to the specific case of a propeller turbine of an innovative modular design, intended for applications characterized by very low heads. The procedure is tested in order to verify its validity and its ability to automatically achieve the target net head and the maximum total-to-total internal efficiency.
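The surrogate-assisted loop can be sketched in miniature: a genetic step whose offspring are pre-screened on a cheap surrogate before the expensive evaluation, with the RANS solver replaced by an analytic stand-in and the ANN by a quadratic least-squares fit. The design variables, ranges, and efficiency surface below are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfd_stand_in(x):
    """Stand-in for the expensive RANS evaluation: a toy efficiency surface
    over (flow coefficient, work coefficient), peaking at (0.4, 0.3)."""
    return 0.92 - 2.0 * (x[0] - 0.4) ** 2 - 3.0 * (x[1] - 0.3) ** 2

def fit_surrogate(X, y):
    """Quadratic least-squares surrogate (stands in for the ANN)."""
    A = np.column_stack([np.ones(len(X)), X, X**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: coef @ np.concatenate(([1.0], x, x**2))

pop = rng.uniform([0.2, 0.1], [0.6, 0.5], size=(20, 2))   # seed database
eff = np.array([cfd_stand_in(x) for x in pop])
for _ in range(15):
    surrogate = fit_surrogate(pop, eff)
    parents = pop[np.argsort(eff)[-10:]]                  # keep the fittest
    # crossover (midpoint) plus Gaussian mutation
    kids = (parents[rng.integers(0, 10, 30)] +
            parents[rng.integers(0, 10, 30)]) / 2.0
    kids += rng.normal(0.0, 0.02, kids.shape)
    # pre-screen offspring on the surrogate; evaluate only the promising ones
    best_kids = kids[np.argsort([surrogate(k) for k in kids])[-5:]]
    new_eff = np.array([cfd_stand_in(k) for k in best_kids])
    pop = np.vstack([pop, best_kids])                     # grow the database
    eff = np.concatenate([eff, new_eff])

print(pop[eff.argmax()], eff.max())
```

Only five expensive evaluations are paid per generation instead of thirty; this ratio between surrogate screenings and true evaluations is the tuning knob that makes such hybrid loops attractive when each CFD run is costly.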

Keywords: renewable energy conversion, hydraulic turbines, low head hydraulic energy, optimization design

Procedia PDF Downloads 150