Search results for: finite element modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5310

390 Unlocking New Room of Production in Brown Field: Integration of Geological Data Conditioned 3D Reservoir Modelling of Lower Senonian Matulla Formation, Ras Budran Field, East Central Gulf of Suez, Egypt

Authors: Nader Mohamed

Abstract:

The Late Cretaceous deposits are well developed throughout Egypt. This is due to a transgression phase associated with the subsidence caused by the neo-Tethyan rift event that took place across the northern margin of Africa, resulting in a period of dominantly marine deposits in the Gulf of Suez. The Late Cretaceous Nezzazat Group represents the Cenomanian, Turonian and clastic sediments of the Lower Senonian. The Nezzazat Group has been divided into four formations, namely, from base to top, the Raha Formation, the Abu Qada Formation, the Wata Formation and the Matulla Formation. The Cenomanian Raha and the Lower Senonian Matulla formations are the most important clastic sequences in the Nezzazat Group because they provide the highest net reservoir thickness and the highest net/gross ratio. This study focuses on the Matulla Formation in the eastern part of the Gulf of Suez. Three stratigraphic surface sections (Wadi Sudr, Wadi Matulla and Gabal Nezzazat), which represent the exposed Coniacian-Santonian sediments in Sinai, are used for correlating the Matulla sediments of the Ras Budran field. Cutting descriptions, petrographic examination, log behaviour and biostratigraphy, together with outcrops, are used to identify the reservoir characteristics, lithology and facies environment, and to subdivide the Matulla Formation into three units. The lower unit is believed to be the main reservoir, as it consists mainly of sands with shale and sandy carbonates, while the other units are mainly carbonate with some streaks of shale and sand. Reservoir modelling is an effective technique that assists reservoir management in decisions concerning the development and depletion of hydrocarbon reserves, so it was essential to model the Matulla reservoir as accurately as possible in order to better evaluate and calculate the reserves, and to determine the most effective way of recovering as much of the petroleum as economically possible.
All available data on the Matulla Formation were used to build the reservoir structure model and the lithofacies, porosity, permeability and water saturation models, which are the main parameters that describe the reservoir and provide the information needed to evaluate the development of its oil potential. This study has shown the effectiveness of: 1) the integration of geological data to evaluate and subdivide the Matulla Formation into three units; 2) lithology and facies environment interpretation, which helped define the depositional nature of the Matulla Formation; 3) 3D reservoir modelling technology as a tool for an adequate understanding of the spatial distribution of properties, and for identifying unlocked new reservoir areas of the Matulla Formation that have to be drilled to investigate and exploit the un-drained oil; and 4) the addition of a new room of production and additional reserves to the Ras Budran field.
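The reserve calculation that such a property model feeds can be illustrated with a standard volumetric (STOIIP) estimate; the input values below are hypothetical placeholders, not Ras Budran field data.

```python
# Volumetric estimate of stock-tank oil initially in place (STOIIP).
# All input values are hypothetical placeholders, not Ras Budran data.

def stoiip_bbl(area_acres, net_pay_ft, ntg, porosity, sw, bo):
    """STOIIP in stock-tank barrels.

    7758 converts acre-feet of reservoir rock to barrels.
    ntg      -- net-to-gross ratio of the reservoir unit
    porosity -- effective porosity (fraction)
    sw       -- water saturation (fraction)
    bo       -- oil formation volume factor (rb/stb)
    """
    return 7758.0 * area_acres * net_pay_ft * ntg * porosity * (1.0 - sw) / bo

# Example: 2000 acres, 100 ft gross pay, 0.6 net-to-gross,
# 18% porosity, 35% water saturation, Bo = 1.2
n = stoiip_bbl(2000, 100, 0.6, 0.18, 0.35, 1.2)
```

Each factor here corresponds to one of the models the abstract lists (structure gives gross rock volume, lithofacies gives net-to-gross, and the porosity and saturation models fill in the pore-volume terms).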

Keywords: geology, oil and gas, geoscience, sequence stratigraphy

389 Three-Dimensional Fluid-Structure-Thermal Coupling Dynamics Simulation Model of a Gas-Filled Fluid-Resistance Damper and Experimental Verification

Authors: Wenxue Xu

Abstract:

Fluid-resistance dampers are important damping elements for attenuating vehicle vibration: they convert vibration energy into heat dissipation through oil throttling, a typical fluid-solid-heat coupling problem. A complete three-dimensional fluid-structure-thermal coupling dynamics simulation model of a gas-filled fluid-resistance damper was established. The flow-condition-based interpolation (FCBI) method with a direct coupling calculation, and the FCBI-C fluid numerical analysis method with an iterative coupling calculation, are used to obtain the dynamic response of the damper under sinusoidal excitation of the piston rod. The effects of the system parameters (air-chamber inflation pressure, spring compression characteristics, constant-flow passage cross-sectional area, oil parameters, etc.) and of the excitation parameters (frequency and amplitude) on the differential pressure characteristics, velocity characteristics, flow characteristics, dynamic response of valve opening, floating piston response and piston-rod output force characteristics are analyzed and compared in detail. Experiments were carried out for some of the simulated conditions. The results show that the node-based FCBI (flow-condition-based interpolation) fluid numerical analysis method with direct coupling better guarantees conservation in the flow-field calculation and permits a larger calculation step, but requires more memory; if the chamber inflation pressure is too low, cavitation occurs in the damper, while a higher inflation pressure increases the hysteresis of the velocity characteristic and tightens the sealing requirements.
The spring compression characteristics have a great influence on the damping characteristics of the damper, and a reasonable damping characteristic requires a properly designed spring compression characteristic. The larger the cross-sectional area of the constant-flow channel, the smaller the maximum output force, but the more stable the response when the valve plate opens.
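The paper's full 3D FCBI model is beyond the scope of an abstract, but the reported trend (larger constant-flow area, smaller peak force) can be sketched with a lumped-parameter orifice model; the geometry and fluid values below are illustrative assumptions, not the paper's damper.

```python
# Lumped-parameter sketch of a hydraulic damper: oil forced through a
# constant-flow orifice by a piston under sinusoidal excitation.
# All numbers are illustrative assumptions, not the paper's damper.

RHO = 850.0      # oil density, kg/m^3
CD = 0.7         # orifice discharge coefficient
A_PISTON = 1e-3  # piston area, m^2

def peak_force(orifice_area, v_max):
    """Peak damping force (N) for peak piston speed v_max (m/s).

    Orifice equation: Q = Cd*A_o*sqrt(2*dp/rho), with Q = A_p*v,
    so dp = rho/2 * (A_p*v / (Cd*A_o))**2 and F = dp*A_p.
    """
    q = A_PISTON * v_max
    dp = 0.5 * RHO * (q / (CD * orifice_area)) ** 2
    return dp * A_PISTON

# Doubling the constant-flow area cuts the peak force by a factor of 4,
# consistent with the reported trend (larger area -> smaller max force).
f_small = peak_force(2e-6, 0.5)
f_large = peak_force(4e-6, 0.5)
```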

Keywords: damper, fluid-structure-thermal coupling, heat generation, heat transfer

388 Spectroscopic Autoradiography of Alpha Particles on Geologic Samples at the Thin Section Scale Using a Parallel Ionization Multiplier Gaseous Detector

Authors: Hugo Lefeuvre, Jerôme Donnard, Michael Descostes, Sophie Billon, Samuel Duval, Tugdual Oger, Herve Toubon, Paul Sardini

Abstract:

Spectroscopic autoradiography is a method of interest for geological sample analysis. Indeed, researchers may face issues such as radioelement identification and quantification in the field of environmental studies. Imaging gaseous ionization detectors find their place in geosciences for conducting specific measurements of radioactivity to improve the monitoring of natural processes using naturally-occurring radioactive tracers, but also in the nuclear industry linked to the mining sector. In geological samples, the location and identification of radioactive-bearing minerals at the thin-section scale remains a major challenge, as the detection limit of the usual elementary microprobe techniques is far higher than the concentration of most natural radioactive decay products. The spatial distribution of each decay product, in the case of uranium in a geomaterial, is of interest for relating radionuclide concentrations to the mineralogy. The present study aims to provide a spectroscopic autoradiography analysis method for measuring the initial energy of alpha particles with a parallel ionization multiplier gaseous detector. The analysis method has been developed using Geant4 modelling of the detector. The tracks of alpha particles recorded in the gas detector allow the simultaneous measurement of the initial point of emission and the reconstruction of the initial particle energy by a selection based on the linear energy distribution. This spectroscopic autoradiography method was successfully used to reproduce the alpha spectra of the 238U decay chain on a geological sample at the thin-section scale. The characteristics of this measurement are an energy spectrum resolution of 17.2% (FWHM) at 4647 keV and a spatial resolution of at least 50 µm.
Even if the efficiency of energy spectrum reconstruction is low (4.4%) compared to that of a simple autoradiograph (50%), this novel measurement approach offers the opportunity to select areas on an autoradiograph and perform an energy spectrum analysis within them. This opens up possibilities for the detailed analysis of heterogeneous geological samples containing natural alpha emitters such as uranium-238 and radium-226. This measurement will allow the study of the spatial distribution of uranium and its descendants in geo-materials by coupling it with scanning electron microscope characterizations. The direct application of this dual (energy-position) analysis modality will be the subject of future developments. The measurement of the radioactive equilibrium state of heterogeneous geological structures and the quantitative mapping of 226Ra radioactivity are now being actively studied.
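The quoted 17.2% (FWHM) resolution at 4647 keV follows from the standard definition of energy resolution for a Gaussian peak; a minimal sketch, using the definition rather than detector data:

```python
import math

# Energy resolution of a Gaussian peak: FWHM = 2*sqrt(2*ln 2)*sigma,
# resolution (%) = 100 * FWHM / E_peak. Purely definitional; no
# detector data is used here.

def resolution_percent(e_peak_kev, sigma_kev):
    fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma_kev
    return 100.0 * fwhm / e_peak_kev

# A 17.2% resolution at 4647 keV corresponds to a Gaussian sigma of
# about 339 keV:
sigma = 0.172 * 4647.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))
```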

Keywords: alpha spectroscopy, digital autoradiography, mining activities, natural decay products

387 Environmental Factors and Executive Functions of Children in 5-Year-Old Kindergarten

Authors: Stephanie Duval

Abstract:

The concept of educational success, combined with the overall development of the child in kindergarten, is at the center of current interest, both in research and in the environments responsible for the education of young children. In order to promote it, researchers emphasize the importance of studying the executive functions [EFs] of children in preschool education. More precisely, the EFs, which refer to working memory [WM], inhibition, mental flexibility and planning, would be the pivotal element of the child’s educational success. To support the child’s EFs, and thereby their educational success, the quality of the environments is beginning to be explored more and more. The question that now arises is how to promote EFs for young children in the educational environment, in order to support their educational success. The objective of this study is to investigate the link between the quality of interactions in 5-year-old kindergarten and children’s EFs. The sample consists of 118 children (70 girls, 48 boys) in 12 classes. The quality of the interactions is observed with the Classroom Assessment Scoring System [CLASS], and the EFs (i.e., working memory, inhibition, cognitive flexibility, and planning) are measured with administered tests. The hypothesis of this study was that the quality of teacher-child interactions in preschool education, as measured by the CLASS, is associated with the child’s EFs. The results revealed that the quality of emotional support offered by adults in kindergarten, included in the CLASS tool, was positively and significantly related to WM and inhibition skills. The results also suggest that WM is a key skill in the development of EFs, which may be associated with the educational success of the child; however, this hypothesis remains to be clarified, as does the link with educational success. In addition, results showed that factors associated with the family (e.g.,
parents’ income) moderate the relationship between the ‘instructional support’ domain of the CLASS (e.g., concept development) and the child’s WM skills. These data suggest a moderating effect of family characteristics on the link between the quality of classroom interactions and EFs. As a future avenue, this project proposes to examine the distinct effects of different environments (family and educational) on the child’s EFs. More specifically, a future study could examine the influence of the educational environment on EF skills, and whether the family environment (e.g., parents' income) moderates the link between the quality of classroom interactions and children’s EFs, as anticipated by this research.
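The moderating effect described above is conventionally tested with an interaction term in a regression model; a minimal sketch on synthetic data (the variable names and coefficients are illustrative, not the study's dataset):

```python
import numpy as np

# Moderation test: EF ~ quality + income + quality*income.
# A non-zero interaction coefficient indicates that family income
# moderates the link between interaction quality and EF scores.
# All data below are simulated for illustration.
rng = np.random.default_rng(0)
n = 500
quality = rng.normal(0, 1, n)   # classroom interaction quality (z-scored)
income = rng.normal(0, 1, n)    # family income (z-scored)
ef = 0.4 * quality + 0.1 * income + 0.3 * quality * income \
     + rng.normal(0, 0.5, n)    # simulated working-memory score

X = np.column_stack([np.ones(n), quality, income, quality * income])
beta, *_ = np.linalg.lstsq(X, ef, rcond=None)
# beta[3] estimates the moderation (interaction) effect, ~0.3 here.
```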

Keywords: executive functions [EFs], environmental factors, quality of interactions, preschool education

386 Optimization of Biomass Production and Lipid Formation from Chlorococcum sp. Cultivation on Dairy and Paper-Pulp Wastewater

Authors: Emmanuel C. Ngerem

Abstract:

The ever-increasing depletion of the dominant global form of energy (fossil fuels) calls for the development of sustainable and green alternative energy sources such as bioethanol, biohydrogen, and biodiesel. The production of the major biofuels relies on biomass feedstocks that are mainly derived from edible food crops and some inedible plants. One suitable feedstock with great potential as a raw material for biofuel production is microalgal biomass. Despite the tremendous attributes of microalgae as a source of biofuel, their cultivation requires huge volumes of freshwater, posing a serious threat to commercial-scale production and utilization of algal biomass. In this study, a multi-media wastewater mixture for microalgae growth was formulated and optimized. Moreover, the obtained microalgae biomass was pre-treated for reducing-sugar recovery and compared with previous studies on microalgae biomass pre-treatment. The mixed-wastewater medium for biomass and lipid accumulation was formulated and optimized using a simplex lattice mixture design. Based on the superposition approach to the potential results, numerical optimization was conducted, followed by analysis of biomass concentration and lipid accumulation. Coefficients of determination (R²) of 0.91 and 0.98 were obtained for the biomass concentration and lipid accumulation models, respectively. The optimization model predicted an optimal biomass concentration and lipid accumulation of 1.17 g/L and 0.39 g/g, respectively, suggesting a mixture of 64.69% dairy wastewater (DWW) and 35.31% paper and pulp wastewater (PWW) for biomass concentration, and 34.21% DWW and 65.79% PWW for lipid accumulation. Experimental validation produced 0.94 g/L of biomass and 0.39 g/g of lipids, respectively. The obtained microalgae biomass was pre-treated, enzymatically hydrolysed, and subsequently assessed for reducing sugars.
The optimization of the microwave pre-treatment of Chlorococcum sp. was achieved using response surface methodology (RSM). Microwave power (100–700 W), pre-treatment time (1–7 min), and acid-liquid ratio (1–5%) were selected as independent variables for the RSM optimization. The optimum conditions were achieved at a microwave power, pre-treatment time, and acid-liquid ratio of 700 W, 7 min, and 32.33:1, respectively; these conditions provided the highest amount of reducing sugars, at 10.73 g/L. Process optimization predicted a reducing-sugar yield of 11.14 g/L for microwave-assisted pre-treatment with 2.52% HCl for 4.06 min at 700 W, and experimental validation yielded 15.67 g/L of reducing sugars. These findings demonstrate that dairy wastewater and paper and pulp wastewater, which could otherwise pose a serious environmental nuisance, can be blended to form a suitable microalgae growth medium, consolidating the potency of microalgae as a viable feedstock for fermentable sugars. The outcome of this study also supports the microalgal wastewater biorefinery concept, in which wastewater remediation is coupled with bioenergy production.
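For a two-component blend, the simplex-lattice (Scheffé) mixture model reduces to a quadratic in one component's fraction; a sketch of fitting it and locating the optimal mixture (the response values are synthetic, not the study's data):

```python
import numpy as np

# Two-component simplex-lattice (Scheffe) model:
#   y = b1*x1 + b2*x2 + b12*x1*x2,  with x1 + x2 = 1
# x1 = DWW fraction, x2 = PWW fraction. Responses below are synthetic.

x1 = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
x2 = 1.0 - x1
y = np.array([0.60, 0.95, 1.10, 1.15, 0.90])  # e.g. biomass, g/L

X = np.column_stack([x1, x2, x1 * x2])
b1, b2, b12 = np.linalg.lstsq(X, y, rcond=None)[0]

# Optimum of y(x1) = b1*x1 + b2*(1-x1) + b12*x1*(1-x1):
# dy/dx1 = b1 - b2 + b12*(1 - 2*x1) = 0
x1_opt = (b1 - b2 + b12) / (2.0 * b12)
```

A positive b12 indicates a synergistic blending effect, which is what makes an interior optimum (a genuine mixture rather than a single wastewater) possible.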

Keywords: wastewater cultivation, mixture design, lipid, biomass, nutrient removal, microwave, Chlorococcum, raceway pond, fermentable sugar, modelling, optimization

385 Exploring Nature and Pattern of Mentoring Practices: A Study on Mentees' Perspectives

Authors: Nahid Parween Anwar, Sadia Muzaffar Bhutta, Takbir Ali

Abstract:

Mentoring is a structured activity designed to facilitate engagement between mentor and mentee in order to enhance the mentee’s professional capability as an effective teacher. Both mentor and mentee are important elements of the ‘mentoring equation’ and play important roles in nourishing this dynamic, collaborative and reciprocal relationship. The Cluster-Based Mentoring Programme (CBMP) provides an indigenous example of a project focused on the development of primary school teachers in selected clusters, with a particular focus on their classroom practice. A study was designed to examine the efficacy of the CBMP as part of the Strengthening Teacher Education in Pakistan (STEP) project. This paper presents the results of one component of this study. As part of the larger study, a cross-sectional survey was employed to explore the nature and patterns of the mentoring process from mentees’ perspectives in the selected districts of Sindh and Balochistan. This paper focuses on the results related to the question: What are mentees’ perceptions of their mentors’ support for enhancing their classroom practice during the mentoring process? Data were collected from mentees (n=1148) using a 5-point scale, ‘Mentoring for Effective Primary Teaching’ (MEPT). The MEPT covers seven factors of mentoring: personal attributes, pedagogical knowledge, modelling, feedback, system requirement, development and use of material, and gender equality. Data were analysed using SPSS 20. Mentees’ perceptions of the mentoring practice of their mentors were summarized using means and standard deviations. Results showed that mean scale scores on mentees’ perceptions of their mentors’ practices fell between 3.58 (system requirement) and 4.55 (personal attributes). Mentees perceived the personal attributes of the mentor as the most significant factor (M=4.55) in streamlining the mentoring process by building a good relationship between mentor and mentees.
Furthermore, mentees shared positive views about their mentors’ efforts to promote gender impartiality (M=4.54) during workshops and follow-up visits. In contrast, mentees felt that more could have been done by their mentors in sharing knowledge about system requirements (e.g. school policies, national curriculum). Furthermore, some aspects of the high-scoring factors were highlighted by the mentees as areas for further improvement (e.g. assistance in timetabling, written feedback, encouragement to develop learning corners). Mentees’ perceptions of their mentors’ practices may assist in determining mentoring needs. The results may prove useful for professional development programmes for mentors and mentees in specific mentoring programmes aimed at enhancing practice in primary classrooms in Pakistan, and would contribute to the body of much-needed knowledge from a developing context.

Keywords: cluster-based mentoring programme, mentoring for effective primary teaching (MEPT), professional development, survey

384 Advancing Entrepreneurial Knowledge Through Re-Engineering Social Studies Education

Authors: Chukwuka Justus Iwegbu, Monye Christopher Prayer

Abstract:

Propeller aircraft engines, and more generally engines with a large rotating part (turboprops, high-bypass-ratio turbojets, etc.), are widely used in industry and are subject to numerous developments aimed at reducing their fuel consumption. In this context, unconventional architectures such as open rotors or distributed propulsion are appearing, and it is necessary to consider the influence of these systems on the aircraft's stability in flight. Indeed, the tendency to lengthen the blades and the wings on which these propulsion devices are fixed increases their flexibility and accentuates the risk of whirl flutter. This aeroelastic instability is due to the precession of the propeller's axis of rotation, which changes the angle of attack of the flow on the blades and creates unsteady aerodynamic forces and moments that can amplify the motion and make it unstable. Whirl flutter can ultimately lead to the destruction of the engine. There exists a critical speed of the incident flow: if the flow velocity is lower than this value, the motion is damped and the system is stable, whereas beyond this value the flow provides energy to the system (negative damping) and the motion becomes unstable. A simple model of whirl flutter is based on the work of Houbolt and Reed, who proposed an analytical expression for the aerodynamic load on a rigid-blade propeller whose axis orientation undergoes small perturbations. Their work considered a propeller subjected to pitch and yaw movements, a flow undisturbed by the blades, and a propeller generating no thrust in the absence of precession. The unsteady aerodynamic forces were then obtained using thin airfoil theory and strip theory. In the present study, the unsteady aerodynamic loads are expressed for a general movement of the propeller (not only pitch and yaw).
The acceleration and rotation of the flow by the propeller are modeled using a Blade Element Momentum Theory (BEMT) approach, which also makes it possible to take into account the thrust generated by the blades; it appears that the thrust has a stabilizing effect. The aerodynamic model is further developed using Theodorsen theory. A reduced-order model of the aerodynamic load is finally constructed in order to perform linear stability analysis.
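The existence of a critical flutter speed can be reproduced with a minimal two-degree-of-freedom pitch/yaw model in the spirit of Houbolt and Reed: an antisymmetric aerodynamic cross-stiffness growing with flow speed eventually overcomes the structural damping. All coefficients below are illustrative assumptions, not the study's propeller.

```python
import numpy as np

# Minimal whirl-flutter sketch: pitch/yaw angles q = [theta, psi] of the
# propeller axis (unit inertia), with structural stiffness k0, damping c,
# and an antisymmetric aerodynamic cross-stiffness kappa*V that grows
# with flow speed V. All coefficients are illustrative assumptions.

def max_growth_rate(v, k0=1.0, c=0.05, kappa=0.1):
    K = np.array([[k0, kappa * v],
                  [-kappa * v, k0]])        # aero cross-coupling ~ V
    C = c * np.eye(2)
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-K, -C]])                # state matrix for q'' = -K q - C q'
    return max(np.linalg.eigvals(A).real)

# Scan V for the critical speed: the growth rate crosses zero where
# kappa*V = c*sqrt(k0), i.e. V = 0.5 for these coefficients.
vs = np.linspace(0.0, 2.0, 201)
v_crit = next(v for v in vs if max_growth_rate(v) > 0.0)
```

Below v_crit every eigenvalue has a negative real part (damped precession); above it one whirl mode extracts energy from the flow and grows, which is the instability the abstract describes.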

Keywords: advancing, entrepreneurial, knowledge, industralization

383 Automated, Objective Assessment of Pilot Performance in Simulated Environment

Authors: Maciej Zasuwa, Grzegorz Ptasinski, Antoni Kopyt

Abstract:

Nowadays, flight simulators offer tremendous possibilities for safe and cost-effective pilot training through the utilization of powerful computational tools. Because technology has outpaced methodology, the vast majority of training-related work is done by human instructors, which makes assessment inefficient and vulnerable to instructors’ subjectivity. The research presents an Objective Assessment Tool (gOAT) developed at the Warsaw University of Technology and tested on an SW-4 helicopter flight simulator. The tool uses a database of predefined manoeuvres, defined and integrated into the virtual environment. These were implemented based on the Aeronautical Design Standard Performance Specification, Handling Qualities Requirements for Military Rotorcraft (ADS-33), with predefined Mission-Task-Elements (MTEs). The core element of the gOAT is an enhanced algorithm that provides the instructor with a new set of information: objective flight parameters fused with a report on the psychophysical state of the pilot. While the pilot performs the task, the gOAT system automatically calculates performance using the embedded algorithms, data registered by the simulator software (position, orientation, velocity, etc.), as well as measurements of changes in the pilot’s psychophysiological state (temperature, sweating, heart rate). The complete set of measurements is presented on-line at the instructor’s station and shown in a dedicated graphical interface. The presented tool is based on open-source solutions and is flexible for editing. Additional manoeuvres can easily be added using a guide developed by the authors, and MTEs can be changed by the instructor even during an exercise. The algorithm and measurements used allow not only basic stress-level measurements to be implemented, but also reduce the instructor’s workload significantly. The tool can be used for training purposes, as well as for periodic checks of aircrew.
Flexibility and ease of modification allow wide-ranging further development and customization of the tool. Depending on the simulation purpose, gOAT can be adjusted to support simulators of aircraft, helicopters, or unmanned aerial vehicles (UAVs).
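MTE scoring of the kind gOAT automates amounts to checking how long a recorded flight parameter stays inside "desired" and "adequate" tolerance bands; a simplified sketch (the band limits, the 90% criterion, and the sample trace are invented for illustration, not ADS-33 values):

```python
# Simplified MTE-style scoring: fraction of time a recorded parameter
# (e.g. altitude error, m) stays within "desired" and "adequate" bands.
# Band limits, the 90% criterion, and the sample trace are invented
# for illustration; they are not actual ADS-33 tolerances.

def score_trace(trace, desired=1.0, adequate=3.0):
    n = len(trace)
    in_desired = sum(abs(x) <= desired for x in trace) / n
    in_adequate = sum(abs(x) <= adequate for x in trace) / n
    if in_desired >= 0.9:
        rating = "desired"
    elif in_adequate >= 0.9:
        rating = "adequate"
    else:
        rating = "inadequate"
    return in_desired, in_adequate, rating

# Ten samples of a hypothetical altitude-error trace:
trace = [0.2, 0.5, -0.8, 1.5, 0.3, -0.4, 2.8, 0.1, -0.6, 0.9]
result = score_trace(trace)
```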

Keywords: automated assessment, flight simulator, human factors, pilot training

382 Biology and Life Fertility of the Cabbage Aphid, Brevicoryne brassicae (L) on Cauliflower Cultivars

Authors: Mandeep Kaur, K. C. Sharma, P. L. Sharma, R. S. Chandel

Abstract:

Cauliflower is an important vegetable crop grown throughout the world and is attacked by a large number of insect pests at various stages of crop growth. Amongst them, the cabbage aphid, Brevicoryne brassicae (Linnaeus) (Hemiptera: Aphididae), is an important insect pest. Continued feeding by both nymphs and adults of this aphid causes yellowing, wilting and stunting of plants. Amongst the various management practices, the use of resistant cultivars is important and can be an effective method of reducing the population of this aphid, so it is imperative to have a complete record of the various biological parameters and the life table on specific cultivars. The biology and life fertility of the cabbage aphid were studied on five cauliflower cultivars, viz. Megha, Shweta, K-1, PSB-1 and PSBK-25, under controlled conditions of 20 ± 2°C, 70 ± 5% relative humidity and a 16:8 h (light:dark) photoperiod. For studying biology, apterous viviparous adults were picked from the laboratory culture on all five cauliflower cultivars, after rearing them for at least two generations, and placed individually on plants of the cauliflower cultivars grown in pots, with ten replicates of each. Daily records were made of the duration of the nymphal period, adult longevity, mortality in each stage and the total number of progeny produced per female. These biological data were further used to construct a life fertility table for each cultivar. Statistical analysis showed a significant difference (P < 0.05) between the different growth stages and the mean number of laid nymphs. The maximum and minimum growth periods were observed on the Shweta and Megha (at par with K-1) cultivars, respectively. The maximum number of nymphs was laid on the Shweta cultivar (26.40 nymphs per female) and the minimum on the Megha (at par with K-1) cultivar (15.20 nymphs per female).
The true intrinsic rate of increase (rm) was found to be maximum on Shweta (0.233 nymphs/female/day), followed by PSBK-25 (0.207), PSB-1 (0.203), Megha (0.166) and K-1 (0.153 nymphs/female/day). The finite rate of natural increase (λ) followed the order K-1 < Megha < PSB-1 < PSBK-25 < Shweta, whereas the doubling time (DT) followed the order K-1 > Megha > PSB-1 > PSBK-25 > Shweta. Aphids reared on the K-1 cultivar had the lowest values of rm and λ and the highest value of DT, whereas on the Shweta cultivar the values of rm and λ were the highest and the value of DT was the lowest. On the basis of these studies, the K-1 cultivar was found to be the least suitable, and the Shweta cultivar the most suitable, for cabbage aphid population growth. Although the cauliflower cultivars used in different parts of the world may differ, the results of the present study indicate that the use of cultivars that affect the multiplication rate and reproductive parameters could be a good solution for the management of the cabbage aphid.
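The demographic parameters reported here (rm, λ, DT) follow from the Euler-Lotka equation applied to an age-specific survivorship and fecundity schedule; a sketch on a synthetic schedule (the lx and mx values are invented, not the aphid data):

```python
import math

# Euler-Lotka equation: sum over age x of exp(-rm*x) * l_x * m_x = 1,
# solved for the intrinsic rate of increase rm by bisection.
# lx = survivorship to age x, mx = daily offspring per female.
# The schedule below is invented for illustration.

ages = [5, 6, 7, 8, 9, 10]            # age in days
lx = [1.0, 0.95, 0.9, 0.8, 0.6, 0.4]  # survivorship to age x
mx = [2.0, 3.0, 3.0, 2.5, 2.0, 1.0]   # nymphs/female/day at age x

def euler_sum(r):
    return sum(math.exp(-r * x) * l * m
               for x, l, m in zip(ages, lx, mx))

lo, hi = 0.0, 2.0                     # euler_sum is decreasing in r
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if euler_sum(mid) > 1.0 else (lo, mid)
rm = 0.5 * (lo + hi)

lam = math.exp(rm)            # finite rate of increase, lambda
dt = math.log(2.0) / rm       # population doubling time, days
```

The identities λ = e^rm and DT = ln 2 / rm are why the cultivar rankings by λ and by DT in the abstract are exact mirror images of the ranking by rm.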

Keywords: biology, cauliflower, cultivars, fertility

381 Non-Newtonian Fluid Flow Simulation for a Vertical Plate and a Square Cylinder Pair

Authors: Anamika Paul, Sudipto Sarkar

Abstract:

The flow behaviour of non-Newtonian fluids is quite complicated, although both the pseudoplastic (n < 1, n being the power index) and dilatant (n > 1) fluids in this category are used immensely in the chemical and process industries. Limited research has been carried out on flow over a bluff body in a non-Newtonian flow environment. In the present numerical simulation, we control the vortices of a square cylinder by placing an upstream vertical splitter plate for pseudoplastic (n = 0.8), Newtonian (n = 1) and dilatant (n = 1.2) fluids. The position of the upstream plate is also varied to calculate the critical distance between the plate and the cylinder, below which the cylinder's vortex shedding is suppressed. Here the Reynolds number is taken as Re = 150 (Re = U∞a/ν, where U∞ is the free-stream velocity of the flow, a is the side of the cylinder and ν is the maximum value of the kinematic viscosity of the fluid), which falls in the laminar periodic vortex shedding regime. The vertical plate has dimensions of 0.5a × 0.05a and is placed on the cylinder centre-line. Gambit 2.2.30 is used to construct the flow domain and to impose the boundary conditions: velocity inlet (u = U∞), pressure outlet (Neumann condition), and symmetry (free-slip) conditions at the upper and lower boundaries of the domain. A wall boundary condition (u = v = 0) is applied on both the cylinder and the splitter-plate surfaces. The unsteady 2-D Navier-Stokes equations in fully conservative form are discretized with second-order spatial and first-order temporal accuracy, and the discretized equations are solved with Ansys Fluent 14.5 using the SIMPLE algorithm in a finite-volume framework. Fine meshing is used around the plate and cylinder; away from the cylinder, the grid is slowly stretched in all directions.
To ensure mesh quality, a total of 297 × 208 grid points are used for G/a = 3 (G being the gap between the plate and cylinder) in the streamwise and flow-normal directions respectively, after a grid-independence study. The computed mean flow quantities for the Newtonian case agree well with the available literature. The results are presented with the help of instantaneous and time-averaged flow fields. Noteworthy qualitative and quantitative differences in the flow field are obtained with changes in the rheology of the fluid. Aerodynamic forces and vortex shedding frequencies also differ with the gap ratio and the power index of the fluid. We conclude from the present simulation that Fluent is capable of capturing the vortex dynamics of the unsteady laminar flow regime even in a non-Newtonian flow environment.
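The pseudoplastic/dilatant distinction enters such simulations through the power-law (Ostwald-de Waele) viscosity model; a minimal sketch (the consistency index K is an assumed placeholder, not a value from the study):

```python
# Power-law (Ostwald-de Waele) apparent viscosity: mu = K * gdot**(n-1).
# n < 1: pseudoplastic (shear-thinning); n = 1: Newtonian; n > 1: dilatant.
# The consistency index K is an assumed placeholder value.

K = 0.01  # consistency index, Pa*s^n

def apparent_viscosity(n, gdot):
    """Apparent viscosity (Pa*s) at shear rate gdot (1/s)."""
    return K * gdot ** (n - 1.0)

# A shear-thinning fluid gets less viscous as the shear rate rises,
# a dilatant fluid more viscous, and a Newtonian fluid stays constant:
rates = (1.0, 10.0, 100.0)
mu_pseudo = [apparent_viscosity(0.8, g) for g in rates]
mu_newt = [apparent_viscosity(1.0, g) for g in rates]
mu_dilat = [apparent_viscosity(1.2, g) for g in rates]
```

This shear-rate dependence is also why the abstract defines Re with the maximum value of the kinematic viscosity: for a non-Newtonian fluid the viscosity varies across the flow field, so a single reference value must be chosen.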

Keywords: CFD, critical gap-ratio, splitter plate, wake-wake interactions, dilatant, pseudoplastic

380 Performance of a Sailing Vessel with a Solid Wing Sail Compared to a Traditional Sail

Authors: William Waddington, M. Jahir Rizvi

Abstract:

A sail used to propel a vessel functions in a similar way to an aircraft wing. Traditionally, cloth and ropes were used to produce sails. However, there is one major problem with the traditional sail design: increased turbulence and flow separation compared to an aircraft wing with the same camber. This has led to the development of the solid wing sail, with the focus mainly on the sail shape. Traditional cloth sails are manufactured as a single element, whereas a solid wing sail is made of two segments. To the authors' best knowledge, the phenomena behind the performance of this type of sail at various angles of wind direction relative to the sailing vessel's direction (known as the angle of attack) are still poorly understood. Hence, in this study, the thrust of a sailing vessel produced by wing sails constructed with various angles (22°, 24°, 26° and 28°) between the two segments has been compared to that of a traditional cloth sail made of carbon-fibre material; carbon fibre was used to achieve the correct and exact shape of a commercially available mainsail. NACA 0024 and NACA 0016 foils have been used to generate the two-segment wing sail shape, which incorporates a flap between the first and second segments. Both the two-dimensional and three-dimensional sail models, designed in the commercial CAD software SolidWorks, have been analyzed through Computational Fluid Dynamics (CFD) techniques using Ansys CFX, considering an apparent wind speed of 20.55 knots at an apparent wind angle of 31°. The results indicate that the thrust from the traditional sail increases from 8.18 N to 8.26 N when the angle of attack is increased from 5° to 7°, and decreases if the angle of attack is increased further. A solid wing sail with a 20° angle between its two segments produces thrusts from 7.61 N to 7.74 N as the angle of attack increases from 7° to 8°.
The thrust remains steady up to a 9° angle of attack and drops dramatically beyond 9°. The highest thrust values obtained for the solid wing sails with 22°, 24°, 26° and 28° between the two segments are 8.75 N, 9.10 N, 9.29 N and 9.19 N respectively, and the optimum angle of attack at which these values occur is 7° for each sail. It can therefore be concluded that all the thrust values predicted for the solid wing sails with angles between the two segments above 20° are higher than the thrust predicted for the traditional sail. However, the best performance from a solid wing sail is expected when the sail is created with an angle between the two segments above 20° but not exceeding 26°. In addition, 1/29th-scale models have been tested in a wind tunnel to observe the flow behaviour around the sails. The experimental results support the numerical observations, as the flow behaviours are exactly the same.
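Thrust figures of this kind come from resolving the sail's lift and drag along the vessel's course at the apparent wind angle; a sketch of that decomposition (the lift and drag inputs are invented, not the study's CFD output):

```python
import math

# Resolve sail lift L and drag D (defined relative to the apparent wind)
# into drive (thrust) along the vessel's heading at apparent wind angle
# beta: T = L*sin(beta) - D*cos(beta).
# The lift/drag inputs below are invented, not the paper's CFD results.

def thrust(lift_n, drag_n, beta_deg):
    """Drive force (N) along the vessel's heading."""
    b = math.radians(beta_deg)
    return lift_n * math.sin(b) - drag_n * math.cos(b)

# At the paper's apparent wind angle of 31 degrees, with hypothetical
# L = 20 N and D = 5 N:
t = thrust(20.0, 5.0, 31.0)
```

The decomposition makes the lift/drag trade-off explicit: at a 31° apparent wind angle, drag is weighted by cos(31°) ≈ 0.86, so reducing separation (the wing sail's advantage over cloth) feeds almost directly into thrust.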

Keywords: CFD, drag, sailing vessel, thrust, traditional sail, wing sail

Procedia PDF Downloads 264
379 Prednisone and Its Active Metabolite Prednisolone Attenuate Lipid Accumulation in Macrophages

Authors: H. Jeries, N. Volkova, C. G. Iglesias, M. Najjar, M. Rosenblat, M. Aviram, T. Hayek

Abstract:

Background: Synthetic glucocorticoids (e.g., prednisone, prednisolone) are anti-inflammatory drugs widely used in clinical practice. The role of glucocorticoids (GCs) in cardiovascular diseases, including atherosclerosis, is highly controversial, and their impact on macrophage foam cell formation is still unknown. Our aim was to investigate the effects of prednisone and its active metabolite, prednisolone, on macrophage oxidative stress and lipid metabolism using in-vivo, ex-vivo and in-vitro systems. Methods: The in-vivo study included C57BL/6 mice intraperitoneally injected with prednisone or prednisolone (5 mg/kg) for 4 weeks, followed by lipid metabolism analyses in the mice aorta and in peritoneal macrophages (MPM). In the ex-vivo study, we analyzed the effect of serum samples obtained from 9 healthy volunteers, before or after treatment with oral prednisone (20 mg for 5 days), on J774A.1 macrophage atherogenicity. In-vitro studies were conducted using J774A.1 macrophages, human monocyte-derived macrophages (HMDM) and fibroblasts. Cells were incubated with increasing concentrations (0-200 ng/ml) of prednisone or prednisolone, followed by determination of cellular oxidative status and triglyceride and cholesterol metabolism. Results: Prednisone or prednisolone treatment resulted in a significant reduction in cellular accumulation of triglycerides and, mainly, cholesterol in MPM and in J774A.1 macrophages incubated with human serum. Similar results were noted in HMDM and in J774A.1 macrophages directly incubated with the GCs. These effects were associated with the GCs' inhibition of triglyceride and cholesterol biosynthesis rates, through downregulation of diacylglycerol acyltransferase 1 (DGAT1) expression and of sterol regulatory element-binding protein 2 (SREBP2) and HMGCR expression, respectively.
In parallel to the prednisone- or prednisolone-induced reduction in macrophage triglyceride content, paraoxonase 2 (PON2) expression was significantly upregulated. The GC-induced reduction of cellular triglyceride and cholesterol mass was mediated by the GC receptors on macrophages, since the GC receptor antagonist RU 486 abolished these effects. In fibroblasts, unlike macrophages, prednisone and prednisolone showed no anti-atherogenic effects. Conclusions: Prednisone and prednisolone are anti-atherogenic, as they protect macrophages from lipid accumulation and foam cell formation.

Keywords: atherosclerosis, cholesterol, foam cell, macrophage, prednisone, prednisolone, triglycerides

Procedia PDF Downloads 131
378 Melaninic Discrimination among Primary School Children

Authors: Margherita Cardellini

Abstract:

To our knowledge, dark-skinned children are often victims of discrimination from adults and society, but few studies focus specifically on skin color discrimination against children coming from other children. Even today, the 'color blind children' ideology is widespread among adults, teachers, and educators, and perhaps also among scholars, who seem very cautious about studying expressions of racism in childhood. This social and cultural belief lets people think that all children, because of their age and their brief experience of the world, are disinterested in skin color. Sometimes adults think that children are even incapable of perceiving skin colors, and that it could be dangerous to talk about melaninic differences with them because they could finally notice the difference, producing prejudices and racism. Psychological and neurological research has shown for many years that infants are already capable of perceiving skin color and ethnic differences by the age of 3 months. Starting from this theoretical framework, we conducted a research project to understand if and how primary school children talk about skin colors, picking up any stereotypes or prejudices. Choosing the focus group as a methodology to stimulate the group dimension and interaction, several stories emerged about episodes of skin color discrimination within their classroom or school. Using the photo-elicitation technique, we stimulated talk about the research object, skin color, by asking the children for 'the first two things that come into your mind' when looking at the photographs presented during the focus group, which represented dark- and light-skinned women and men. This paper will therefore present some of these stories about episodes of discrimination, with an escalating grade of proximity to the discriminatory act.
Stories will be presented of discrimination that happened within the school, in an after-school daycare, in the classroom, and even episodes of discrimination that children recounted during the focus groups in the presence of the discriminated child. If it is true that the Declaration of the Rights of the Child states that every child should be free from discrimination, it is also true that every adult should protect children from every form of discrimination. How, as adults, can we defend children against discrimination if we cannot admit that even children are potential actors of discrimination? Without awareness, we risk devaluing these episodes, implicitly confident that the only way to fight discrimination is to keep it quiet. The right not to be discriminated against goes through the right to talk about one's own experiences of discrimination and the right to perceive the unfairness of the constant depreciation of skin color or of any element of physical diversity. Intercultural education could act as spokesperson for this mission, in the belief that difference and plurality could really become elements of potential enrichment for humanity, starting from children.

Keywords: colorism, experiences of discrimination, primary school children, skin color discrimination

Procedia PDF Downloads 185
377 A Complex Network Approach to Structural Inequality of Educational Deprivation

Authors: Harvey Sanchez-Restrepo, Jorge Louca

Abstract:

Equity and education are a major focus of government policies around the world due to their relevance for addressing the sustainable development goals launched by UNESCO. In this research, we developed a primary analysis of a data set of more than one hundred educational and non-educational factors associated with learning, coming from a census-based large-scale assessment carried out in Ecuador on 1,038,328 students, their families, teachers, and school directors throughout 2014-2018. Each participating student was assessed by a standardized computer-based test. Learning outcomes were calibrated through item response theory with a two-parameter logistic model, yielding raw scores that were re-scaled and synthesized into a learning index (LI). Our objective was to develop a network for modelling educational deprivation and to analyze the structure of inequality gaps, as well as their relationship with socioeconomic status, school financing, and students' ethnicity. Results from the model show that 348,270 students did not develop the minimum skills (prevalence rate = 0.215) and that Afro-Ecuadorian, Montuvio and Indigenous students exhibited the highest prevalence, with 0.312, 0.278 and 0.226, respectively. Regarding the socioeconomic status (SES) of students, modularity class shows clearly that the system is out of equilibrium: the first decile (the poorest) exhibits a prevalence rate of 0.386 while the rate for decile ten (the richest) is 0.080, showing a strong negative relationship between learning and SES (R = -0.58, p < 0.001). Another interesting and unexpected result is the average weighted degree (426.9) for both private and public schools attended by Afro-Ecuadorian students, groups that got the highest PageRank (0.426), pointing out that they suffer the highest educational deprivation due to discrimination, even when belonging to the richest decile.
The model also identified the factors that explain deprivation, through the highest PageRank and the greatest degree of connectivity for the first decile: financial bonus for attending school, computer access, internet access, number of children, living with at least one parent, access to books, reading books, phone access, time for homework, teachers arriving late, paid work, positive expectations about schooling, and mother's education. These results provide very accurate and clear knowledge about the variables affecting the poorest students and the inequalities they produce, from which needs profiles might be defined, as well as actions on the factors that can be influenced. Finally, these results confirm that network analysis is fundamental for educational policy, especially when linking reliable microdata with social macro-parameters, because it allows us to infer how gaps in educational achievement are driven by students' context at the time of assigning resources.
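As a minimal sketch of the scoring model named above, the two-parameter logistic (2PL) item response function gives the probability of a correct response from a student's ability theta, an item's discrimination a, and its difficulty b. The item parameters below are hypothetical, not taken from the Ecuadorian assessment.

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL item response function: P(correct) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item: discrimination a = 1.2, difficulty b = 0.5.
# A student whose ability equals the item difficulty answers correctly
# with probability 0.5; higher ability raises that probability.
p_at_difficulty = p_correct(0.5, 1.2, 0.5)
p_high_ability = p_correct(1.5, 1.2, 0.5)
```

Calibration fits a and b for every item from the response matrix; the fitted abilities are the raw scores that the study re-scales into its learning index.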

Keywords: complex network, educational deprivation, evidence-based policy, large-scale assessments, policy informatics

Procedia PDF Downloads 108
376 Reviewers’ Perception of the Studio Jury System: How They View its Value in Architecture and Design Education

Authors: Diane M. Bender

Abstract:

In architecture and design education, students learn and understand their discipline through lecture courses and within studios. A studio is where the instructor works closely with students to help them understand design by doing design work. The final jury is the culmination of the studio learning experience; its value and significance are rarely questioned. Students present their work before their peers, instructors, and invited reviewers, known as jurors. These jurors are recognized experts who add breadth of feedback to students, mostly in the form of a verbal critique of the work. Since the design review, or jury, has been a common element of studio education for centuries, jurors themselves have been instructed in this format. Therefore, they understand its value from both a student and a juror perspective. To better understand how these reviewers see the value of a studio review, a survey was distributed to reviewers at a multi-disciplinary design school in the United States. Five design disciplines were involved in this case study: architecture, graphic design, industrial design, interior design, and landscape architecture. Respondents (n=108) provided written comments about their perceived value of the studio review system. The average respondent was male (64%), between 40-49 years of age, and holds a master's degree. Qualitative analysis with thematic coding revealed several themes. Reviewers view the final jury as important because it provides a variety of perspectives from unbiased external practitioners and prepares students for similar presentation challenges they will experience in professional practice. They also see it as a way to validate the assessment and evaluation of students by faculty. In addition, they see a personal benefit for themselves and their firm: the ability to network with fellow jurors, professors, and students (i.e., future colleagues).
Respondents also provided additional feedback about the jury system and studio education in general. Typical responses included a desire for earlier engagement with students; a better explanation from the instructor about the project parameters, rubrics/grading, and guidelines for juror involvement; a way to balance giving encouraging feedback versus overly critical comments; and providing training for jurors prior to reviews. While this study focused on the studio review, the findings are equally applicable to other disciplines. Suggestions will be provided on how to improve the preparation of guests in the learning process and how their interaction can positively influence student engagement.

Keywords: assessment, design, jury, studio

Procedia PDF Downloads 52
375 A Perspective of Digital Formation in the Solar Community as a Prototype for Finding Sustainable Algorithmic Conditions on Earth

Authors: Kunihisa Kakumoto

Abstract:

'Purpose': Global environmental issues are now being raised in a global dimension. By predicting, with algorithms, sprawl phenomena that exceed the limits of nature, we can expect to keep our social life within those limits. It turns out that the sustainable state of the planet consists in maintaining a balance between the capacities of nature and the demands of our social life. The amount of water on earth is finite, so sustainability depends highly on water capacity. A certain amount of water is stored in forests by planting and green space, and the amount of water can be considered in relation to the green area. CO2 is also absorbed by green plants. 'Possible measurements and methods': The concept of the solar community has been introduced in technical papers at many international conferences. The solar community concept is based on data collected from one solar model house. This algorithmic study simulates the amount of water stored by lush green vegetation. In addition, we calculated and compared the amount of CO2 emitted by the solar community and the amount of CO2 reduced by greening. Based on these trial calculation results, we simulate the sustainable state of the earth as an algorithmic trial calculation. We believe that the composition of this group of solar communities should also be considered using digital technology as a control technology. 'Conclusion': We consider the solar community as a prototype for finding sustainable conditions for the planet. The role of water is very important, as the supply capacity of water is limited; however, the circulation of social life is not constructed according to the mechanisms of nature. This simulation trial calculation is explained using the total water supply volume as an example.
Following this process, the algorithmic calculation considers the total capacity of the water supply together with the population and the number of inhabitants the area can support. Green vegetated land is very important to keep enough water, and green vegetation is also very important to maintain the CO2 balance. A simulation trial calculation is possible from the relationship between the CO2 emissions of the solar community and the amount of CO2 reduction due to greening. In order to find this total balance and the sustainable conditions, the algorithmic simulation calculation takes into account lush vegetation and the total water supply. Research to find sustainable conditions is done by simulating an algorithmic model of the solar community as a prototype; in this one prototype example, the balance holds. The activities of our social life must take place within the permissible limits of natural mechanisms. Of course, we aim for a more ideal balance by utilizing auxiliary digital control technology such as AI.
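The trial calculation described above can be illustrated as a toy balance: water supply capacity against per-capita demand, and CO2 emissions against absorption by greened land. All figures and function names below are hypothetical illustrations, not the paper's data.

```python
def max_sustainable_population(total_water_supply_m3: float,
                               per_capita_demand_m3: float) -> int:
    """Toy water balance: largest population whose annual water demand
    stays within the annual supply capacity."""
    return int(total_water_supply_m3 // per_capita_demand_m3)

def net_co2(emissions_tonnes: float,
            absorption_per_hectare_tonnes: float,
            green_hectares: float) -> float:
    """Toy CO2 balance: net annual tonnes; zero or below means the greened
    area absorbs at least as much as the community emits."""
    return emissions_tonnes - absorption_per_hectare_tonnes * green_hectares

# Hypothetical community: 1,200,000 m^3/year of water, 150 m^3 per person,
# 400 t/year of CO2 emissions, 5 t/ha/year absorbed over 80 greened hectares.
population_limit = max_sustainable_population(1_200_000, 150)
balance = net_co2(400.0, 5.0, 80.0)
```

With these hypothetical inputs the community supports at most 8,000 people and its emissions are exactly offset; the study's actual simulation uses measured data from the solar model house.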

Keywords: solar community, sustainability, prototype, algorithmic simulation

Procedia PDF Downloads 47
374 On the Right to Effective Administrative Justice in the Republic of Macedonia: Challenges and Problems

Authors: Arlinda Memetaj

Abstract:

A sound system of administrative justice represents a vital element of democratic governance. The proper control of public administration consists not only of a sound civil service framework and legislative oversight, but also of empowering the public and the courts to hold public officials accountable for their decision-making, through the application of fair administrative procedural rules and the use of appropriate administrative appeals processes and judicial review. The establishment of an effective public administration has been, since the 1990s, among the most important and urgent strategic objectives of the Republic of Macedonia. To this end, the country has so far adopted a large series of legislative and strategic documents related to all aspects of the administrative justice system, designed to strengthen the legal position of citizens, businesses, civic organizations, and other societal subjects. 'Changes' and 'reforms' have thus been the most frequently used terms in this field in the country for more than 20 years. Several years ago the country established administrative courts, while repeatedly amending the Law on the General Administrative Procedure (LGAP). The new LGAP was adopted in 2015 and introduced considerable innovations. The most recent input in this regard is the National Public Administration Reform Strategy 2017-2022, one of whose key expected results is effective protection of citizens' rights. Nevertheless, a series of interrelated shortcomings remains, such as (to mention just a few) the complex appeal procedure and delays in enforcing court rulings. Against the above background, the paper first describes the Macedonian institutional and legislative framework in this field, and then illustrates its shortcomings.
It finally claims that the current status quo may be overcome only if administrative court decisions are properly implemented and subjected to a far stricter international monitoring process. A new approach and strong political commitment from the highest political leadership are thus absolutely needed to ensure the principles of transparency, accountability and merit in public administration. Given the character of the paper, the main methods used are descriptive, analytical and comparative.

Keywords: administrative justice, administrative procedure, administrative courts/disputes, European Human Rights Court, human rights, monitoring, reform, benefit

Procedia PDF Downloads 139
373 Freudian Psychoanalysis Towards an Ethics of Finitude

Authors: Katya E. Manalastas

Abstract:

This thesis is a dialogue with Freud about vulnerability and the forms of transience we encounter in life. The study argues that Freud's Ethics of Finitude, framed within the psychoanalytic context, is a critical theory about how human beings fail to become what they are because of their attachment to their illusions, to their visions of perfection and immortality. Freud's Ethics of Finitude positions itself between our detachment from ideals and the recognition of our own death through our loved ones. His texts portray the predicament of the finite individual who suffers from feelings of guilt and anxiety because of his failure to live up to the demands of his idealistic, civilized society. Civilized society has overestimated men's susceptibility to culture. It imposes excessive sublimation, conformity to rigid moral ideals, and instinctual repression to manage human aggression. However, by doing this, civilization becomes a main source of men's suffering: the lack of instinctive freedom results in a community of tamed but unhappy people. Civilization has also constructed theories and measures to rule out death and pain from the realities of life; therefore, a man lives his life repressing his instincts and ignorant of his own mortality. For Freud, war and neurosis are just a few of the consequences of a civilization that imprisons the individual in cultural hypocrisy instead of giving more play to truthfulness. The occurrence of the Great War destroyed our pride in the attainments of civilization and let loose the hostile impulses within us which we thought had been totally eradicated by means of instinctual repression and sublimation. War destroyed most of the things that we had loved and showed us the impermanence of all the things that we had deemed perfect and everlasting.
This chaotic event also revealed the damaging impact of our attachment to past values that no longer bind us, our futile attempts to escape suffering, and our refusal to confront the painfulness of loss and mourning. Against this backdrop, the study launches Freud's Ethics of Finitude, which culminates not in the submission of the individual to unquestioned authority, nor in blind optimism and the love of illusory happiness, but in a pedagogy of mourning that brings forth the authentic education of man towards the truth about himself. His Ethics of Finitude is a form of labor in and through which the individual steps out of the realm of illusions and ideals that hinder him from confronting his imperfections and accepting the difficulties of existence. Through his analysis of the Great War, Freud seeks to awaken in us the ability to evaluate the way we see ourselves and to live our lives with death in mind. His Ethics of Finitude leads us to the fulfillment of our first duty as living beings, which is to endure life. We can only endure life if we are prepared to die and to let go.

Keywords: critical theory, ethics of finitude, psychoanalysis, Sigmund Freud

Procedia PDF Downloads 63
372 Steel Concrete Composite Bridge: Modelling Approach and Analysis

Authors: Kaviyarasan D., Satish Kumar S. R.

Abstract:

India, being vast in area and population and having great scope for international business, expects big growth in its roadway and railway network connections. Numerous rail-cum-road bridges have been constructed across many major rivers in India, and a few are getting very old, so there is a strong possibility of repairing them or of building new bridges. Analysis and design of such bridges are practiced through conventional procedures and end up with heavy and uneconomical sections. Such heavy-class steel bridges, when subjected to strong seismic shaking, have a greater chance of failing through instability, because the members are rigid and stocky rather than flexible enough to dissipate the energy. This work is a collective study of the research done on truss bridges and steel-concrete composite truss bridges, presenting the methods of analysis and the tools for numerical and analytical modelling that evaluate their seismic behaviour and collapse mechanisms. To ascertain the inelastic and nonlinear behaviour of a structure, static pushover analysis is generally adopted at the research level. Though static pushover analysis is now extensively used for framed steel and concrete buildings to study their lateral behaviour, findings obtained for buildings cannot be used directly for bridges, because bridges have completely different performance requirements, behaviour and typology. Long-span steel bridges are mostly truss bridges. Since truss bridges are formed by many members and connections, failure of the system does not happen suddenly with a single event or the failure of one member; failure usually initiates in one member and progresses gradually to the next, and so on, under further loading.
This kind of progressive collapse of a truss bridge depends on many factors, of which the live load distribution and the span-to-length ratio are the most significant. Ultimate collapse is, in any case, by buckling of the compression members only. For regular bridges, single-step pushover analysis gives results close to those of nonlinear dynamic analysis. But for a complicated bridge, such as a heavy-class steel bridge, a skewed bridge or a bridge with complicated dynamic behaviour, nonlinear analysis capturing the progressive yielding and collapse pattern is mandatory. With knowledge of the post-elastic behaviour of bridges and advancements in computational facilities, the current level of analysis and design has moved to ascertaining the performance levels of bridges based on the damage caused by seismic shaking. Whereas the performance levels of buildings deal mostly with life safety and collapse prevention, bridges deal mostly with the extent of damage and how quickly it can be repaired, with or without disturbing the traffic, after a strong earthquake event. The paper compiles the wide spectrum of work on steel-concrete composite truss bridges, from modelling to analysis.
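As an illustration of the capacity curve that a static pushover analysis produces, the sketch below evaluates an idealized bilinear curve: elastic up to yield, reduced stiffness after. The stiffness and yield values are hypothetical, not from any bridge discussed here.

```python
def bilinear_base_shear(d: float, k_elastic: float,
                        v_yield: float, k_post: float) -> float:
    """Idealized bilinear pushover capacity curve: base shear at control
    displacement d, with elastic stiffness k_elastic up to the yield
    shear v_yield and post-yield stiffness k_post beyond it."""
    d_yield = v_yield / k_elastic
    if d <= d_yield:
        return k_elastic * d
    return v_yield + k_post * (d - d_yield)

# Hypothetical system: k_elastic = 200 kN/mm, v_yield = 1000 kN,
# k_post = 20 kN/mm. Yield displacement is 1000 / 200 = 5 mm.
curve = [bilinear_base_shear(d, 200.0, 1000.0, 20.0) for d in (2.0, 5.0, 10.0)]
```

A real pushover analysis builds this curve member by member as yielding and buckling progress; the bilinear idealization is only the simplest summary of that result.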

Keywords: bridge engineering, performance based design of steel truss bridge, seismic design of composite bridge, steel-concrete composite bridge

Procedia PDF Downloads 174
371 Are Oral Health Conditions Associated with Children’s School Performance and School Attendance in the Kingdom of Bahrain? A Life Course Approach

Authors: Seham A. S. Mohamed, Sarah R. Baker, Christopher Deery, Mario V. Vettore

Abstract:

Background: The link between oral health conditions (OHCs) and school performance and attendance remains unclear among Middle Eastern children. The association has been studied extensively in Western countries; however, several concerns have been raised regarding the reliability and validity of measures, the low quality of studies, the inadequate inclusion of potential confounders, and the lack of a conceptual framework. These limitations have meant that, to date, there has been no detailed understanding of the association or of the key social, clinical, behavioural and parental factors which may impact it. Aim: To examine the association between oral health conditions and children's school performance and attendance at Grade 2 in Muharraq city in the Kingdom of Bahrain, using Heilmann et al.'s (2015) life course framework for oral health. Objectives: To (1) describe the prevalence of oral health conditions among 7-8-year-old schoolchildren in the city of Muharraq; (2) analyse the social, biological, behavioural, and parental pathways that link early and current life exposures with children's current oral health status; (3) examine the association between oral health conditions and school performance and attendance among schoolchildren; (4) explore the early and current life course social, biological, behavioural and parental factors associated with children's school outcomes. Design: A time-ordered cross-sectional study was conducted with 466 schoolchildren aged 7-8 years and their parents from Muharraq city in the Kingdom of Bahrain. Data were collected through parents' self-administered questionnaires, children's face-to-face interviews, and dental clinical examinations. Outcome variables, including school performance and school attendance data, were obtained from the parents and school records. The data were analysed using structural equation modelling (SEM).
Results: The prevalence of dental caries, consequences of dental caries (PUFA/pufa), and enamel developmental defects (EDD) was 93.4%, 25.7%, and 17.2%, respectively. The findings from the SEM showed that children born into families with high SES were less likely to suffer from dentine dental caries (β = -0.248) and more likely to achieve high school performance (β = 0.136) at 7-8 years of age in Muharraq. In the children's current life course, dental plaque was directly and significantly associated with enamel caries (β = 0.094), dentine caries (β = 0.364) and treated teeth (filled or extracted because of dental caries) (β = 0.121), and indirectly associated with dental pain (β = 0.057). Further, dentine dental caries was directly and significantly associated with low school performance (β = -0.155), while dental plaque was indirectly associated with low school performance via dental caries (β = -0.044). Conversely, treated teeth were directly associated with high school performance (β = 0.100). Notably, none of the OHCs, biological, SES, behavioural, or parental conditions was related to school attendance. Conclusion: The life course approach was adequate to examine the role of OHCs in children's school performance and attendance. Social factors at birth and in the current life course (at 7-8 years of age) were significant predictors of poor oral health and poor school performance.
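In SEM, the indirect effect along a single mediation path is the product of its path coefficients. The sketch below applies that product rule to the plaque-to-performance path coefficients quoted above; note that the published indirect estimate (-0.044) comes from the full fitted model, so this single-path product is only illustrative.

```python
def indirect_effect(*path_coefficients: float) -> float:
    """Indirect effect along one mediation path: the product of its
    standardized path coefficients."""
    result = 1.0
    for beta in path_coefficients:
        result *= beta
    return result

# Path quoted above: plaque -> dentine caries (0.364),
# dentine caries -> school performance (-0.155).
plaque_to_performance = indirect_effect(0.364, -0.155)
```

The product is negative, matching the direction of the study's finding that plaque lowers school performance via caries.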

Keywords: dental caries, life course, Bahrain, school outcomes

Procedia PDF Downloads 90
370 The Creation of Calcium Phosphate Coating on Nitinol Substrate

Authors: Kirill M. Dubovikov, Ekaterina S. Marchenko, Gulsharat A. Baigonakova

Abstract:

NiTi alloys are widely used as implants in medicine due to their unique properties such as superelasticity, the shape memory effect and biocompatibility. However, despite these properties, one of the major problems is the release of nickel after prolonged use in the human body under dynamic stress. This occurs due to oxidation and cracking of NiTi implants, which provokes nickel segregation from the matrix to the surface and its release into living tissues. Nickel is a toxic element and can cause cancer, allergies, etc. One of the most popular ways to solve this problem is to create a corrosion-resistant coating on NiTi. There are many coatings of this type, but not all of them have good biocompatibility, which is very important for medical implants. Coatings based on calcium phosphate phases have excellent biocompatibility because Ca and P are the main constituents of the mineral part of human bone. This fact suggests that a Ca-P coating on NiTi can enhance osteogenesis and accelerate the healing process. Therefore, the aim of this study is to investigate the structure of a Ca-P coating on a NiTi substrate. Plasma-assisted radio frequency (RF) sputtering was used to obtain this film. This method was chosen because it allows the crystallinity and morphology of the Ca-P coating to be controlled by the sputtering parameters; it was used to obtain three NiTi samples with different Ca-P coatings. XRD, AFM, SEM and EDS were used to study the composition, structure and morphology of the coating phase. Scratch tests were carried out to evaluate the adhesion of the coating to the substrate, and wettability tests were used to investigate the hydrophilicity of the different coatings and to suggest which of them had better biocompatibility. XRD showed that the coatings of all samples were hydroxyapatite, while the matrix was represented by TiNi intermetallic compounds such as B2, Ti2Ni and Ni3Ti.
SEM shows that only the sample sputtered for three hours has a dense and defect-free coating. Wettability tests show that this sample has the lowest contact angle, 40.2°, and the highest surface free energy, 57.17 mJ/m², which is mostly dispersive. A scratch test was carried out to investigate the adhesion of the coating to the surface, and it showed that all coatings were removed by a cohesive mechanism. However, at a load of 30 N, the indenter reached the substrate in two of the three samples; only in the sample with the densest coating did it not. It was concluded that the most promising sputtering mode was the third, consisting of three hours of deposition, which produced a defect-free Ca-P coating with good wettability and adhesion.

Keywords: biocompatibility, calcium phosphate coating, NiTi alloy, radio frequency sputtering

Procedia PDF Downloads 60
369 An Improved Approach for Hybrid Rocket Injection System Design

Authors: M. Invigorito, G. Elia, M. Panelli

Abstract:

Hybrid propulsion combines beneficial properties of both solid and liquid rockets, such as multiple restarts and throttleability, as well as simplicity and reduced costs. A nitrous oxide (N2O)/paraffin-based hybrid rocket engine demonstrator is currently under development at the Italian Aerospace Research Center (CIRA) within the national research program HYPROB, funded by the Italian Ministry of Research. Nitrous oxide belongs to the class of self-pressurizing propellants that exhibit a high vapor pressure at standard ambient temperature. This peculiar feature makes such fluids very attractive for space rocket applications, because it avoids the use of complex pressurization systems, leading to great benefits in terms of weight savings and reliability. To avoid feed-system-coupled instabilities, the phase change is required to occur through the injectors. In this regard, the oxidizer is stored in the liquid state while target chamber pressures are designed to lie below the vapor pressure. The consequent cavitation and flash vaporization constitute a remarkably complex phenomenology that poses great modelling challenges. It is thus clear that the design of the injection system is fundamental for the full exploitation of hybrid rocket engine throttleability. The Analytical Hierarchy Process has been used to select the injection architecture as the best compromise among design criteria such as functionality, technological innovation and cost. The impossibility of using simplified engineering relations for the dimensioning of the injectors led to the need for a numerical approach based on OpenFOAM®. The numerical tool has been validated against selected experimental data from the literature. Quantitative as well as qualitative comparisons are performed in terms of mass flow rate and pressure drop across the injector for several operating conditions, and the results show satisfactory agreement with the experimental data.
Modelling assumptions, together with their impact on numerical predictions, are discussed in the paper. Once the reliability of the numerical tool had been assessed, the injection plate was designed and sized to guarantee the required amount of oxidizer in the combustion chamber and therefore to assure high combustion efficiency. To this purpose, the plate has been designed with multiple injectors whose number and diameter have been selected to reach the requested mass flow rate at the two operating conditions of maximum and minimum thrust. The overall design has been finally verified through three-dimensional computations in cavitating, non-reacting conditions, confirming that the proposed design solution is able to guarantee the requested mass flow rates.
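The Analytical Hierarchy Process step mentioned above can be illustrated with a short sketch. The pairwise comparison matrix below is purely hypothetical (the paper does not publish its judgments); it only shows how criterion weights and a consistency check are derived for the three criteria named in the abstract.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for the three
# criteria named in the abstract: functionality, technology innovation, cost.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Priority weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1); CR = CI / RI.
n = A.shape[0]
lam = eigvals.real[k]
CI = (lam - n) / (n - 1)
RI = 0.58          # Saaty's random index for n = 3
CR = CI / RI       # CR < 0.1 is conventionally taken as consistent

print(w, CR)
```

With these illustrative judgments, functionality receives the largest weight and the consistency ratio falls below the usual 0.1 threshold.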

Keywords: hybrid rocket, injection system design, OpenFOAM®, cavitation

Procedia PDF Downloads 204
368 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization

Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman

Abstract:

In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature. Different uncertainty representation approaches result in different outputs, and some approaches might estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which carry inherent aleatory and epistemic uncertainties, from the responses (outputs) of a given computational model. We use two different methodologies to approach the problem. In the first, we use sampling-based uncertainty propagation with first order error analysis. In the second, we place the emphasis on Percentile-Based Optimization (PBO). The NASA Langley MUQC’s subproblem A is constructed so that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients; this uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible. (iii) A parameter that might be aleatory but for which sufficient data are not available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. 
This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainty. Each of the parameters of the random variable is an unknown element of a known interval, and this uncertainty is reducible. From the study, it is observed that, due to practical limitations or computational expense, the sampling in the sampling-based methodology is not exhaustive. The sampling-based methodology therefore has a high probability of underestimating the output bounds. An optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is thus necessary, and this is achieved in this study by using PBO.
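The double-loop character of sampling-based propagation through a distributional p-box can be sketched as follows. The response function and all intervals here are invented stand-ins for the challenge model, shown only to make the outer (epistemic) and inner (aleatory) sampling loops concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical response function standing in for the challenge model.
    return x**2 + 1.0

# Type (iii) parameter: a normal input whose mean and standard deviation are
# only known to lie in intervals (a distributional p-box); values illustrative.
mu_lo, mu_hi = -0.5, 0.5
sd_lo, sd_hi = 0.5, 1.5

lo, hi = np.inf, -np.inf
for _ in range(200):                       # outer loop: epistemic samples
    mu = rng.uniform(mu_lo, mu_hi)
    sd = rng.uniform(sd_lo, sd_hi)
    x = rng.normal(mu, sd, size=1000)      # inner loop: aleatory samples
    m = model(x).mean()                    # statistic of interest: mean response
    lo, hi = min(lo, m), max(hi, m)

print(lo, hi)  # sampled bounds on the mean response
```

Because neither loop is exhaustive, the printed interval is an inner (under-) estimate of the true bounds, which is exactly the limitation that motivates the optimization-based PBO strategy.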

Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization

Procedia PDF Downloads 227
367 Optimal Delivery of Two Similar Products to N Ordered Customers

Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis

Abstract:

The vehicle routing problem (VRP) is a well-known problem in Operations Research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering products located at a central depot to customers who are scattered over a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles starts from the depot and visits the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have limited carrying capacity for the goods that must be delivered. In the present work, we present a specific capacitated stochastic vehicle routing problem with realistic applications to the distribution of materials to shops, healthcare facilities or military units. A vehicle starts its route from a depot loaded with items of two similar but not identical products, which we name product 1 and product 2. The vehicle must deliver the products to N customers according to a predefined sequence: first customer 1 must be serviced, then customer 2, then customer 3 and so on. The vehicle has a finite capacity and, after servicing all customers, it returns to the depot. It is assumed that each customer prefers either product 1 or product 2 with known probabilities; the actual preference of each customer becomes known when the vehicle visits the customer. It is also assumed that the quantity each customer demands is a random variable with known distribution, and the actual demand is revealed upon the vehicle’s arrival at the customer’s site. The demand of each customer cannot exceed the vehicle capacity, and the vehicle is allowed during its route to return to the depot to restock with quantities of both products. The travel costs between consecutive customers and the travel costs between the customers and the depot are known. 
If there is a shortage of the desired product, it is permitted to deliver the other product at a reduced price. The objective is to find the optimal routing strategy, i.e. the routing strategy that minimizes the expected total cost among all possible strategies. The optimal routing strategy can be found using a suitable stochastic dynamic programming algorithm. It is also possible to prove that the optimal routing strategy has a specific threshold-type structure, i.e. it is characterized by critical numbers. This structural result enables us to construct an efficient special-purpose dynamic programming algorithm that operates only over those routing strategies having this structure. The findings of the present study lead us to the conclusion that the dynamic programming method may be a very useful tool for the solution of specific vehicle routing problems. A problem for future research could be the study of a similar stochastic vehicle routing problem in which the vehicle, instead of delivering products, collects them from the ordered customers.
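To make the flavour of the dynamic programming recursion concrete, the following sketch solves a simplified single-product variant: customers are served in a fixed order, demand is revealed on arrival, and before each leg the vehicle chooses between travelling directly or restocking at the depot, with an emergency round trip on shortage. The paper's model is richer (two products, customer preferences, reduced-price substitution); the instance data here are invented.

```python
from functools import lru_cache

# Illustrative single-product instance (the paper treats two products).
Q = 5                                   # vehicle capacity
demand_pmf = {1: 0.5, 2: 0.3, 3: 0.2}   # demand distribution, same per customer
N = 4                                   # customers served in the fixed order 0..N-1
direct = [2.0] * N      # travel cost: previous location -> customer i
via_depot = [5.0] * N   # travel cost: previous location -> depot (refill) -> customer i
to_depot = [3.0] * N    # one-way cost customer i <-> depot (emergency refill)
home = 3.0              # cost of returning to the depot after the last customer

@lru_cache(maxsize=None)
def V(i, q):
    """Minimum expected cost to serve customers i..N-1, arriving with load q."""
    if i == N:
        return home
    # Action 1: travel directly and serve with the current load q.
    exp_direct = direct[i]
    for d, p in demand_pmf.items():
        if d <= q:
            exp_direct += p * V(i + 1, q - d)
        else:
            # Shortage: emergency round trip to the depot, refill to Q,
            # then deliver the remaining d - q units.
            exp_direct += p * (2 * to_depot[i] + V(i + 1, Q - (d - q)))
    # Action 2: restock at the depot first, arriving with a full load Q.
    exp_restock = via_depot[i]
    for d, p in demand_pmf.items():
        exp_restock += p * V(i + 1, Q - d)
    return min(exp_direct, exp_restock)

print(V(0, Q))  # optimal expected total cost, starting full at the depot
```

The threshold-type structure proved in the paper corresponds here to the observation that, at each customer, the restock action becomes optimal exactly when the remaining load falls below a critical number.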

Keywords: collection of similar products, dynamic programming, stochastic demands, stochastic preferences, vehicle routing problem

Procedia PDF Downloads 258
366 Transparency Obligations under the AI Act Proposal: A Critical Legal Analysis

Authors: Michael Lognoul

Abstract:

In April 2021, the European Commission released its AI Act Proposal, the first policy proposal at the European Union level to target AI systems comprehensively, in a horizontal manner. The Proposal notably aims to achieve an ecosystem of trust in the European Union, based on respect for fundamental rights, regarding AI. Among many other requirements, the AI Act Proposal aims to impose several generic transparency obligations on all AI systems to the benefit of natural persons facing those systems (e.g. information on the AI nature of systems in case of an interaction with a human). The Proposal also provides for more stringent transparency obligations, specific to AI systems that qualify as high-risk, to the benefit of their users, notably on the characteristics, capabilities, and limitations of the AI systems they use. Against that background, this research firstly presents all such transparency requirements in turn, as well as related obligations, such as the proposed obligations on record-keeping. Secondly, it offers a legal analysis of their scope of application, the content of the obligations, and their practical implications. On the scope of the transparency obligations tailored to high-risk AI systems, the research notes that it seems relatively narrow, given the proposed legal definition of the notion of users of AI systems. Hence, where end-users do not qualify as users, they may receive only very limited information, an element that might raise concern regarding the objective of the Proposal. On the content of the transparency obligations, the research highlights that the information that should benefit users of high-risk AI systems is both very broad and, from a technical perspective, very specific. The information required under those obligations therefore seems to create, prima facie, an adequate framework to ensure trust for users of high-risk AI systems. 
However, on the practical implications of these transparency obligations, the research notes that concern arises from the potential illiteracy of high-risk AI system users: they might not have sufficient technical expertise to fully understand the information provided to them, despite the wording of the Proposal, which requires that information be comprehensible to its recipients (i.e. users). On this matter, the research points out that there could be, more broadly, an important divergence between the level of detail of the information required by the Proposal and the level of expertise of users of high-risk AI systems. As a conclusion, the research provides policy recommendations to tackle (part of) the issues highlighted. It notably recommends broadening the scope of the transparency requirements for high-risk AI systems to encompass end-users. It also suggests that the principles of explanation put forward in the Guidelines for Trustworthy AI of the High-Level Expert Group should be included in the Proposal in addition to the transparency obligations.

Keywords: AI Act proposal, explainability of AI, high-risk AI systems, transparency requirements

Procedia PDF Downloads 283
365 Developmental Relationships between Alcohol Problems and Internalising Symptoms in a Longitudinal Sample of College Students

Authors: Lina E. Homman, Alexis C. Edwards, Seung Bin Cho, Danielle M. Dick, Kenneth S. Kendler

Abstract:

Research supports an association between alcohol problems and internalising symptoms, but how the two phenotypes relate to each other is poorly understood. It has been hypothesized that the relationship between the phenotypes is causal; however, investigations regarding its direction are inconsistent. Clarity may be provided by investigating the phenotypes’ developmental inter-relationships longitudinally. The objectives of the study were to investigate a) changes in alcohol problems and internalising symptoms in college students across time, b) the direction of effect between growth in alcohol problems and growth in internalising symptoms from late adolescence to emerging adulthood, and c) possible gender differences. The present study adds to the knowledge of the comorbidity of alcohol problems and internalising symptoms by examining a longitudinal sample of college students and the simultaneous development of the two sets of symptoms. A sample of college students is of particular interest as symptoms of both phenotypes often have their onset around this age. A longitudinal sample of college students from a large, urban, public university in the United States was used; data were collected over a period of 2 years at 3 time points. Latent growth models were applied to examine growth trajectories. Parallel process growth models were used to assess whether the initial level and rate of change of one symptom affected the initial level and rate of change of the second symptom. Possible effects of gender and ethnicity were investigated. Alcohol problems significantly increased over time, whereas internalising symptoms remained relatively stable. The two phenotypes were significantly correlated in each wave, and the correlations were stronger among males. The initial level of alcohol problems was significantly positively correlated with the initial level of internalising symptoms. 
The rate of change of alcohol problems positively predicted the rate of change of internalising symptoms for females but not for males. The rate of change of internalising symptoms did not predict the rate of change of alcohol problems for either gender. Participants of Black and Asian ethnicities reported significantly lower levels of alcohol problems and a lower increase in internalising symptoms across time, compared to White participants. Participants of Black ethnicity also reported significantly lower levels of internalising symptoms than White participants. The present findings provide additional support for a positive relationship between alcohol problems and internalising symptoms in youth. Our findings indicated that alcohol problems increased throughout the sample and that the two phenotypes were correlated. The findings mainly imply a bi-directional relationship between the phenotypes, in terms of significant associations between initial levels as well as rates of change. No direction of causality was indicated in males, but significant results were found in females, where alcohol problems acted as the main driver of the comorbidity of alcohol problems and internalising symptoms; alcohol may thus have more detrimental effects in females than in males. Importantly, our study examined a population-based longitudinal sample of college students, revealing that the observed relationships are not limited to individuals with clinically diagnosed mental health or substance use problems.
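A rough two-stage analogue of the parallel-process idea can be sketched on simulated data: estimate each person's rate of change for each phenotype, then correlate the rates across persons. This is not the latent growth model used in the study, only an illustration of what "rate of change of one phenotype predicts rate of change of the other" means operationally; all numbers below are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated panel: 200 "students" measured at 3 waves; scales are arbitrary.
n, waves = 200, 3
t = np.arange(waves)

# Individual slopes for alcohol problems and internalising symptoms, drawn
# to be positively correlated (the parallel-process scenario).
slope_alc = rng.normal(0.5, 0.2, n)
slope_int = 0.6 * slope_alc + rng.normal(0.0, 0.1, n)
alc = 2.0 + slope_alc[:, None] * t + rng.normal(0, 0.1, (n, waves))
internal = 1.5 + slope_int[:, None] * t + rng.normal(0, 0.1, (n, waves))

# Stage 1: per-person OLS slope (rate of change) for each phenotype.
slope_of = lambda y: np.polyfit(t, y, 1)[0]
s_alc = np.apply_along_axis(slope_of, 1, alc)
s_int = np.apply_along_axis(slope_of, 1, internal)

# Stage 2: correlate the rates of change across persons.
r = np.corrcoef(s_alc, s_int)[0, 1]
print(round(r, 2))  # positive: change in one phenotype tracks change in the other
```

A full latent growth analysis estimates the same intercept/slope structure jointly in a structural equation framework, which handles measurement error more gracefully than this two-stage shortcut.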

Keywords: alcohol, comorbidity, internalising symptoms, longitudinal modelling

Procedia PDF Downloads 336
364 3D CFD Model of Hydrodynamics in Lowland Dam Reservoir in Poland

Authors: Aleksandra Zieminska-Stolarska, Ireneusz Zbicinski

Abstract:

Introduction: The objective of the present work was to develop and validate a 3D CFD numerical model for simulating flow through a 17-kilometer-long dam reservoir of complex bathymetry. In contrast to flowing waters, dam reservoirs were not emphasized in the early years of water quality modeling, as the issue was never a major focus of urban development. Starting in the 1970s, however, it was recognized that natural and man-made lakes are equally important as, if not more important than, estuaries and rivers from a recreational standpoint. The Sulejow Reservoir (Central Poland) was selected as the study area as representative of many lowland dam reservoirs and due to the availability of a large database of the ecological, hydrological and morphological parameters of the lake. Method: 3D 2-phase and 1-phase CFD models were analysed to determine the hydrodynamics in the Sulejow Reservoir. Development of a 3D 2-phase CFD model of the flow requires the construction of a mesh with millions of elements and overcoming serious convergence problems. The 1-phase CFD model differs from the 2-phase model only in excluding the dynamics of waves from the simulations, which should not significantly change the water flow pattern in the case of lowland dam reservoirs. In the 1-phase CFD model, the phases (water-air) are separated by a plate, which allows calculation of the flow of one phase (water) only. As the wind affects the flow velocity, to account for the effect of the wind on the hydrodynamics in the 1-phase CFD model, the plate must move with a speed and direction equal to those of the upper water layer. To determine the velocity at which the plate moves on the water surface and interacts with the underlying layers of water, and to apply this value in the 1-phase CFD model, a 2D 2-phase model was elaborated. Result: The model was verified on the basis of extensive flow measurements (StreamPro ADCP, USA). 
Excellent agreement (an average error of less than 10%) between computed and measured velocity profiles was found. As a result of this work, the following main conclusions can be presented: • The results indicate that the flow field in the Sulejow Reservoir is transient in nature, with swirl flows in the lower part of the lake. Recirculating zones, with sizes of up to half a kilometer, may increase the water retention time in this region. • The results of the simulations confirm the pronounced effect of the wind on the development of the water circulation zones in the reservoir, which might affect the accumulation of nutrients in the epilimnion layer and result, e.g., in algae blooms. Conclusion: The resulting model is accurate, and the methodology developed in the frame of this work can be applied to all types of storage reservoir configurations, characteristics and hydrodynamic conditions. Large recirculating zones, which increase water retention time and might affect the accumulation of nutrients, were detected in the lake. An accurate CFD model of the hydrodynamics of a large water body could help in the development of water quality forecasts, especially in terms of eutrophication, and in the water management of big water bodies.
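The validation metric quoted above (average error between computed and measured velocity profiles) can be computed as in the short sketch below; the profile values are illustrative placeholders, not the paper's ADCP measurements.

```python
import numpy as np

# Hypothetical measured (ADCP) and computed velocity profiles at one station,
# in m/s at several depths; values are illustrative only.
measured = np.array([0.32, 0.30, 0.27, 0.22, 0.15])
computed = np.array([0.30, 0.29, 0.28, 0.21, 0.14])

# Average relative error between computed and measured profiles, the quantity
# behind the "average error of less than 10%" agreement criterion.
rel_err = np.abs(computed - measured) / np.abs(measured)
avg_err = rel_err.mean() * 100.0
print(f"average error: {avg_err:.1f}%")
```

In practice this comparison would be repeated for every measurement station and averaged over the profiles before judging the model validated.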

Keywords: CFD, mathematical modelling, dam reservoirs, hydrodynamics

Procedia PDF Downloads 392
363 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites

Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy

Abstract:

The Voronoi Diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci (one locus per site) – regions consisting of the points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. Construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which efficient O(n log n) algorithms exist for n segments. The reduction also includes preprocessing – constructing segments from the polygons’ sides – and postprocessing – constructing each polygon’s locus by merging the loci of its sides. This approach does not take into account two specific properties of the resulting segment sites. Firstly, all these segments are connected in pairs at the vertices of the polygons. Secondly, the interior of the polygon lies on one side of each segment, and the polygon is obviously included in its own locus. Using these properties in the VD construction algorithm is a resource for reducing computation. The article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which allows these properties to be taken into account effectively. The solution is again performed via reduction. Preprocessing is the construction of a set of sites from the vertices and edges of the polygons; each site is given an orientation such that the interior of the polygon lies to its left. The proposed algorithm constructs the VD for this set of oriented sites with the sweepline paradigm. Postprocessing is the selection of the edges of this VD formed by the centers of empty circles touching different polygons. The improved efficiency of the proposed sweepline algorithm, in comparison with the general Fortune algorithm, is achieved through the following fundamental solutions: 1. The algorithm constructs only those VD edges which lie outside the polygons. 
The concept of oriented sites makes it possible to avoid constructing VD edges located inside the polygons. 2. The list of events in the sweepline algorithm has a special property: the majority of events are connected with “medium” polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time, and not in logarithmic time as in the general Fortune algorithm. The proposed algorithm is fully implemented and tested on a large number of examples. The high reliability and efficiency of the algorithm are also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune’s algorithm in 1986, a full-scale implementation of this algorithm for an arbitrary set of segment sites has not been made. The proposed algorithm fills this gap for an important special case: a set of sites formed by polygons.
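The preprocessing step that orients each site so the polygon interior lies to its left can be sketched as follows: walking a polygon boundary counter-clockwise keeps the interior on the left of every edge, so it suffices to detect the winding direction with the shoelace formula and reverse clockwise vertex lists. This is a minimal illustration with hypothetical vertex data, not the paper's implementation.

```python
def signed_area(poly):
    """Twice the signed area via the shoelace formula; > 0 for CCW polygons."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return s

def orient_ccw(poly):
    """Return the vertex list in counter-clockwise order, so that walking the
    boundary keeps the polygon interior on the left of every edge site."""
    return poly if signed_area(poly) > 0 else poly[::-1]

square_cw = [(0, 0), (0, 1), (1, 1), (1, 0)]   # clockwise unit square
print(orient_ccw(square_cw))  # reversed into counter-clockwise order
```

With every edge site oriented this way, the sweepline phase can discard any candidate VD edge lying to the left of (i.e. inside) a site's polygon without further point-in-polygon tests.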

Keywords: Voronoi diagram, sweepline, polygon sites, Fortune's algorithm, segment sites

Procedia PDF Downloads 167
362 Culture Dimensions of Information Systems Security in Saudi Arabia National Health Services

Authors: Saleh Alumaran, Giampaolo Bella, Feng Chen

Abstract:

The study of organisations’ information security cultures has attracted scholars as well as the healthcare services industry to research the topic and find appropriate tools and approaches to develop a positive culture. The vast majority of studies in the Saudi national health services concern the use of technology to protect and secure health services information. On the other hand, there is a lack of research on the role and impact of an organisation’s cultural dimensions on information security. This research investigated and analysed the role and impact of cultural dimensions on information security in the Saudi Arabian health service. Hypotheses were tested and two surveys were carried out in order to collect data and information from three major hospitals in Saudi Arabia (SA). The first survey identified the main cultural-dimension problems in SA health services and developed an initial information security culture framework model. The second survey evaluated the developed framework model to test its usefulness, reliability and applicability. The model is based on human behaviour theory, where the individual’s attitude is the key element of the individual’s intention to behave as well as of his or her actual behaviour. The research identified six cultural dimensions: Saudi national culture, Saudi health service leadership, employees’ trust, technology, multicultural interactions and employees’ job roles. The research also identified a set of cultural sub-dimensions. These include working values and norms, tribal values and norms, attitudes towards women, power sharing, vision, social interaction, respect and understanding, the hospital intranet, the language(s) used by hospital employees, multi-national culture, the communication system, employees’ job satisfaction and job security. 
The research identified that (a) human behaviour towards medical information in SA is one of the main threats to information security and one of the main challenges to the SA health authority; (b) the current state of SA hospitals’ IS cultures falls short in protecting medical information, owing to the current values and norms towards information security; and (c) Saudi national culture and employees’ job roles are the dimensions playing the major roles in employees’ attitudes, while technology is the dimension playing the least important role.

Keywords: cultural dimension, electronic health record, information security, privacy

Procedia PDF Downloads 341
361 Earthquake Risk Assessment Using Out-of-Sequence Thrust Movement

Authors: Rajkumar Ghosh

Abstract:

Earthquakes are natural disasters that pose a significant risk to human life and infrastructure. Effective earthquake mitigation measures require a thorough understanding of the dynamics of seismic occurrences, including thrust movement. Traditionally, estimating thrust movement has relied on conventional techniques that may not capture the full complexity of these events; investigating alternative approaches, such as incorporating out-of-sequence thrust movement data, could therefore enhance earthquake mitigation strategies. This review aims to provide an overview of the applications of out-of-sequence thrust movement in earthquake mitigation. By examining existing research and studies, the objective is to understand how precise estimation of thrust movement can contribute to improving structural design, analyzing infrastructure risk, and developing early warning systems. The study demonstrates how to estimate out-of-sequence thrust movement using multiple data sources, including GPS measurements, satellite imagery, and seismic recordings. By analyzing and synthesizing these diverse datasets, researchers can gain a more comprehensive understanding of thrust movement dynamics during seismic occurrences. The review identifies potential advantages of incorporating out-of-sequence data in earthquake mitigation techniques. These include improving the efficiency of structural design, enhancing infrastructure risk analysis, and developing more accurate early warning systems. By considering out-of-sequence thrust movement estimates, researchers and policymakers can make informed decisions to mitigate the impact of earthquakes. This study contributes to the field of seismic monitoring and earthquake risk assessment by highlighting the benefits of incorporating out-of-sequence thrust movement data. 
By broadening the scope of analysis beyond traditional techniques, researchers can enhance their knowledge of earthquake dynamics and improve the effectiveness of mitigation measures. The study collects data from various sources, including GPS measurements, satellite imagery, and seismic recordings. These datasets are then analyzed using appropriate statistical and computational techniques to estimate out-of-sequence thrust movement. The review integrates findings from multiple studies to provide a comprehensive assessment of the topic. The study concludes that incorporating out-of-sequence thrust movement data can significantly enhance earthquake mitigation measures. By utilizing diverse data sources, researchers and policymakers can gain a more comprehensive understanding of seismic dynamics and make informed decisions. However, challenges exist, such as data quality difficulties, modelling uncertainties, and computational complications. To address these obstacles and improve the accuracy of estimates, further research and advancements in methodology are recommended. Overall, this review serves as a valuable resource for researchers, engineers, and policymakers involved in earthquake mitigation, as it encourages the development of innovative strategies based on a better understanding of thrust movement dynamics.

Keywords: earthquake, out-of-sequence thrust, disaster, human life

Procedia PDF Downloads 65