Search results for: convergence and smoothness
132 Globally Convergent Sequential Linear Programming for Multi-Material Topology Optimization Using Ordered Solid Isotropic Material with Penalization Interpolation
Authors: Darwin Castillo Huamaní, Francisco A. M. Gomes
Abstract:
The aim of multi-material topology optimization (MTO) is to obtain the optimal topology of structures composed of several materials, according to a given set of constraints and cost criteria. In this work, we seek the optimal distribution of materials in a domain, such that the flexibility (compliance) of the structure is minimized, under certain boundary conditions and the intervention of external forces. In the single-material case, each point of the discretized domain is represented by a function taking one of two values: 1 if the element belongs to the structure, or 0 if the element is empty. A common way to avoid the high computational cost of solving integer variable optimization problems is to adopt the Solid Isotropic Material with Penalization (SIMP) method. This method relies on a continuous interpolation function, a power function whose base variable represents a pseudo-density at each point of the domain. For proper exponent values, the SIMP method penalizes intermediate densities, since values other than 0 or 1 usually do not have a physical meaning for the problem. Several extensions of the SIMP method have been proposed for the multi-material case. The one that we explore here is the ordered SIMP method, which has the advantage of not adding variables to represent material selection, so the computational cost is independent of the number of materials considered. Although the number of variables is not increased by this algorithm, the optimization subproblems generated at each iteration cannot be solved by methods that rely on second derivatives, due to the cost of calculating them. To overcome this, we apply a globally convergent version of the sequential linear programming method, which solves a sequence of linear approximations of the optimization problem.
Keywords: global convergence, multi-material design, ordered SIMP, sequential linear programming, topology optimization
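The ordered SIMP interpolation referred to above can be sketched as follows. This is a minimal illustration, assuming a piecewise power law E = A * rho**p + B fitted between consecutive materials sorted by normalized density; the material table and exponent are invented for the example, not taken from the paper.

```python
import numpy as np

# Illustrative material table, sorted by normalized density.
densities = np.array([0.0, 0.4, 0.7, 1.0])     # void plus three materials
stiffness = np.array([0.0, 9.0, 70.0, 210.0])  # placeholder moduli [GPa]
p = 3.0                                        # penalization exponent

def ordered_simp_stiffness(rho):
    """Piecewise power interpolation of Young's modulus for a single
    pseudo-density variable rho in [0, 1]: E = A * rho**p + B on each
    density interval, so no extra material-selection variables appear."""
    rho = np.clip(rho, densities[0], densities[-1])
    i = min(np.searchsorted(densities, rho, side="right") - 1,
            len(densities) - 2)
    d0, d1 = densities[i], densities[i + 1]
    e0, e1 = stiffness[i], stiffness[i + 1]
    # Solve E(d0) = e0 and E(d1) = e1 for the scaling A and translation B.
    A = (e1 - e0) / (d1**p - d0**p)
    B = e0 - A * d0**p
    return A * rho**p + B

for r in (0.2, 0.5, 0.9):
    print(f"rho={r:.1f} -> E={ordered_simp_stiffness(r):.2f}")
```

Intermediate densities map to a penalized modulus within each interval, so the optimizer is still pushed toward the discrete material densities.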
Procedia PDF Downloads 314
131 Uncanny Orania: White Complicity as the Abject of the Discursive Construction of Racism
Authors: Daphne Fietz
Abstract:
This paper builds on a reflection on an autobiographical experience of uncanniness during fieldwork in the white Afrikaner settlement Orania in South Africa. Drawing on Kristeva’s theory of abjection to establish a theory of Whiteness based on boundary threats, it is argued that the uncanny experience, as the emergence of the abject, points to a moment of crisis of the author’s Whiteness. The emanating abject directs the author to her closeness or convergence with Orania's inhabitants, that is, a reciprocity based on mutual Whiteness. The experienced confluence appeals to the author’s White complicity in racism. With recourse to Butler’s theory of subjectivation, the abject, White complicity, inhabits both the outside of a discourse on racism and of the 'self', as 'I' establish myself in relation to discourse. In this view, the qualities of the experienced abject are linked to the abject of discourse on racism, or, in other words, its frames of intelligibility. It then becomes clear that discourse on (overt) racism functions as a necessary counter-image through which White morality is established instead of questioned, because here, by White reasoning, the abject of complicity in racism is successfully repressed, curbed, as completely impossible in the binary construction. Hence, such discourse risks preserving racism in its pre-discursive and structural forms as long as its critique does not encompass its own location and performance in discourse. Discourse on overt racism is indispensable to White ignorance, as it covers underlying racism and pre-empts further critique. This understanding directs us towards a form of critique which necessitates self-reflection, uncertainty, and vigilance, and which will be referred to as a discourse of relationality. Such a discourse diverges from the presumption of a detached author as a point of reference, and instead departs from attachment, dependence, and mutuality, and embraces the visceral as a resource of knowledge of relationality. A discourse of relationality points to another possibility of White engagement with Whiteness and racism, and further promotes a conception of responsibility which allows for and highlights dispossession and relationality in contrast to single agency and guilt.
Keywords: abjection, discourse, relationality, the visceral, whiteness
Procedia PDF Downloads 157
130 Effects of Aerodynamics on Suspended Cables Using Non-Linear Finite Element Approach
Authors: Justin Nwabanne, Sam Omenyi, Jeremiah Chukwuneke
Abstract:
This work presents a structural nonlinear static analysis of a horizontal taut cable using the Finite Element Analysis (FEA) method. The analysis was first performed analytically to determine the tension at each nodal point, and subsequently computationally, based on the finite element displacement method, using the FEA software ANSYS 14.0 to determine the cable's behaviour under the influence of aerodynamic forces imposed on it. A convergence procedure is built into the method to prevent excessive displacements during the computations. The work compared the two FEA cases by examining the effectiveness of the analytical model in describing the response with few degrees of freedom, and the ability of the nonlinear finite element procedure adopted to capture the complex features of cable dynamics with reference to the aerodynamic external influence. Results show that the analytical FEM solution without aerodynamic influence exhibits a parabolic response with an optimum deflection at nodal points 12 and 13, with the cable weight at nodes 12 and 13 having the value -1.002936 N, and the cable tension showing an optimum deflection value at nodes 12 and 13 of -189396.97 kg/km. The maximum displacement for the cable system obtained from ANSYS 14.0 was 4483.83 mm for the X, Y and Z displacement components at node 2, while the maximum displacement obtained over all directional components was 4218.75 mm. The dynamic behaviour of the taut cable investigated has application in typical power transmission lines. The aerodynamic influences on the cable, considered using the FEA approach in ANSYS 14.0, showed the complex modal behaviour expected.
Keywords: aerodynamics, cable tension and weight, finite element analysis, nodal, non-linear model, optimum deflection, suspended cable, transmission line
Procedia PDF Downloads 277
129 Fully Eulerian Finite Element Methodology for the Numerical Modeling of the Dynamics of Heart Valves
Authors: Aymen Laadhari
Abstract:
During the last decade, an increasing number of contributions have been made in the fields of scientific computing and numerical methodologies applied to the study of hemodynamics in the heart. In contrast, the numerical aspects concerning the interaction of pulsatile blood flow with highly deformable thin leaflets have been much less explored. This coupled problem remains extremely challenging, and numerical difficulties include, e.g., the resolution of the full fluid-structure interaction problem with large deformations of extremely thin leaflets, substantial mesh deformations, high transvalvular pressure discontinuities, and contact between leaflets. Although the Lagrangian description of the structural motion and strain measures is naturally used, many numerical complexities can arise when studying large deformations of thin structures. Eulerian approaches represent a promising alternative for readily modeling large deformations and handling contact issues. We present a fully Eulerian finite element methodology tailored for the simulation of pulsatile blood flow in the aorta and sinus of Valsalva interacting with highly deformable thin leaflets. Our method makes it possible to use a fluid solver on a fixed mesh, whilst being able to easily model the mechanical properties of the valve. We introduce a semi-implicit time integration scheme based on a consistent Newton-Raphson linearization. A variant of the classical Newton method is introduced and guarantees third-order convergence. High-fidelity computational geometries are built and simulations are performed under physiological conditions. We address in detail the main features of the proposed method, and we report several experiments with the aim of illustrating its accuracy and efficiency.
Keywords: Eulerian, level set, Newton, valve
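The abstract does not spell out which cubically convergent Newton variant is used; as a generic stand-in, the sketch below shows Traub's two-step scheme, a classical third-order modification that reuses the derivative from the predictor step. The scalar test function is arbitrary.

```python
def traub_newton(f, df, x0, tol=1e-12, max_iter=50):
    """Traub's two-step Newton variant: one extra function evaluation per
    iteration with a frozen derivative, giving cubic convergence for
    simple roots (an illustrative stand-in for the paper's scheme)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d = df(x)
        y = x - fx / d       # ordinary Newton (predictor) step
        x = y - f(y) / d     # corrector step, reusing the derivative
    return x

# Example: root of f(x) = x**3 - 2*x - 5, a classic Newton test problem.
root = traub_newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0)
print(root)  # ~2.0945514815423265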
Procedia PDF Downloads 277
128 Partnering with Stakeholders to Secure Digitization of Water
Authors: Sindhu Govardhan, Kenneth G. Crowther
Abstract:
Modernisation of the water sector is increasing connectivity and integrating emerging technologies with traditional ones, creating new security risks. The convergence of Information Technology (IT) with Operational Technology (OT) results in solutions that are spread across larger geographic areas, increasingly consist of interconnected Industrial Internet of Things (IIoT) devices and software, rely on the integration of legacy with modern technologies, and use complex supply chain components, leading to complex architectures and communication paths. The result is that multiple parties collectively own and operate these emergent technologies, threat actors find new paths to exploit, and traditional cybersecurity controls are inadequate. Our approach is to explicitly identify and draw data flows that cross trust boundaries between owners and operators of various aspects of these emerging and interconnected technologies. On these data flows, we layer potential attack vectors to create a frame of reference for evaluating possible risks against connected technologies. Finally, we identify where existing controls, mitigations, and other remediations exist across industry partners (e.g., suppliers, product vendors, integrators, water utilities, and regulators). From these, we are able to understand potential gaps in security, the roles in the supply chain that are most likely to effectively remediate those security gaps, and test cases to evaluate and strengthen security across these partners. This informs a “shared responsibility” solution that recognises that security is multi-layered and requires collaboration to be successful. This shared responsibility security framework improves visibility, understanding, and control across the entire supply chain, particularly for those water utilities that are accountable for safe and continuous operations.
Keywords: cyber security, shared responsibility, IIoT, threat modelling
Procedia PDF Downloads 75
127 Information Visualization Methods Applied to Nanostructured Biosensors
Authors: Osvaldo N. Oliveira Jr.
Abstract:
The control of molecular architecture inherent in some experimental methods to produce nanostructured films has had great impact on devices of various types, including sensors and biosensors. The self-assembled monolayer (SAM) and electrostatic layer-by-layer (LbL) techniques, for example, are now routinely used to produce tailored architectures for biosensing where biomolecules are immobilized with long-lasting preserved activity. Enzymes, antigens, antibodies, peptides and many other molecules serve as the molecular recognition elements for detecting an equally wide variety of analytes. The principles of detection are also varied, including electrochemical methods, fluorescence spectroscopy and impedance spectroscopy. In this presentation, an overview will be provided of biosensors made with nanostructured films to detect antibodies associated with tropical diseases and HIV, in addition to the detection of analytes of medical interest such as cholesterol and triglycerides. Because large amounts of data are generated in the biosensing experiments, use has been made of computational and statistical methods to optimize performance. Multidimensional projection techniques such as Sammon's mapping have been shown to be more efficient than traditional multivariate statistical analysis in identifying small concentrations of anti-HIV antibodies and for distinguishing between blood serum samples of animals infected with two tropical diseases, namely Chagas disease and Leishmaniasis. Optimization of biosensing may include a combination of another information visualization method, the parallel coordinates technique, with artificial intelligence methods in order to identify the most suitable frequencies for reaching higher sensitivity using impedance spectroscopy. Also discussed will be the possible convergence of technologies, through which machine learning and other computational methods may be used to treat data from biosensors within an expert system for clinical diagnosis.
Keywords: clinical diagnosis, information visualization, nanostructured films, layer-by-layer technique
Procedia PDF Downloads 334
126 Orbit Determination from Two Position Vectors Using Finite Difference Method
Authors: Akhilesh Kumar, Sathyanarayan G., Nirmala S.
Abstract:
A novel approach is developed to determine the orbit of satellites/space objects. The determination of orbits is treated as a boundary value problem and solved using the finite difference method (FDM). Only the positions of the satellites/space objects are known at two end times, taken as boundary conditions. The technique of finite differences has been used to calculate the orbit between the end times. In this approach, the governing equation is defined as the satellite's equation of motion with a perturbed acceleration. Using the finite difference method, the governing equations and boundary conditions are discretized. The resulting system of algebraic equations is solved using the Tri-Diagonal Matrix Algorithm (TDMA) until convergence is achieved. The methodology was tested and evaluated using all GPS satellite orbits from the National Geospatial-Intelligence Agency (NGA) precise product for DOY 125, 2023. Toward this, twelve two-hour sets were considered, and only the positions at the end times of each of the twelve sets were used as boundary conditions. This algorithm was applied to all GPS satellites, and the results achieved using the FDM were compared with the NGA precise orbits. The maximum RSS error for the position is 0.48 [m] and for the velocity 0.43 [mm/sec]. The algorithm was also applied to the IRNSS satellites for DOY 220, 2023; the maximum RSS error for the position is 0.49 [m], and for the velocity 0.28 [mm/sec]. Next, a simulation was performed for a highly elliptical orbit for DOY 63, 2023, for a duration of 6 hours. The RSS of the difference in position is 0.92 [m] and in velocity 1.58 [mm/sec] for orbital speeds above 5 km/sec, whereas the RSS of the difference in position is 0.13 [m] and in velocity 0.12 [mm/sec] for orbital speeds below 5 km/sec. The results show that the newly created method is reliable and accurate. Further applications of the developed methodology include missile and spacecraft targeting, orbit design (mission planning), space rendezvous and interception, space debris correlation, and navigation solutions.
Keywords: finite difference method, grid generation, NavIC system, orbit perturbation
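The TDMA (Thomas algorithm) named above is standard; here is a compact sketch, applied to a 1-D analogue of the boundary-value setup: a second-order central-difference discretization with positions fixed at the two end times. The constant forcing term is invented for illustration and is not the paper's perturbed orbital dynamics.

```python
import numpy as np

def tdma(a, b, c, d):
    """Thomas algorithm for a tridiagonal system: a = sub-diagonal
    (a[0] unused), b = diagonal, c = super-diagonal (c[-1] unused),
    d = right-hand side; all of length n."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Toy boundary-value problem x''(t) = -9.81 with x(0) = x(1) = 0,
# discretized by central differences (zero end positions drop out of d).
n, h = 99, 1.0 / 100
a, b, c = np.ones(n), -2.0 * np.ones(n), np.ones(n)
d = (h**2) * (-9.81) * np.ones(n)
x = tdma(a, b, c, d)
print(x.max())  # ~9.81/8 = 1.226 at the midpoint
```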
Procedia PDF Downloads 82
125 Factors of Divergence of Shari'ah Supervisory Opinions and Its Effects on the Harmonization of Islamic Banking Products and Services
Authors: Dlir Abdullah Ahmed
Abstract:
The overall aims of this study are to investigate the effects of differences of opinion among Shari'ah supervisory bodies on the standardization and internationalization of Islamic banking products and services. The study used semi-structured in-depth interviews in which five Shari'ah advisors from both the Middle East and Malaysia participated. The data were analyzed by both manual and software techniques. The findings reveal that there are indeed differences of opinion among Shari'ah advisors in different jurisdictions. These differences are due to differences in educational background, schools of thought, the environment in which they operate, and legal requirements. Moreover, the findings also reveal that these differences of opinion among Shari'ah bodies create confusion among the public and bankers, and negatively affect the standardization of Islamic banking transactions. In addition, the study has explored the possibility of developing Islamic-based products. However, the findings show that it is difficult for the industry to have Islamic-based products due to high competition from conventional counterparts, legal constraints, and moral hazard. Furthermore, the findings indicate that a lack of political will and unity and a lack of technology are the main constraints on the internationalization of Islamic banking products. Last but not least, the study found that there are possibilities for the convergence of opinions and the standardization of Islamic banking products and services if there is a unified international Shari'ah advisory council, international basic requirements for Islamic Shari'ah advisors, and increased training and education of Islamic bankers. This study has several implications for bankers, policymakers, and researchers. Policymakers should resolve their political differences and set up a unified international advisory council and an international research and development center. Bankers should increase the training and education of the workforce, as well as improve their banking infrastructure, to facilitate cross-border transactions.
Keywords: Shari'ah views, Islamic banking, products and services, standardization
Procedia PDF Downloads 69
124 Determining the Extent and Direction of Relief Transformations Caused by Ski Run Construction Using LIDAR Data
Authors: Joanna Fidelus-Orzechowska, Dominika Wronska-Walach, Jaroslaw Cebulski
Abstract:
Mountain areas are very often exposed to numerous transformations connected with the development of tourist infrastructure. In the mountain areas of Poland, ski tourism is very popular, so agricultural areas are often transformed into tourist areas. The construction of new ski runs can change the direction and rate of slope development. The main aim of this research was to determine the geomorphological and hydrological changes within slopes caused by ski run construction. The study was conducted in the Remiaszów catchment in the Inner Polish Carpathians (southern Poland). The mean elevation of the catchment is 859 m a.s.l. and the maximum is 946 m a.s.l. The surface area of the catchment is 1.16 km², of which 16.8% is the area of the two studied ski runs. The studied ski runs were constructed in 2014 and 2015. In order to determine the relief transformations connected with the new ski run construction, high-resolution LIDAR data were analyzed. The general relief changes in the studied catchment were determined on the basis of ALS (Airborne Laser Scanning) data obtained before (2013) and after (2016) ski run construction. Based on the two sets of ALS data, a digital elevation model of differences (DoD) was created, which made it possible to determine the quantitative relief changes in the entire studied catchment. Additionally, cross and longitudinal profiles were calculated within slopes where the new ski runs were built. Detailed data on relief changes within selected test surfaces were obtained based on TLS (Terrestrial Laser Scanning). Hydrological changes within the analyzed catchment were determined based on the convergence and divergence index. The study shows that the construction of the new ski runs caused significant geomorphological and hydrological changes in the entire studied catchment; however, the most important changes were identified within the ski slopes. After the construction of the ski runs, the entire catchment surface was lowered by about 0.02 m on average. Hydrological changes in the studied catchment mainly led to the interruption of surface runoff pathways and changes in runoff direction and geometry.
Keywords: hydrological changes, mountain areas, relief transformations, ski run construction
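A DEM of Difference (DoD) of the kind described is simply the per-cell difference of two co-registered elevation rasters, with changes below a limit of detection masked as noise. A minimal sketch follows; the grids, noise levels, and detection threshold are invented for illustration.

```python
import numpy as np

def dem_of_difference(dem_after, dem_before, lod=0.05):
    """DEM of Difference: per-cell elevation change between two
    co-registered rasters; changes smaller than the limit of detection
    (lod, in metres) are masked out as survey noise."""
    dod = dem_after - dem_before
    dod[np.abs(dod) < lod] = np.nan
    return dod

# Toy "2013" vs "2016" grids; real inputs would be the ALS-derived DEMs.
before = np.random.default_rng(0).normal(900.0, 5.0, (100, 100))
after = before - 0.02 + np.random.default_rng(1).normal(0.0, 0.01, (100, 100))
dod = dem_of_difference(after, before)
print("mean detected change [m]:", np.nanmean(dod))
```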
Procedia PDF Downloads 142
123 Logical Thinking: A Surprising and Promising Insight for Creative and Critical Thinkers
Authors: Luc de Brabandere
Abstract:
Researchers in various disciplines have long tried to understand how a human being thinks. Most of them seem to agree that the brain works in two very different modes. For us, the first phase of thought imagines, diverges, and unlocks the field of possibilities. The second phase judges, converges, and chooses. But if we were to stop there, that would give the impression that thought is essentially an individual effort that seldom depends on context. This is, however, not the case. Whether we are a champion in creativity, so primarily in induction, or a master in logic, where we are confronted with reality, the ideas we lay out are indeed destined to be presented to third parties. They should therefore be exposed, defended, communicated, negotiated, or even sold. Regardless of the quality of the concepts we craft (creative thinking) and the inferences we build (logical thinking), we will one day or another be confronted by people whose beliefs, opinions, and ideas differ from ours (critical thinking). Logic and critique: the shared characteristics of logical and critical thought include a three-level structure of reasoning invented by the Greeks. For the first time in history, Aristotle tried to model thought as deployable in three stages: the concept, the statement, and the reasoning. The three levels can be assessed according to different criteria. A concept is more or less useful, a statement is true or false, and reasoning is right or wrong. This three-level structure allows us to differentiate logic and critique, where the intention and the words used are not the same. Logic only deals with the structure of reasoning and exhausts the problem. It regards premises as acquired and excludes the debate. Logic works in certainty and pursues the truth; critique works in probability and searches for the plausible. Logic and creativity: many known models present the brain as a two-stroke engine (divergence vs. convergence, fast vs. slow, left-brain vs. right-brain, Yin vs. Yang, etc.). But that is not the whole story. “Why didn’t we think of that before?” How often have we heard that sentence? A creative idea is the outcome of logic, but you can only understand it afterward! Through the use of exercises, we will witness how logic and creativity work together. A third theme is hidden behind the two main themes of the conference: logical thought, on which the author can shed some light.
Keywords: creativity, logic, critique, digital
Procedia PDF Downloads 87
122 Improving Chest X-Ray Disease Detection with Enhanced Data Augmentation Using Novel Approach of Diverse Conditional Wasserstein Generative Adversarial Networks
Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Daniyal Haider, Xiaodong Yang
Abstract:
Chest X-rays are instrumental in the detection and monitoring of a wide array of diseases, including viral infections such as COVID-19, tuberculosis, pneumonia, lung cancer, and various cardiac and pulmonary conditions. To enhance the accuracy of diagnosis, artificial intelligence (AI) algorithms, particularly deep learning models like Convolutional Neural Networks (CNNs), are employed. However, these deep learning models demand a substantial and varied dataset to attain optimal precision. Generative Adversarial Networks (GANs) can be employed to create new data, thereby supplementing the existing dataset and enhancing the accuracy of deep learning models. Nevertheless, GANs have their limitations, such as issues related to stability, convergence, and the ability to distinguish between authentic and fabricated data. In order to overcome these challenges and advance the detection and classification of normal and abnormal CXR images, this study introduces a technique known as DCWGAN (Diverse Conditional Wasserstein GAN) for generating synthetic chest X-ray (CXR) images. The study evaluates the effectiveness of the DCWGAN technique using the ResNet50 model and compares its results with those obtained using the traditional GAN approach. The findings reveal that the ResNet50 model trained on the DCWGAN-generated dataset outperformed the model trained on the classic GAN-generated dataset. Specifically, the ResNet50 model utilizing DCWGAN synthetic images achieved impressive performance metrics, with an accuracy of 0.961, precision of 0.955, recall of 0.970, and F1-measure of 0.963. These results indicate the promising potential of this approach for the early detection of diseases in CXR images.
Keywords: CNN, classification, deep learning, GAN, ResNet50
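The abstract does not give the DCWGAN loss itself; as background, the conditional Wasserstein critic loss with gradient penalty (the usual WGAN-GP ingredients such a method builds on) can be sketched as below. The critic(images, labels) signature and the 4-D image tensors (N, C, H, W) are assumptions for illustration, not the paper's architecture.

```python
import torch

def critic_loss_wgan_gp(critic, real, fake, labels, gp_weight=10.0):
    """Conditional Wasserstein critic loss with gradient penalty: the
    Wasserstein term separates real from fake scores, and the penalty
    pushes the critic's gradient norm toward 1 on interpolated samples."""
    loss_w = critic(fake, labels).mean() - critic(real, labels).mean()
    # Gradient penalty on random interpolates between real and fake.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    inter = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(inter, labels).sum(), inter,
                               create_graph=True)[0]
    gp = ((grad.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
    return loss_w + gp_weight * gp
```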
Procedia PDF Downloads 85
121 Stability of Pump Station Cavern in Chagrin Shale with Time
Authors: Mohammad Moridzadeh, Mohammad Djavid, Barry Doyle
Abstract:
An assessment of the long-term stability of a cavern in Chagrin shale excavated by the sequential excavation method was performed during and after construction. During the excavation of the cavern, deformations of the rock mass were measured at the surface of the excavation and within the rock mass by surface and deep measurement instruments. Rock deformations measured during construction appeared to result from the as-built excavation sequence, which had potentially disturbed the rock and its behavior. Some additional time-dependent rock deformations were also observed during and after excavation. Several opinions have been expressed to explain this time-dependent deformation, including stress changes induced by excavation, strain softening (or creep) in the beddings with and without clay, and creep of the shaley rock under compressive stresses. In order to analyze and replicate the rock behavior observed during excavation, including current and post-excavation elastic, plastic, and time-dependent deformation, Finite Element Analysis (FEA) was performed. The analysis was also intended to estimate the long-term deformation of the rock mass around the excavation. Rock mass behavior, including time-dependent deformation, was measured by means of rock surface convergence points, multi-point borehole extensometers (MPBXs), extended creep testing on the long anchors, and load history data from load cells attached to several long anchors. Direct creep testing of Chagrin shale was performed on core samples from the wall of the pump room. The results of these measurements were used to calibrate the FEA of the excavation. These analyses incorporate time-dependent constitutive modeling of the rock to evaluate the potential long-term movement in the roof, walls, and invert of the cavern. The modeling was performed due to concerns regarding the unanticipated behavior of the rock mass, as well as to forecast the long-term deformation and stability of the rock around the excavation.
Keywords: cavern, Chagrin shale, creep, finite element
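The paper does not state which time-dependent constitutive law was fitted; a common choice for this kind of creep modeling is the Norton-Bailey power law, sketched below with placeholder coefficients that are not fitted Chagrin shale values.

```python
import numpy as np

def norton_bailey_creep_strain(sigma, t, A=1e-30, n=3.0, m=0.4):
    """Norton-Bailey power-law creep: strain = A * sigma**n * t**m,
    with sigma in Pa and t in s. A, n, m are illustrative placeholders
    that would be calibrated against the direct creep test data."""
    return A * sigma**n * t**m

one_year = 3600.0 * 24 * 365
print(norton_bailey_creep_strain(20e6, one_year))  # creep strain at 20 MPa
```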
Procedia PDF Downloads 350
120 The Colouration of Additive-Manufactured Polymer
Authors: Abisuga Oluwayemisi Adebola, Kerri Akiwowo, Deon de Beer, Kobus Van Der Walt
Abstract:
The convergence of additive manufacturing (AM) and traditional textile dyeing techniques has opened innovative possibilities for improving the visual appeal and customization potential of 3D-printed polymer objects. Over centuries, textile dyeing techniques have evolved to transform fabrics with vibrant colours and complex patterns. The layer-by-layer deposition characteristic of AM necessitates adaptations in dye application methods to ensure even colour penetration across complex surfaces. Compatibility between dye formulations and polymer matrices influences colour uptake and stability, demanding careful selection and testing of dyes for optimal results. This study investigates the developing interaction between these areas, revealing the challenges and opportunities of applying textile dyeing methods to colour 3D-printed polymer materials. The method explores three approaches to colouring the 3D-printed polymer object: (a) additive manufacturing of a prototype, (b) the traditional dyebath method, and (c) the contemporary digital sublimation technique. The results show that the layer lines inherent to AM interact with dyes differently than traditional textile fibres do and affect the visual outcome. Skillful manipulation of the textile dyeing methods and dye types used in this research reduced the appearance of these lines, achieving consistent and desirable colour outcomes. In conclusion, integrating textile dyeing techniques into the colouring of 3D-printed polymer materials connects historical craftsmanship with innovative manufacturing. Overcoming the challenges of colour distribution, compatibility, and layer line management requires a holistic approach that blends the technical consistency of AM with the artistic sensitivity of textile dyeing. Hence, applying textile dyeing methods to 3D-printed polymers opens new dimensions of aesthetic and functional possibilities.
Keywords: polymer, 3D-printing, sublimation, textile, dyeing, additive manufacturing
Procedia PDF Downloads 66
119 Mid-Winter Stratospheric Warming Effects on Equatorial Dynamics over Peninsular India
Authors: Shweta Srikumar
Abstract:
Winter stratospheric dynamics is a highly variable and spectacular field of research in the middle atmosphere. It is well established that the interaction of energetic planetary waves with the mean flow causes the stratospheric temperature to increase, with an associated circulation reversal. This wave-driven sudden disturbance of the polar stratosphere is termed a Sudden Stratospheric Warming (SSW). The main objective of the present work is to investigate the effects of mid-winter major stratospheric warming events on equatorial dynamics over Peninsular India. To explore the effect of mid-winter stratospheric warming on the Indian region (60°E-100°E), we selected the winters 2003/04, 2005/06, 2008/09, 2012/13 and 2018/19. This study utilized data from the ERA-Interim reanalysis, Outgoing Longwave Radiation (OLR) from NOAA, and satellite data from the NASA TRMM mission. It is observed that a sudden drop in OLR (averaged over the Indian region) occurs during the course of the warming for the winters 2005/06, 2008/09 and 2018/19, whereas in the winters 2003/04 and 2012/13 the drop in OLR happens prior to the onset of the major warming. Significant planetary wave activity is observed in the equatorial lower stratosphere, indicating the propagation of extra-tropical planetary waves from high latitudes to the equator. During the course of the warming, a strong downward propagation of Eliassen-Palm (EP) flux convergence is observed from the polar to the equatorial region. The polar westward wind reaches up to 20°N, and a weak eastward wind dominates the equator, during the winters 2003/04, 2005/06 and 2018/19; in the 2012/13 winter, the polar westward wind reaches up to the equator, while the equatorial wind in 2008/09 is dominated by a strong westward wind. Further detailed results will be presented at the conference.
Keywords: equatorial dynamics, outgoing longwave radiation, sudden stratospheric warming, planetary waves
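For reference, the EP flux diagnostic invoked above is usually computed in its standard quasi-geostrophic, spherical log-pressure form; the abstract does not state the exact formulation used, so the following is the textbook version.

```latex
% Quasi-geostrophic Eliassen-Palm flux and its wave driving of the
% zonal-mean zonal wind (standard diagnostic form):
\begin{align*}
  F^{(\phi)} &= -\rho_0\, a\cos\phi\,\overline{u'v'}, &
  F^{(z)}    &= \rho_0\, a\cos\phi\, f\,
                \frac{\overline{v'\theta'}}{\partial\overline{\theta}/\partial z},\\
  \frac{\partial \bar{u}}{\partial t}
             &\simeq \frac{\nabla\!\cdot\!\mathbf{F}}{\rho_0\, a\cos\phi}
              \;+\; \text{(residual-circulation terms)}.
\end{align*}
```

EP flux convergence (negative divergence) thus decelerates the zonal-mean westerlies, which is the mechanism behind the circulation reversal described above.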
Procedia PDF Downloads 141
118 Ramp Rate and Constriction Factor Based Dual Objective Economic Load Dispatch Using Particle Swarm Optimization
Authors: Himanshu Shekhar Maharana, S. K .Dash
Abstract:
Economic Load Dispatch (ELD) is a vital optimization process in electric power systems for allocating generation among various units, computing the cost of generation and the cost of emission of global-warming gases such as sulphur dioxide, nitrous oxide, and carbon monoxide. In this work, we emphasize ramp rate and constriction factor based particle swarm optimization (RRCPSO) for analyzing various performance objectives, namely the cost of generation, the cost of emission, and a dual objective function involving both of these objectives, through simulated experimental results. A 6-unit, 30-bus IEEE test case system has been utilized for simulating the results, involving improved weight factors and advanced ramp rate limit constraints for optimizing the total cost of generation and emission. This method increases the tendency of particles to venture into the solution space, improving their convergence rates. Earlier works using dispersed PSO (DPSO) and constriction factor based PSO (CPSO) give rise to comparatively higher computational times and poorer optimal solutions than the present approach. This paper applies the ramp rate and constriction factor based PSO to compute the various objectives, namely cost, emission, and the combined objective, and compares the results with the DPSO and weight improved PSO (WIPSO) techniques, illustrating lower computational time and better optimal solutions.
Keywords: economic load dispatch (ELD), constriction factor based particle swarm optimization (CPSO), dispersed particle swarm optimization (DPSO), weight improved particle swarm optimization (WIPSO), ramp rate and constriction factor based particle swarm optimization (RRCPSO)
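For background, the Clerc-Kennedy constriction-factor update at the core of such a CPSO/RRCPSO scheme can be sketched as below. The velocity clip stands in for a generic change limit, since the actual unit ramp-rate data of the 6-unit system are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

# Clerc-Kennedy constriction factor: chi = 2 / |2 - phi - sqrt(phi^2 - 4 phi)|
c1 = c2 = 2.05
phi = c1 + c2
chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))  # ~0.7298

def pso_step(x, v, pbest, gbest, vmax):
    """One constriction-factor PSO update of particle positions x and
    velocities v; the clip on v plays the role of a generic per-step
    change limit, a stand-in for generator ramp-rate constraints."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    v = np.clip(v, -vmax, vmax)
    return x + v, v

# Typical use: x holds one row of unit outputs [MW] per particle;
# evaluate the cost/emission fitness, update pbest/gbest, then call
# x, v = pso_step(x, v, pbest, gbest, vmax=50.0) each iteration.
```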
Procedia PDF Downloads 379
117 A Hybrid-Evolutionary Optimizer for Modeling the Process of Obtaining Bricks
Authors: Marius Gavrilescu, Sabina-Adriana Floria, Florin Leon, Silvia Curteanu, Costel Anton
Abstract:
The natural sciences provide a wide range of experimental data whose related problems require study and modeling beyond the capabilities of conventional methodologies. Such problems have solution spaces whose complexity and high dimensionality require correspondingly complex regression methods for proper characterization. In this context, we propose an optimization method which consists of a hybrid dual-optimizer setup: a global optimizer based on a modified variant of the popular Imperialist Competitive Algorithm (ICA), and a local optimizer based on a gradient descent approach. The ICA is modified such that intermediate solution populations are more quickly and efficiently pruned of low-fitness individuals by appropriately altering the assimilation, revolution, and competition phases, which, combined with an initialization strategy based on low-discrepancy sampling, allows for a more effective exploration of the corresponding solution space. Subsequently, gradient-based optimization is used locally to seek the optimal solution in the neighborhoods of the solutions found through the modified ICA. We use this combined approach to find the optimal configuration and weights of a fully-connected neural network, resulting in regression models used to characterize the process of obtaining bricks using silicon-based materials. Installations in the raw ceramics industry, i.e., bricks, are characterized by significant energy consumption and large quantities of emissions. Thus, the purpose of our approach is to determine by simulation the working conditions, including the manufacturing mix recipe with the addition of different materials, that minimize the emissions represented by CO and CH4. Our approach determines regression models which perform significantly better than those found using the traditional ICA for the aforementioned problem, resulting in better convergence and a substantially lower error.
Keywords: optimization, biologically inspired algorithm, regression models, bricks, emissions
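The dual-optimizer structure described above can be sketched at a high level as follows. This is a minimal illustration assuming a Sobol low-discrepancy global stage (standing in for the modified ICA, which is not reproduced here) followed by gradient-based local refinement; the test function is a standard multimodal benchmark, not the brick-emission model.

```python
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

def hybrid_optimize(f, bounds, n_global=64, n_refine=4, seed=0):
    """Skeleton of the dual-optimizer idea: low-discrepancy global
    sampling for exploration, then gradient-based local refinement
    of the best candidates found."""
    lo, hi = np.array(bounds).T
    sampler = qmc.Sobol(d=len(lo), seed=seed)
    pts = lo + sampler.random(n_global) * (hi - lo)   # global exploration
    best = pts[np.argsort([f(p) for p in pts])[:n_refine]]
    results = [minimize(f, p, bounds=list(zip(lo, hi))) for p in best]
    return min(results, key=lambda r: r.fun)

# Example on a multimodal benchmark (Rastrigin in 2-D).
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
res = hybrid_optimize(rastrigin, bounds=[(-5.12, 5.12)] * 2)
print(res.x, res.fun)
```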
Procedia PDF Downloads 80
116 Flow Reproduction Using Vortex Particle Methods for Wake Buffeting Analysis of Bluff Structures
Authors: Samir Chawdhury, Guido Morgenthal
Abstract:
The paper presents a novel extension of Vortex Particle Methods (VPM) in which the study aims to reproduce a template simulation of the complex flow field generated by an impulsively started flow past an upstream bluff body at a certain Reynolds number Re. Vibration of a structural system under upstream wake flow is often considered its governing design criterion; therefore, particular attention is given in this study to the reproduction of the wake flow simulation. The basic methodology for implementing the flow reproduction requires downstream velocity sampling from the template flow simulation: at particular distances from the upstream section, the instantaneous velocity components are sampled using a series of square sampling cells arranged vertically, where each cell contains four velocity sampling points at its corners. Since the grid-free Lagrangian VPM algorithm discretises vorticity on particle elements, the method requires the transformation of the velocity components into vortex circulation, and finally the reproduction of the template flow field by seeding these vortex circulations, or particles, into a free-stream flow. It is noteworthy that the vortex particles have to be released into the free stream at exactly the same rate as the velocity sampling. Studies have been done, specifically, on different sampling rates and velocity sampling positions to find their effects on flow reproduction quality. The quality assessments are mainly done, using a downstream flow monitoring profile, by comparing the characteristic wind flow profiles using several statistical turbulence measures. Additionally, the comparisons are performed using velocity time histories, snapshots of the flow fields, and the vibration of a downstream bluff section by performing wake buffeting analyses of the section under the original and reproduced wake flows. A convergence study is performed to validate the method. The study also describes how flow reproduction can be achieved with less computational effort.
Keywords: vortex particle method, wake flow, flow reproduction, wake buffeting analysis
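The conversion from the four corner velocities of a sampling cell to a particle circulation can be illustrated by the discrete line integral Gamma = loop-sum of u . dl around the cell. A minimal sketch follows, with invented corner values; the trapezoidal edge quadrature is an assumption, as the paper does not state its exact rule.

```python
import numpy as np

def cell_circulation(u, v, h):
    """Circulation of one square sampling cell from the velocity
    components at its four corners (order: SW, SE, NE, NW), via the
    discrete line integral taken counter-clockwise; h is the cell size."""
    u_sw, u_se, u_ne, u_nw = u
    v_sw, v_se, v_ne, v_nw = v
    return (0.5 * (u_sw + u_se) * h     # bottom edge, +x direction
            + 0.5 * (v_se + v_ne) * h   # right edge,  +y direction
            - 0.5 * (u_ne + u_nw) * h   # top edge,    -x direction
            - 0.5 * (v_nw + v_sw) * h)  # left edge,   -y direction

# A particle seeded with this circulation at the cell centre carries the
# sampled vorticity into the free stream, released at the sampling rate.
print(cell_circulation(u=(1.0, 1.2, 0.8, 0.9),
                       v=(0.0, 0.1, 0.05, -0.02), h=0.1))
```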
Procedia PDF Downloads 310
115 Robust Numerical Solution for Flow Problems
Authors: Gregor Kosec
Abstract:
A simple and robust numerical approach for solving flow problems is presented, where the involved physical fields are represented through local approximation functions, i.e., the considered field is approximated over a local support domain. The approximation functions are then used to evaluate the partial differential operators. The type of approximation, the size of the support domain, and the type and number of basis functions can be general. The solution procedure is formulated completely through local computational operations. Besides the local numerical method, the pressure-velocity coupling is also performed locally while retaining the correct temporal transient. The complete locality of the introduced numerical scheme has several beneficial effects. One of the most attractive is its simplicity, since it can be understood as a generalized finite difference method, albeit a much more powerful one. The presented methodology offers many possibilities for treating challenging cases, e.g., nodal adaptivity to address regions with sharp discontinuities, or p-adaptivity to treat obscure anomalies in the physical field. The trade-off between stability, computational complexity, and accuracy can be regulated by changing the number of support nodes, etc. All these features can be controlled on the fly during the simulation. The presented methodology is relatively simple to understand and implement, which makes it a potentially powerful tool for engineering simulations. Besides simplicity and straightforward implementation, there are many opportunities to fully exploit modern computer architectures through different parallel computing strategies. The performance of the method is presented on the lid-driven cavity problem, the backward-facing step problem, and the de Vahl Davis natural convection test, extended also to a low-Prandtl-number fluid and Darcy porous flow. Results are presented in terms of velocity profiles, convergence plots, and stability analyses. Results for all cases are also compared against published data.
Keywords: fluid flow, meshless, low Pr problem, natural convection
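As one concrete instance of building differential operators from local approximation functions, the sketch below computes RBF-FD weights for the Laplacian on a five-node support using Gaussian basis functions; the shape parameter and stencil are illustrative choices, not the paper's.

```python
import numpy as np

def rbf_fd_laplacian_weights(center, support, eps=1.0):
    """RBF-FD weights w with sum_j w_j u(x_j) ~ Laplacian(u)(center),
    built from Gaussian RBFs phi(r) = exp(-(eps*r)**2) collocated on
    the local support domain."""
    diff = support[:, None, :] - support[None, :, :]
    A = np.exp(-(eps**2) * np.sum(diff**2, axis=-1))   # collocation matrix
    r2 = np.sum((support - center)**2, axis=-1)
    # Laplacian of the Gaussian RBF: (4 eps^4 r^2 - 4 eps^2) phi(r).
    b = (4 * eps**4 * r2 - 4 * eps**2) * np.exp(-(eps**2) * r2)
    return np.linalg.solve(A, b)

# Five-node stencil; applying the weights to u = x^2 + y^2 (whose
# Laplacian is exactly 4) checks the operator.
h = 0.1
support = np.array([[0, 0], [h, 0], [-h, 0], [0, h], [0, -h]], float)
w = rbf_fd_laplacian_weights(np.zeros(2), support)
u = support[:, 0]**2 + support[:, 1]**2
print(w @ u)  # ~4
```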
Procedia PDF Downloads 231
114 Algorithm for Automatic Real-Time Electrooculographic Artifact Correction
Authors: Norman Sinnigen, Igor Izyurov, Marina Krylova, Hamidreza Jamalabadi, Sarah Alizadeh, Martin Walter
Abstract:
Background: EEG is a non-invasive brain activity recording technique with a high temporal resolution that allows the use of real-time applications, such as neurofeedback. However, EEG data are susceptible to electrooculographic (EOG) and electromyographic (EMG) artifacts (e.g., jaw clenching, teeth squeezing and forehead movements). Due to their non-stationary nature, these artifacts greatly obscure the information and power spectrum of EEG signals. Many EEG artifact correction methods are too time-consuming when applied to low-density EEG and have focused on offline processing or on handling one single type of EEG artifact. A software-only real-time method for correcting multiple types of EEG artifacts in high-density EEG remains a significant challenge. Methods: We demonstrate an improved approach for automatic real-time EEG artifact correction of EOG and EMG artifacts. The method was tested on three healthy subjects using 64 EEG channels (Brain Products GmbH) and a sampling rate of 1,000 Hz. Captured EEG signals were imported into MATLAB with the Lab Streaming Layer interface, allowing buffering of EEG data. EMG artifacts were detected by channel variance and adaptive thresholding and corrected by channel interpolation. Real-time independent component analysis (ICA) was applied to correct EOG artifacts. Results: Our results demonstrate that the algorithm effectively reduces EMG artifacts, such as jaw clenching, teeth squeezing and forehead movements, and EOG artifacts (horizontal and vertical eye movements) of high-density EEG while preserving brain neuronal activity information. The average computation time of EOG and EMG artifact correction for 80 s (80,000 data points) of 64-channel data is 300-700 ms, depending on the convergence of ICA and the type and intensity of the artifact. Conclusion: An automatic EEG artifact correction algorithm based on channel variance, adaptive thresholding, and ICA improves high-density EEG recordings contaminated with EOG and EMG artifacts in real-time.
Keywords: EEG, muscle artifacts, ocular artifacts, real-time artifact correction, real-time ICA
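The variance-and-interpolation stage for EMG artifacts can be sketched as follows; this is a minimal illustration, in which the channel neighbourhoods, the z-score threshold, and the buffering are assumptions, and the subsequent real-time ICA stage for EOG is omitted.

```python
import numpy as np

def correct_emg_by_variance(eeg, neighbors, z_thresh=3.0):
    """Flag channels whose variance is an outlier (z-scored adaptive
    threshold) and replace them with the mean of neighbouring channels.
    eeg: array (n_channels, n_samples); neighbors: dict mapping a
    channel index to a list of neighbouring channel indices."""
    var = eeg.var(axis=1)
    z = (var - var.mean()) / var.std()
    bad = np.flatnonzero(z > z_thresh)
    out = eeg.copy()
    for ch in bad:
        out[ch] = eeg[neighbors[ch]].mean(axis=0)  # spatial interpolation
    return out, bad
```

In the pipeline described above, this stage would run on each buffered window, with real-time ICA applied afterwards to remove ocular components.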
Procedia PDF Downloads 176
113 Fully Coupled Porous Media Model
Authors: Nia Mair Fry, Matthew Profit, Chenfeng Li
Abstract:
This work focuses on the development and implementation of a fully implicit-implicit, coupled mechanical deformation and porous flow finite element software tool. The fully implicit software accurately predicts classical fundamental analytical solutions such as the Terzaghi consolidation problem. Furthermore, it can capture other analytical solutions less well known in the literature, such as Gibson's sedimentation rate problem and Coussy's problems investigating wellbore stability for poroelastic rocks. The mechanical volume strains are transferred to the porous flow governing equation in an implicit framework. This overcomes some of the many current industrial issues, which arise from using explicit solvers for the mechanical governing equations and implicit solvers only on the porous flow side: that practice can potentially lead to instability and non-convergence issues in the coupled system, and can give results with an appreciable degree of error. The specification of a fully monolithic implicit-implicit coupled porous media code sees the solution of both the seepage and mechanical equations in one matrix system, under a unified time-stepping scheme, which makes the problem definition much easier. When using an explicit solver, additional inputs such as the damping coefficient and mass scaling factor are required; these are circumvented with a fully implicit solution. Further, improved accuracy is achieved, as the solution is not dependent on predictor-corrector methods for the pore fluid pressure solution, but at the potential cost of reduced stability. In testing this fully monolithic porous media code, the fully implicit coupled scheme is compared against an existing staggered explicit-implicit coupled scheme across a range of geotechnical problems. These cases include 1) Biot coefficient calculation, 2) consolidation theory with the Terzaghi analytical solution, 3) sedimentation theory with the Gibson analytical solution, and 4) Coussy wellbore poroelastic analytical solutions.
Keywords: coupled, implicit, monolithic, porous media
Procedia PDF Downloads 137
112 Electron Beam Melting Process Parameter Optimization Using Multi Objective Reinforcement Learning
Authors: Michael A. Sprayberry, Vincent C. Paquit
Abstract:
Process parameter optimization in metal powder bed electron beam melting (MPBEBM) is crucial to ensure the technology's repeatability, control, and continued industry adoption. Despite continued efforts to address the challenges via traditional design of experiments and process mapping techniques, there is not yet a successful on-the-fly optimization framework that can be adapted to MPBEBM systems. Additionally, data-intensive physics-based modeling and simulation methods are difficult to support for every metal AM alloy or system due to cost restrictions. To mitigate the challenge of resource-intensive experiments and models, this paper introduces a Multi-Objective Reinforcement Learning (MORL) methodology defined as an optimization problem for MPBEBM. An off-policy MORL framework based on policy gradient is proposed to discover optimal sets of beam power (P) and beam velocity (v) combinations that maintain a steady-state melt pool depth and phase transformation. For this, an experimentally validated Eagar-Tsai melt pool model is used to simulate the MPBEBM environment, where the beam acts as the agent across the P-v space to maximize returns for the uncertain powder bed environment, producing a melt pool and phase transformation closer to the optimum. The culmination of the training process yields a set of process parameters {power, speed, hatch spacing, layer depth, and preheat}, where the state (P, v) with the highest returns corresponds to a refined process parameter mapping. The resultant objects and the mapping of returns onto the P-v space show convergence with experimental observations. The framework therefore provides a model-free, multi-objective approach to discovery without the need for trial-and-error experiments.
Keywords: additive manufacturing, metal powder bed fusion, reinforcement learning, process parameter optimization
Procedia PDF Downloads 88
111 Use of Social Media in Political Communications: Example of Facebook
Authors: Havva Nur Tarakci, Bahar Urhan Torun
Abstract:
The transformation brought to every area of life by technology, especially internet technology, has changed the structure of political communications as well. The internet, foremost among the new communication technologies, affects political communications in a way that no traditional communication tool ever has: it enables interaction between receiver and sender, and it has become one of the most effective and preferred tools among political communication applications. This result of technological convergence makes the internet an indispensable arena for political communication campaigns. Political communication, meaning every kind of communication strategy that political parties, the 'actors of political communication', use to convey their opinions and party programmes to their present and potential voters, is today frequently conducted through social media tools as well. An electorate of diverse composition is informed, directed, and managed through social media tools. Political parties easily reach their electorate through these tools, without limitations of time or place, and are also able to gauge the opinions and reactions of their electorate through the interactivity that is a feature of social media. In this context, Facebook, the social media platform political parties use the most, is a communication network that has been part of daily life since 2004. As one of the most popular social networks today, it is among the most-visited websites on a global scale. Accordingly, the research is based on the question, "How do political parties use Facebook to inform their voters in the campaigns they conduct during election periods?" and aims to clarify the Facebook usage practices of political parties. Toward this objective, the official Facebook accounts of four political parties (JDP-AKParti, PDP-BDP, RPP-CHP, NMP-MHP), which reach their voters through social media alongside other communication tools, are examined, forming a frame for the politics of Turkey. The period of examination is restricted to two weeks in total, one week before and one week after the mayoral elections, when the political parties are assumed to use their Facebook accounts in full swing. As the research method, content analysis is used, and the collected texts and visual elements are interpreted on this basis.
Keywords: Facebook, political communications, social media, electorate
Procedia PDF Downloads 379
110 A Theoretical Approach on Electoral Competition, Lobby Formation and Equilibrium Policy Platforms
Authors: Deepti Kohli, Meeta Keswani Mehra
Abstract:
The paper develops a theoretical model of electoral competition with purely opportunistic candidates and a uni-dimensional policy, using the probabilistic voting approach and focusing on the aspect of lobby formation, to analyze the inherent complex interactions between centripetal and centrifugal forces and their effects on equilibrium policy platforms. There exist three types of agents, namely Left-wing, Moderate, and Right-wing, who comprise the total voting population. It is assumed that the Left and Right agents are free to initiate a lobby of their choice. If initiated, these lobbies generate donations, which in turn can be contributed to one (or both) of the electoral candidates in order to influence them to implement the lobby's preferred policy. Four different lobby formation scenarios have been considered: no lobby formation, only Left, only Right, and both Left and Right. The equilibrium policy platforms, the individual donations by agents to their respective lobbies, and the contributions offered to the electoral candidates have been solved for under each of the above four cases. Since it is assumed that the agents cannot coordinate their actions during the lobby formation stage, there exists a probability with which a lobby is formed, which is also solved for in the model. The results indicate that the policy platforms of the two electoral candidates converge completely in the cases of no lobby and both (extreme) lobbies being formed, but diverge in the cases of only one (Left or Right) lobby being formed. This is because, in the case of no lobby being formed, only the centripetal forces (emerging from the election-winning aspect) are present, while in the case of both extreme (Left-wing and Right-wing) lobbies being formed, centrifugal forces (emerging from the lobby formation aspect) also arise but cancel each other out, again resulting in pure policy convergence. In contrast, in the case of only one lobby being formed, both centripetal and centrifugal forces interact strategically, leading the two electoral candidates to choose completely different policy platforms in equilibrium. Additionally, it is found that in equilibrium, while the donation by a specific agent type increases when both lobbies are formed in comparison to when only one lobby is formed, the probability of implementation of the policy advocated by that lobby group falls.
Keywords: electoral competition, equilibrium policy platforms, lobby formation, opportunistic candidates
Procedia PDF Downloads 329
109 Ill-Posed Inverse Problems in Molecular Imaging
Authors: Ranadhir Roy
Abstract:
Inverse problems arise in medical (molecular) imaging. These problems are characterized by their large size in three dimensions and by the diffusion equation, which models the physical phenomena within the media. The inverse problems are posed as nonlinear optimizations, where the unknown parameters are found by minimizing the difference between the predicted data and the measured data. To obtain a unique and stable solution to an ill-posed inverse problem, a priori information must be used. Mathematical conditions to obtain stable solutions are established in Tikhonov's regularization method, where the a priori information is introduced via a stabilizing functional, which may be designed to incorporate some relevant information about the inverse problem. Effective determination of the Tikhonov regularization parameter requires knowledge of the true solution, or, in the case of optical imaging, the true image. Yet, in clinically based imaging, the true image is not known. To alleviate these difficulties, we have applied the penalty/modified barrier function (PMBF) method instead of the Tikhonov regularization technique to make the inverse problems well-posed. Unlike the Tikhonov regularization method, the constrained optimization technique, which is based on simple bounds on the optical properties of the tissue, can easily be implemented in the PMBF method. Imposing the constraints on the optical properties of the tissue explicitly restricts the solution sets and can restore uniqueness. Like the Tikhonov regularization method, the PMBF method limits the size of the condition number of the Hessian matrix of the given objective function. The accuracy and rapid convergence of the PMBF method require a good initial guess of the Lagrange multipliers. To obtain the initial guess of the multipliers, we use a least-squares unconstrained minimization problem. Three-dimensional images of fluorescence absorption coefficients and lifetimes were reconstructed from contact and non-contact experimentally measured data.
Keywords: constrained minimization, ill-conditioned inverse problems, Tikhonov regularization method, penalty modified barrier function method
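A minimal sketch of the penalty/modified barrier idea follows, assuming Polyak's modified barrier term for inequality constraints g_i(x) >= 0 and a toy one-dimensional problem in place of the imaging misfit; the inner solver and parameter values are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def modified_barrier_solve(f, g, x0, lam0, mu=1.0, outer=20):
    """Polyak-style modified barrier method for constraints g_i(x) >= 0:
    minimize f(x) - mu * sum_i lam_i * log(1 + g_i(x)/mu), then update
    the multipliers lam_i <- lam_i / (1 + g_i(x)/mu)."""
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    for _ in range(outer):
        def mbf(y):
            gy = g(y)
            if np.any(1.0 + gy / mu <= 0.0):  # outside the relaxed region
                return np.inf
            return f(y) - mu * np.sum(lam * np.log1p(gy / mu))
        x = minimize(mbf, x, method="Nelder-Mead").x
        lam = lam / (1.0 + g(x) / mu)         # multiplier update
    return x, lam

# Toy problem: min (x - 2)^2 subject to x <= 1, i.e. g(x) = 1 - x >= 0;
# the solution is x = 1 with Lagrange multiplier 2.
f = lambda y: (y[0] - 2.0) ** 2
g = lambda y: np.array([1.0 - y[0]])
x, lam = modified_barrier_solve(f, g, x0=[0.0], lam0=[1.0])
print(x, lam)  # ~[1.0], ~[2.0]
```

As the abstract notes, convergence hinges on the multiplier estimates; here they start from a guess and are refined each outer iteration.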
Procedia PDF Downloads 269
108 Threat Modeling Methodology for Supporting Industrial Control Systems Device Manufacturers and System Integrators
Authors: Raluca Ana Maria Viziteu, Anna Prudnikova
Abstract:
Industrial control systems (ICS) have received much attention in recent years due to the convergence of information technology (IT) and operational technology (OT), which has increased the interdependence of the safety and security issues to be considered. These issues require ICS-tailored solutions. This led to the need to create a methodology for supporting ICS device manufacturers and system integrators in carrying out threat modeling of embedded ICS devices in a way that guarantees the quality of the identified threats and minimizes subjectivity in the threat identification process. To research the possibility of creating such a methodology, a set of existing standards, regulations, papers, and publications related to threat modeling in the ICS sector and other sectors was reviewed to identify the various existing methodologies and methods used in threat modeling. Furthermore, the most popular ones were tested in an exploratory phase on a specific PLC device. The outcome of this exploratory phase was used as a basis for defining specific characteristics of ICS embedded devices and their deployment scenarios, identifying the factors that introduce subjectivity into the threat modeling process for such devices, and defining metrics for evaluating the minimum quality requirements of identified threats associated with the deployment of the devices in existing infrastructures. The threat modeling methodology was then created based on the results of the previous steps. The usability of the methodology was evaluated through a set of standardized threat modeling requirements and a standardized comparison method for threat modeling methodologies. The outcomes of these verification methods confirm that the methodology is effective. The full paper includes the outcome of research on the different threat modeling methodologies that can be used in OT, their comparison, and the results of implementing each of them in practice on a PLC device. This research is further used to build a threat modeling methodology tailored to OT environments; a detailed description is included. Moreover, the paper includes the results of the evaluation of the created methodology, based on a set of parameters specifically created to rate threat modeling methodologies.
Keywords: device manufacturers, embedded devices, industrial control systems, threat modeling
Procedia PDF Downloads 78107 An Integral Sustainable Design Evaluation of the 15-Minute City and the Processes of Transferability to Cities of the Global South
Authors: Chitsanzo Isaac
Abstract:
Across the world, the ongoing Covid-19 pandemic has challenged urban systems and policy frameworks, highlighting societal vulnerabilities and systemic inequities among many communities. Measures of confinement and social distancing to contain the Covid-19 virus have fragmented the physical and social fabric of cities. This has caused urban dwellers to reassess how they engage with their urban surroundings and maintain social ties. Urbanists have presented strategies that would allow communities to survive, and even thrive, in extraordinary times of crisis like the pandemic. Tactical Urbanism, particularly the 15-Minute City, has gained popularity. It is considered a resilient approach in the global north; however, its transferability to the global south has been called into question. To this end, this paper poses the question: to what extent is the 15-Minute City framework integral sustainable design, and are there processes that make it adoptable by cities in the global south? This paper explores four issues using secondary quantitative data analysis and convergence analysis in the Paris and Blantyre urban regions. First, it questions how the 15-Minute City has been defined and measured, and how it impacts urban dwellers. Second, it examines the extent to which the 15-minute city performs under the lens of frameworks such as Wilber’s integral theory and Fleming’s integral sustainable design theory. Third, it examines the processes that can be transferred to developing cities to foster community resilience through the perspectives of experience, behaviors, cultures, and systems. Finally, it reviews the principal ways in which a multi-perspective reality can be the basis for resilient community design and sustainable urban development. This work sheds light on the importance of a multi-perspective reality as a means of achieving sustainable urban design goals in developing urban areas.Keywords: 15-minute city, developing cities, global south, community resilience, integral sustainable design, systems thinking, complexity, tactical urbanism
Procedia PDF Downloads 147106 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton
Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani
Abstract:
Extreme values in a series of observations can occur due to unusual circumstances during the observation. Such data can provide important information that cannot be obtained from other data, so their existence needs to be investigated further. One method for obtaining extreme data is the block maxima method. The distribution of extreme data sets taken with the block maxima method is called the extreme value distribution; here it is the Gumbel distribution with two parameters. The maximum likelihood (ML) estimates of the Gumbel parameters cannot be written in closed form, so a numerical approximation is necessary. The purpose of this study was to estimate the parameters of the Gumbel distribution with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method for unconstrained nonlinear optimization, so it can be used for parameter estimation of the Gumbel distribution, whose distribution function has the form of a double exponential function. The quasi-Newton BFGS method is a development of Newton's method. Newton's method uses the second derivative to calculate the parameter value changes at each iteration and is modified with the addition of a step length to guarantee convergence; however, the second derivative may require complex calculations. In the quasi-Newton BFGS method, Newton's method is modified by updating an approximation of the second derivative at each iteration. The parameter estimation of the Gumbel distribution by a numerical approach using the quasi-Newton BFGS method is done by calculating the parameter values that maximize the likelihood function; this requires the gradient vector and the Hessian matrix. This research combines theory and application, drawing on several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and the estimates of the Gumbel distribution parameters. The estimation method is then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. This indicates that the high rainfall that occurred in Purworejo District decreased in intensity and that the range of rainfall that occurred decreased as well.Keywords: parameter estimation, Gumbel distribution, maximum likelihood, broyden fletcher goldfarb shanno (BFGS) quasi-newton
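To make the procedure concrete, here is a minimal Python sketch of Gumbel maximum-likelihood estimation using SciPy's BFGS quasi-Newton optimizer. It is not the study's code: the rainfall figures are invented stand-ins for the Purworejo series, and the log-scale parameterization of the scale parameter is an added convenience to keep it positive.

```python
# Gumbel (block maxima) ML estimation via BFGS; illustrative data only.
import numpy as np
from scipy.optimize import minimize

block_maxima = np.array([87., 102., 95., 120., 78., 110., 99., 130.])  # made-up

def neg_log_likelihood(theta, x):
    mu, log_beta = theta              # optimize log(beta) so that beta > 0
    beta = np.exp(log_beta)
    z = (x - mu) / beta
    # Gumbel NLL: n*log(beta) + sum(z) + sum(exp(-z))
    return x.size * log_beta + z.sum() + np.exp(-z).sum()

theta0 = np.array([block_maxima.mean(), np.log(block_maxima.std())])
res = minimize(neg_log_likelihood, theta0, args=(block_maxima,), method="BFGS")
mu_hat, beta_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, beta_hat)
```

BFGS here builds up its Hessian approximation from successive gradient evaluations, which is exactly the trade-off the abstract describes: Newton-quality steps without forming the exact second derivative.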
Procedia PDF Downloads 323105 Surface Motion of Anisotropic Half Space Containing an Anisotropic Inclusion under SH Wave
Authors: Yuanda Ma, Zhiyong Zhang, Zailin Yang, Guanxixi Jiang
Abstract:
Anisotropy is very common in underground media, such as rock, sand, and soil. Hence, the dynamic response of an anisotropic medium under elastic waves is significantly different from that of an isotropic one. Moreover, underground heterogeneities and structures, such as pipelines, cylinders, or tunnels, are usually made of composite materials, so these heterogeneities and structures are themselves anisotropic. Both the anisotropy of the underground medium and the heterogeneities have an effect on the surface motion of the ground. Aiming at providing theoretical references for earthquake engineering and seismology, the surface motion of an anisotropic half-space with an embedded cylindrical anisotropic inclusion under the SH wave is investigated in this work. Considering the anisotropy of the underground medium, the governing equation of SH wave propagation with three elastic parameters is introduced. Then, based on the complex function method and a multipolar coordinate system, the governing equation in the complex plane is obtained. With the help of a pair of transformations, the governing equation is transformed into a standard form. By the same methods, the governing equation of SH wave propagation in the cylindrical inclusion, with another three elastic parameters, is normalized as well. Subsequently, the scattering wave in the half-space and the standing wave in the inclusion are deduced. Different incident wave angles and degrees of anisotropy are considered to obtain the reflected wave. The unknown coefficients in the scattering wave and the standing wave are then solved by utilizing the continuity condition at the boundary of the inclusion. By truncating the scattering and standing waves to finitely many terms, the boundary condition equations can be solved numerically. After verifying the convergence and precision of the calculation, its validity is verified by degrading the model of the problem as well. Several parameters that influence the surface displacement of the half-space are considered: dimensionless wave number, dimensionless depth of the inclusion, anisotropic parameters, wave number ratio, and shear modulus ratio. Finally, the surface displacement amplitude of the half-space with different parameters is calculated and discussed.Keywords: anisotropy, complex function method, sh wave, surface displacement amplitude
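For orientation, the anti-plane (SH) governing equation for a homogeneous anisotropic medium with three elastic parameters is commonly written as in the sketch below. The notation (c44, c45, c55 for the elastic constants, w for the anti-plane displacement, rho for the mass density) is assumed here, since the abstract does not reproduce the paper's equations.

```latex
% Standard anti-plane (SH) wave equation in a homogeneous anisotropic
% medium; c_{44}, c_{45}, c_{55} are the three elastic parameters, w the
% anti-plane displacement, and \rho the mass density (assumed notation).
\[
  c_{55}\,\frac{\partial^{2} w}{\partial x^{2}}
  + 2\,c_{45}\,\frac{\partial^{2} w}{\partial x\,\partial y}
  + c_{44}\,\frac{\partial^{2} w}{\partial y^{2}}
  = \rho\,\frac{\partial^{2} w}{\partial t^{2}}
\]
% For steady-state motion w = W(x,y)\,e^{-i\omega t}, the right-hand side
% becomes -\rho\,\omega^{2} W, yielding a generalized Helmholtz equation
% that a change of variables can then map to a standard form.
```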
Procedia PDF Downloads 118104 Formulating a Definition of Hate Speech: From Divergence to Convergence
Authors: Avitus A. Agbor
Abstract:
Numerous incidents, ranging from trivial to catastrophic, come to mind when one reflects on hate. The victims of these incidents belong to specific, identifiable groups within communities. These experiences evoke discussions on Islamophobia, xenophobia, homophobia, anti-Semitism, racism, ethnic hatred, atheism, and other brutal forms of bigotry. Common to all of these is an invisible but potent force that drives them: hatred. Such hatred is usually fueled by a profound degree of intolerance (to diversity) and the zeal to impose on others the beliefs and practices that the perpetrators consider to be the conventional norm. More importantly, the perpetuation of these hateful acts is the unfortunate outcome of an overplay of invectives and hate speech which, to a great extent, cannot be divorced from hate. From a legal perspective, acknowledging the existence of an undeniable link between hate speech and hate is quite easy. However, both within and without legal scholarship, the notion of “hate speech” remains a conundrum: a phrase more easily explained through experiences than captured by a watertight definition of its entire essence and nature. The problem is further compounded by a few factors: first, within the international human rights framework, the notion of hate speech is not used. In limiting the right to freedom of expression, the ICCPR simply excludes specific kinds of speech (but does not refer to them as hate speech). Regional human rights instruments are not so different, except for subsequent developments in the European Union, where the notion has been carefully delineated and a much clearer picture of what constitutes hate speech is now provided. The legal architecture of domestic legal systems clearly shows differences in approach and regulation, making matters more difficult. In short, what may be hate speech in one legal system may very well be acceptable speech in another. Lastly, the cornucopia of academic voices on the issue of hate speech exudes this divergence. Yet, in the absence of a well-formulated and universally acceptable definition, it is important to consider how hate speech can be defined. Taking an evidence-based approach, this research looks into the issue of defining hate speech in legal scholarship and into how and why such a formulation is of critical importance in the prohibition and prosecution of hate speech.Keywords: hate speech, international human rights law, international criminal law, freedom of expression
Procedia PDF Downloads 72103 CFD Simulation of the Pressure Distribution in the Upper Airway of an Obstructive Sleep Apnea Patient
Authors: Christina Hagen, Pragathi Kamale Gurmurthy, Thorsten M. Buzug
Abstract:
CFD simulations are performed in the upper airway of a patient suffering from obstructive sleep apnea (OSA), a sleep-related breathing disorder characterized by repetitive partial or complete closures of the upper airways. The simulations are aimed at getting a better understanding of the pathophysiological flow patterns in an OSA patient, and the simulation is compared to medical data from a sleep endoscopic examination under sedation. A digital model consisting of surface triangles of the upper airway is extracted from the MR images by a region-growing segmentation process, followed by careful manual refinement. The computational domain includes the nasal cavity, with the nostrils as the inlet areas, and the pharyngeal volume, with an outlet underneath the larynx. At the nostrils, a flat inflow velocity profile is prescribed by choosing the velocity such that a volume flow rate of 150 ml/s is reached. At the outlet behind the larynx, a pressure of -10 Pa is prescribed. The stationary incompressible Navier-Stokes equations are numerically solved using finite elements, and a grid convergence study has been performed. The results show an amplification of the maximal velocity to about 2.5 times the inlet velocity at a constriction of the pharyngeal volume in the area of the tongue. The same region also shows the highest pressure drop, of about 5 Pa. This is in agreement with the sleep endoscopic examinations of the same patient under sedation, which show complete contractions in the area of the tongue. CFD simulations can become a useful tool in the diagnosis and therapy of obstructive sleep apnea by giving insight into the patient’s individual fluid-dynamical situation in the upper airways, offering a better understanding of the disease where experimental measurements are not feasible. Within this study, it could be shown, on the one hand, that constriction areas within the upper airway lead to a significant pressure drop and, on the other hand, that the area of the pressure drop agrees well with the area of contraction.Keywords: biomedical engineering, obstructive sleep apnea, pharynx, upper airways
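As a small worked example of the boundary-condition arithmetic reported above, the following Python lines convert the prescribed volume flow rate of 150 ml/s into a flat inlet velocity and apply the reported ~2.5x amplification. The combined nostril cross-sectional area is an assumed illustrative value, not taken from the paper.

```python
# Inlet boundary-condition arithmetic for the airway simulation (sketch).
VOLUME_FLOW = 150e-6    # prescribed flow rate: 150 ml/s expressed in m^3/s
NOSTRIL_AREA = 0.7e-4   # assumed combined inlet area of 0.7 cm^2 (hypothetical)

inlet_velocity = VOLUME_FLOW / NOSTRIL_AREA   # flat profile: v = Q / A
print(f"prescribed flat inlet velocity: {inlet_velocity:.2f} m/s")

# With the reported amplification of ~2.5x at the tongue-level constriction:
max_velocity = 2.5 * inlet_velocity
print(f"estimated peak velocity at constriction: {max_velocity:.2f} m/s")
```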
Procedia PDF Downloads 305