Search results for: monotonic convergence
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 657

117 The Colouration of Additive-Manufactured Polymer

Authors: Abisuga Oluwayemisi Adebola, Kerri Akiwowo, Deon de Beer, Kobus Van Der Walt

Abstract:

The convergence of additive manufacturing (AM) and traditional textile dyeing techniques has opened innovative possibilities for improving the visual appeal and customization potential of 3D-printed polymer objects. Over centuries, textile dyeing techniques have progressed to transform fabrics with vibrant colours and complex patterns. The layer-by-layer deposition characteristic of AM necessitates adaptations in dye application methods to ensure even colour penetration across complex surfaces. Compatibility between dye formulations and polymer matrices influences colour uptake and stability, demanding careful selection and testing of dyes for optimal results. This study investigates the developing interaction between these two fields, revealing the challenges and opportunities of applying textile dyeing methods to colour 3D-printed polymer materials. The method explores three approaches to colouring a 3D-printed polymer object: (a) additive manufacturing of a prototype, (b) the traditional dyebath method, and (c) the contemporary digital sublimation technique. The results show that the layer lines inherent to AM interact with dyes differently from traditional textile fibres and affect the visual outcome. Skilful manipulation of the textile dyeing methods and dye types used in this research reduced the appearance of these lines and achieved consistent, desirable colour outcomes. In conclusion, integrating textile dyeing techniques into the colouring of 3D-printed polymer materials connects historical craftsmanship with innovative manufacturing. Overcoming the challenges of colour distribution, compatibility, and layer-line management requires a holistic approach that blends the technical consistency of AM with the artistic sensitivity of textile dyeing. Hence, applying textile dyeing methods to 3D-printed polymers opens new dimensions of aesthetic and functional possibility.

Keywords: polymer, 3D-printing, sublimation, textile, dyeing, additive manufacturing

Procedia PDF Downloads 49
116 Mid-Winter Stratospheric Warming Effects on Equatorial Dynamics over Peninsular India

Authors: Shweta Srikumar

Abstract:

Winter stratospheric dynamics is a highly variable and spectacular field of research in the middle atmosphere. It is well established that the interaction of energetic planetary waves with the mean flow causes the temperature in the stratosphere to increase, with an associated circulation reversal. These wave-driven sudden disturbances in the polar stratosphere are termed Sudden Stratospheric Warmings. The main objective of the present work is to investigate the effects of mid-winter major stratospheric warming events on equatorial dynamics over Peninsular India. To explore the effect of mid-winter stratospheric warming on the Indian region (60°E-100°E), we selected the winters 2003/04, 2005/06, 2008/09, 2012/13 and 2018/19. This study utilized data from the ERA-Interim Reanalysis, Outgoing Longwave Radiation (OLR) from NOAA, and TRMM satellite data from the NASA mission. It is observed that a sudden drop in OLR (averaged over the Indian region) occurs during the course of the warming for the winters 2005/06, 2008/09 and 2018/19, whereas in the winters 2003/04 and 2012/13 the drop in OLR occurs prior to the onset of the major warming. Significant planetary wave amplitudes are observed in the equatorial lower stratosphere, indicating the propagation of extra-tropical planetary waves from high latitudes to the equator. During the course of the warming, a strong downward propagation of EP flux convergence is observed from the polar region to the equator. The polar westward wind reaches up to 20°N and weak eastward wind dominates the equator during the winters 2003/04, 2005/06 and 2018/19, whereas in the 2012/13 winter the polar westward wind reaches the equator. The equatorial wind in 2008/09 is dominated by strong westward wind. Further detailed results will be presented at the conference.

Keywords: Equatorial dynamics, Outgoing Longwave Radiation, Sudden Stratospheric Warming, Planetary Waves

Procedia PDF Downloads 120
115 Ramp Rate and Constriction Factor Based Dual Objective Economic Load Dispatch Using Particle Swarm Optimization

Authors: Himanshu Shekhar Maharana, S. K. Dash

Abstract:

Economic Load Dispatch (ELD) is a vital optimization process in electric power systems for allocating generation amongst various units so as to minimize the cost of generation and the cost of emission of pollutant gases such as sulphur dioxide, nitrous oxide and carbon monoxide. In this work, we emphasize ramp rate and constriction factor based particle swarm optimization (RRCPSO) for analyzing various performance objectives, namely the cost of generation, the cost of emission, and a dual objective function involving both, through simulated experimental results. A 6-unit, 30-bus IEEE test case system has been utilized for simulating the results, involving improved weight factors and advanced ramp rate limit constraints for optimizing the total cost of generation and emission. This method increases the tendency of particles to venture into the solution space, which improves their convergence rates. Earlier works based on dispersed PSO (DPSO) and constriction factor based PSO (CPSO) give rise to comparatively higher computational time and poorer optimal solutions than the present work. This paper applies the ramp rate and constriction factor based PSO to compute the cost, emission and combined objectives, and compares the results with the DPSO and weight improved PSO (WIPSO) techniques, illustrating lower computational time and better optimal solutions.
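As a rough illustration of the constriction-factor mechanics behind CPSO-type methods, the following Python sketch shows the classical Clerc-Kennedy velocity update with an optional ramp-rate-style clamp on each unit's per-iteration step. The parameter values and the clamping scheme are hypothetical; the paper's actual RRCPSO adds further ELD-specific constraints not shown here.

```python
import random

def constriction_pso_step(positions, velocities, pbest, gbest,
                          c1=2.05, c2=2.05, ramp_lo=None, ramp_hi=None):
    """One constriction-factor PSO update (Clerc-Kennedy form) for a
    1-D decision variable per particle (e.g. a unit's output in MW),
    with an optional ramp-rate-style clamp on the per-iteration step."""
    phi = c1 + c2  # must exceed 4 for the constriction factor to be real
    chi = 2.0 / abs(2.0 - phi - (phi**2 - 4.0 * phi) ** 0.5)  # ~0.7298
    new_pos, new_vel = [], []
    for x, v, pb in zip(positions, velocities, pbest):
        r1, r2 = random.random(), random.random()
        v_new = chi * (v + c1 * r1 * (pb - x) + c2 * r2 * (gbest - x))
        # ramp-rate-like limit: clamp how far a unit may move per iteration
        if ramp_lo is not None:
            v_new = max(ramp_lo, min(ramp_hi, v_new))
        new_vel.append(v_new)
        new_pos.append(x + v_new)
    return new_pos, new_vel
```

With c1 = c2 = 2.05, the constriction factor evaluates to roughly 0.7298, which damps the swarm and is what drives the improved convergence behaviour cited for CPSO variants.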

Keywords: economic load dispatch (ELD), constriction factor based particle swarm optimization (CPSO), dispersed particle swarm optimization (DPSO), weight improved particle swarm optimization (WIPSO), ramp rate and constriction factor based particle swarm optimization (RRCPSO)

Procedia PDF Downloads 357
114 A Hybrid-Evolutionary Optimizer for Modeling the Process of Obtaining Bricks

Authors: Marius Gavrilescu, Sabina-Adriana Floria, Florin Leon, Silvia Curteanu, Costel Anton

Abstract:

Natural sciences provide a wide range of experimental data whose related problems require study and modeling beyond the capabilities of conventional methodologies. Such problems have solution spaces whose complexity and high dimensionality require correspondingly complex regression methods for proper characterization. In this context, we propose an optimization method consisting of a hybrid dual-optimizer setup: a global optimizer based on a modified variant of the popular Imperialist Competitive Algorithm (ICA), and a local optimizer based on a gradient descent approach. The ICA is modified such that intermediate solution populations are more quickly and efficiently pruned of low-fitness individuals by appropriately altering the assimilation, revolution and competition phases, which, combined with an initialization strategy based on low-discrepancy sampling, allows for a more effective exploration of the solution space. Subsequently, gradient-based optimization is used locally to seek the optimal solution in the neighborhoods of the solutions found through the modified ICA. We use this combined approach to find the optimal configuration and weights of a fully-connected neural network, resulting in regression models used to characterize the process of obtaining bricks from silicon-based materials. Installations in the raw ceramics (brick) industry are characterized by significant energy consumption and large quantities of emissions. Thus, the purpose of our approach is to determine by simulation the working conditions, including the manufacturing mix recipe with the addition of different materials, that minimize the CO and CH4 emissions. Our approach yields regression models which perform significantly better than those found using the traditional ICA for this problem, resulting in better convergence and a substantially lower error.
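The global-then-local pattern described above can be outlined in a few lines. This is a minimal illustrative sketch only: a plain low-discrepancy (Halton) scan stands in for the paper's modified ICA, and simple gradient descent plays the local optimizer, here on a 1-D toy objective.

```python
def halton(i, base):
    """i-th element of the 1-D Halton low-discrepancy sequence in [0, 1)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def hybrid_minimize(f, grad, lo, hi, n_global=50, n_local=100, lr=0.1):
    """Global phase: score low-discrepancy samples of [lo, hi];
    local phase: refine the best candidate with gradient descent
    (a stand-in for the paper's modified-ICA + gradient setup)."""
    candidates = [lo + (hi - lo) * halton(i + 1, 2) for i in range(n_global)]
    x = min(candidates, key=f)      # best globally sampled candidate
    for _ in range(n_local):        # local gradient refinement
        x -= lr * grad(x)
    return x
```

For example, minimizing (x - 3)² over [0, 10] returns a value very close to 3: the low-discrepancy scan lands near the basin, and the gradient steps contract the remaining error geometrically.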

Keywords: optimization, biologically inspired algorithm, regression models, bricks, emissions

Procedia PDF Downloads 58
113 Flow Reproduction Using Vortex Particle Methods for Wake Buffeting Analysis of Bluff Structures

Authors: Samir Chawdhury, Guido Morgenthal

Abstract:

The paper presents a novel extension of Vortex Particle Methods (VPM) in which the study aims to reproduce a template simulation of the complex flow field generated by impulsively started flow past an upstream bluff body at a given Reynolds number Re. Vibration of a structural system under upstream wake flow is often considered its governing design criterion; therefore, particular attention is given in this study to the reproduction of the wake flow simulation. The basic methodology for flow reproduction requires downstream velocity sampling from the template flow simulation: at particular distances from the upstream section, the instantaneous velocity components are sampled using a series of square sampling cells arranged vertically, where each cell contains four velocity sampling points at its corners. Since the grid-free Lagrangian VPM algorithm discretises vorticity on particle elements, the method requires transforming the sampled velocity components into vortex circulation, and finally reproducing the template flow field by seeding these vortex particles into a free-stream flow. It is noteworthy that the vortex particles have to be released into the free stream at exactly the same rate as the velocity sampling. Studies have been carried out, specifically, on different sampling rates and velocity sampling positions to find their effects on the quality of the flow reproduction. The quality assessments are mainly done, using a downstream flow monitoring profile, by comparing the characteristic wind flow profiles using several statistical turbulence measures. Additionally, comparisons are performed using velocity time histories, snapshots of the flow fields, and the vibration of a downstream bluff section obtained from wake buffeting analyses of the section under the original and reproduced wake flows. A convergence study is performed to validate the method. The study also describes possible ways of achieving flow reproduction with less computational effort.
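The velocity-to-circulation step described above amounts to evaluating the circulation Γ = ∮ v · dl around each square sampling cell from its four corner velocities. A minimal sketch, assuming 2-D cells traversed counter-clockwise and the trapezoidal rule on each edge (the paper's exact quadrature is not stated in the abstract), might look like:

```python
def cell_circulation(corners, velocities):
    """Approximate the circulation Gamma = closed line integral of v . dl
    around a sampling cell traversed counter-clockwise, using the
    trapezoidal rule on each edge.
    corners:    [(x, y), ...] listed CCW
    velocities: [(u, v), ...] sampled at those corners"""
    gamma = 0.0
    n = len(corners)
    for i in range(n):
        x0, y0 = corners[i]
        x1, y1 = corners[(i + 1) % n]
        u0, v0 = velocities[i]
        u1, v1 = velocities[(i + 1) % n]
        dx, dy = x1 - x0, y1 - y0
        # average velocity on the edge dotted with the edge vector
        gamma += 0.5 * ((u0 + u1) * dx + (v0 + v1) * dy)
    return gamma
```

As a sanity check, for a rigid-body rotation v = (-y, x) the circulation over a unit square evaluates to twice the cell area, consistent with Γ = ω · A for uniform vorticity ω = 2.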

Keywords: vortex particle method, wake flow, flow reproduction, wake buffeting analysis

Procedia PDF Downloads 287
112 Socio-Economic Analysis of Child Homelessness in South Africa: Implication for Policy Making

Authors: Chigozie Azunna, Lucius Botes

Abstract:

Child homelessness is one of the critical challenges facing South Africa in recent times. While children constitute a very significant share of South Africa's demography, child homelessness breeds a socio-economic crisis of considerable magnitude. For the UN 2030 Agenda for Sustainable Development Goals (SDGs) to be achieved, especially with respect to equality and non-discrimination, child homelessness needs to be effectively curbed. This will, in turn, contribute positively to the economic dividend of South Africa’s dynamic demography. Methodologically, this research employed content analysis involving a comprehensive review of the existing literature on child homelessness in South Africa. The study found a convergence of national policy and the international agenda on curbing child homelessness in South Africa, in spite of which the statistics remain stark: homelessness is concentrated in the metros (74.1%) as against non-metro areas (25.9%). The figures show that the City of Tshwane has the highest share of homeless persons (18.1%), followed by the City of Johannesburg (15.6%), while the Nelson Mandela Bay metropolitan area has the lowest share (2.7%). The study concluded that despite the existence of national policy options, child homelessness has persisted. Worse still, there is a lack of accurate data, while the Covid-19 pandemic, poverty, economic challenges, the HIV/AIDS epidemic, and implementation gaps in government policies have widened the challenge. The implications are severe, affecting children's physical and emotional health, education, and future opportunities. The study recommends strengthening actionable policies to address child homelessness by bridging the urban-rural gap and providing intra-community networks, including tackling manifold challenges such as access to education and the homeless child's susceptibility to disease and vulnerability.

Keywords: South Africa, child, homeless, SDGs, covid, urban, rural

Procedia PDF Downloads 15
111 Robust Numerical Solution for Flow Problems

Authors: Gregor Kosec

Abstract:

A simple and robust numerical approach for solving flow problems is presented, in which the physical fields involved are represented through local approximation functions, i.e., the considered field is approximated over a local support domain. The approximation functions are then used to evaluate the partial differential operators. The type of approximation, the size of the support domain, and the type and number of basis functions can be general. The solution procedure is formulated completely through local computational operations. Besides the local numerical method, the pressure-velocity coupling is also performed locally, while retaining the correct temporal transient. The complete locality of the introduced numerical scheme has several beneficial effects. One of the most attractive is its simplicity, since it can be understood as a generalized Finite Difference Method, yet much more powerful. The presented methodology offers many possibilities for treating challenging cases, e.g., nodal adaptivity to address regions with sharp discontinuities or p-adaptivity to treat obscure anomalies in the physical field. The trade-off between stability, computational complexity, and accuracy can be regulated by changing the number of support nodes, etc. All these features can be controlled on the fly during the simulation. The presented methodology is relatively simple to understand and implement, which makes it a potentially powerful tool for engineering simulations. Besides simplicity and straightforward implementation, there are many opportunities to fully exploit modern computer architectures through different parallel computing strategies. The performance of the method is presented on the lid-driven cavity problem, the backward-facing step problem, and the de Vahl Davis natural convection test, extended also to low-Prandtl-number fluid and Darcy porous flow. Results are presented in terms of velocity profiles, convergence plots, and stability analyses, and are compared against published data for all cases.
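To illustrate the local-approximation idea in its generalized-finite-differences reading, the following sketch estimates a second derivative at a point from an arbitrary local support of scattered nodes via a least-squares monomial fit. The 1-D quadratic basis is a deliberate simplification of the general basis and support the abstract allows.

```python
import numpy as np

def local_second_derivative(x_nodes, f_nodes, x0):
    """Estimate f''(x0) by fitting a quadratic over a local support of
    scattered nodes: fit f ~ a + b*h + c*h^2 with h = x - x0 in the
    least-squares sense, then read off f''(x0) = 2c."""
    dx = np.asarray(x_nodes) - x0                    # center on x0
    A = np.vstack([np.ones_like(dx), dx, dx**2]).T   # monomial basis 1, h, h^2
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(f_nodes), rcond=None)
    return 2.0 * coeffs[2]
```

Because the fit is over whatever nodes lie in the support domain, no grid is needed; enlarging the support or the basis is exactly the kind of knob ("number of support nodes", "p-adaptivity") the abstract says can be tuned on the fly.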

Keywords: fluid flow, meshless, low Pr problem, natural convection

Procedia PDF Downloads 211
110 Algorithm for Automatic Real-Time Electrooculographic Artifact Correction

Authors: Norman Sinnigen, Igor Izyurov, Marina Krylova, Hamidreza Jamalabadi, Sarah Alizadeh, Martin Walter

Abstract:

Background: EEG is a non-invasive brain activity recording technique with a high temporal resolution that allows real-time applications, such as neurofeedback. However, EEG data are susceptible to electrooculographic (EOG) and electromyographic (EMG) artifacts (i.e., jaw clenching, teeth squeezing and forehead movements). Due to their non-stationary nature, these artifacts greatly obscure the information and power spectrum of EEG signals. Many EEG artifact correction methods are too time-consuming when applied to low-density EEG, and have focused on offline processing or on handling a single type of EEG artifact. A software-only real-time method for correcting multiple types of artifacts in high-density EEG remains a significant challenge. Methods: We demonstrate an improved approach for automatic real-time correction of EOG and EMG artifacts. The method was tested on three healthy subjects using 64 EEG channels (Brain Products GmbH) and a sampling rate of 1,000 Hz. The captured EEG signals were imported into MATLAB via the Lab Streaming Layer interface, which allows buffering of EEG data. EMG artifacts were detected by channel variance with adaptive thresholding and corrected by channel interpolation. Real-time independent component analysis (ICA) was applied to correct EOG artifacts. Results: Our results demonstrate that the algorithm effectively reduces EMG artifacts, such as jaw clenching, teeth squeezing and forehead movements, and EOG artifacts (horizontal and vertical eye movements) in high-density EEG while preserving brain neuronal activity information. The average computation time of EOG and EMG artifact correction for 80 s (80,000 data points) of 64-channel data is 300-700 ms, depending on the convergence of ICA and on the type and intensity of the artifact. Conclusion: An automatic EEG artifact correction algorithm based on channel variance, adaptive thresholding, and ICA improves high-density EEG recordings contaminated with EOG and EMG artifacts in real time.
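A minimal sketch of the channel-variance screening step is shown below, using a median/MAD rule as one possible form of adaptive thresholding; the paper's exact threshold rule and interpolation scheme are not specified in the abstract, so this is illustrative only.

```python
import numpy as np

def flag_artifact_channels(eeg, k=3.0):
    """Flag channels whose variance exceeds a data-driven threshold:
    median variance + k * MAD across channels. Flagged channels would
    then be repaired by interpolation from neighbouring channels.
    eeg: (n_channels, n_samples) array of one buffered data window."""
    var = eeg.var(axis=1)                     # per-channel variance
    med = np.median(var)                      # robust centre
    mad = np.median(np.abs(var - med))        # robust spread
    return np.where(var > med + k * mad)[0]   # indices of outlier channels
```

A median/MAD rule is robust to the outliers it is trying to detect, which is why it is a common choice for this kind of per-window screening; re-estimating it on every buffer is what makes the threshold "adaptive".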

Keywords: EEG, muscle artifacts, ocular artifacts, real-time artifact correction, real-time ICA

Procedia PDF Downloads 147
109 Fully Coupled Porous Media Model

Authors: Nia Mair Fry, Matthew Profit, Chenfeng Li

Abstract:

This work focuses on the development and implementation of a fully implicit-implicit, coupled mechanical deformation and porous flow finite element software tool. The fully implicit software accurately predicts classical fundamental analytical solutions such as the Terzaghi consolidation problem. Furthermore, it can capture analytical solutions less well known in the literature, such as Gibson’s sedimentation rate problem and Coussy’s problems investigating wellbore stability in poroelastic rocks. The mechanical volume strains are transferred to the porous flow governing equation in an implicit framework. This overcomes several current industrial issues arising from the common practice of using explicit solvers for the mechanical governing equations and implicit solvers only on the porous flow side, which can lead to instability and non-convergence in the coupled system and to results with a non-negligible degree of error. In a fully monolithic implicit-implicit coupled porous media code, both the seepage and mechanical equations are solved in one matrix system under a unified time-stepping scheme, which makes the problem definition much easier. An explicit solver requires additional input, such as the damping coefficient and mass scaling factor, which a fully implicit solution circumvents. Further, improved accuracy is achieved because the solution does not depend on predictor-corrector methods for the pore fluid pressure, though at the potential cost of reduced stability. In testing this fully monolithic porous media code, the fully implicit coupled scheme is compared against an existing staggered explicit-implicit coupled scheme across a range of geotechnical problems. These cases include 1) Biot coefficient calculation, 2) consolidation theory with the Terzaghi analytical solution, 3) sedimentation theory with the Gibson analytical solution, and 4) Coussy wellbore poroelastic analytical solutions.
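The Terzaghi consolidation benchmark cited above has a classical closed-form series solution for the excess pore pressure, which a verification suite can evaluate directly. A sketch for a uniformly loaded layer drained at the top, with Tv = cv·t/H² the dimensionless time factor and H the drainage path length:

```python
import math

def terzaghi_pore_pressure(z_over_H, Tv, u0=1.0, n_terms=100):
    """Classical Terzaghi 1-D consolidation series: excess pore pressure
    u(z, t) for an initially uniform pressure u0, layer drained at the
    top (z = 0), impermeable base at z = H.
    z_over_H: depth / drainage path;  Tv: dimensionless time factor."""
    u = 0.0
    for m in range(n_terms):
        M = math.pi * (2 * m + 1) / 2.0
        u += (2.0 / M) * math.sin(M * z_over_H) * math.exp(-M * M * Tv)
    return u0 * u
```

At Tv = 0 the series recovers the initial pressure u0 in the interior, and the pressure at the impermeable base decays monotonically toward zero as consolidation proceeds, which is exactly the behaviour a coupled code must reproduce to pass this benchmark.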

Keywords: coupled, implicit, monolithic, porous media

Procedia PDF Downloads 114
108 Electron Beam Melting Process Parameter Optimization Using Multi Objective Reinforcement Learning

Authors: Michael A. Sprayberry, Vincent C. Paquit

Abstract:

Process parameter optimization in metal powder bed electron beam melting (MPBEBM) is crucial to ensure the technology's repeatability, control, and continued industry adoption. Despite continued efforts to address the challenges via traditional design of experiments and process mapping techniques, there has been little success in developing an on-the-fly optimization framework that can be adapted to MPBEBM systems. Additionally, data-intensive physics-based modeling and simulation methods are difficult to support for every metal AM alloy or system due to cost restrictions. To mitigate the challenge of resource-intensive experiments and models, this paper introduces a Multi-Objective Reinforcement Learning (MORL) methodology that casts MPBEBM parameter selection as an optimization problem. An off-policy MORL framework based on policy gradients is proposed to discover optimal sets of beam power (P) and beam velocity (v) combinations that maintain a steady-state melt pool depth and phase transformation. For this, an experimentally validated Eagar-Tsai melt pool model is used to simulate the MPBEBM environment, in which the beam acts as the agent across the P-v space to maximize returns for the uncertain powder bed environment, producing a melt pool and phase transformation closer to the optimum. The training process yields a set of process parameters {power, speed, hatch spacing, layer depth, and preheat} in which the state (P, v) with the highest returns corresponds to a refined process parameter mapping. The resulting objects and the mapping of returns onto the P-v space show convergence with experimental observations. The framework therefore provides a model-free multi-objective approach to discovery without the need for trial-and-error experiments.
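To convey the flavour of learning a P-v mapping from rewards, the toy sketch below runs an epsilon-greedy bandit over a small discretized P-v grid. The grid values, the depth ∝ P/v surrogate, and the scalarized two-term reward are all illustrative stand-ins; the paper itself uses an Eagar-Tsai melt pool environment and a policy-gradient MORL agent.

```python
import random

def train_pv_policy(target_depth=0.1, episodes=2000, eps=0.2, lr=0.3):
    """Toy multi-objective bandit over a discretized (power, velocity) grid.
    The reward trades off melt-pool depth error against a speed objective;
    depth ~ k*P/v is a crude stand-in for the Eagar-Tsai model."""
    powers = [100.0, 200.0, 300.0]      # W   (hypothetical grid)
    speeds = [500.0, 1000.0, 2000.0]    # mm/s (hypothetical grid)
    actions = [(P, v) for P in powers for v in speeds]
    q = {a: 0.0 for a in actions}       # estimated return per (P, v)
    k = 0.5                             # toy melt-pool constant
    for _ in range(episodes):
        if random.random() < eps:       # explore
            a = random.choice(actions)
        else:                           # exploit current best estimate
            a = max(q, key=q.get)
        P, v = a
        depth = k * P / v               # surrogate melt-pool depth (mm)
        # scalarized multi-objective reward: depth accuracy + build speed
        reward = -abs(depth - target_depth) - 1e-5 * (2000.0 - v)
        q[a] += lr * (reward - q[a])    # incremental value update
    return max(q, key=q.get)
```

Among the grid points, (200 W, 1000 mm/s) hits the target depth while paying the smallest speed penalty, so the learned mapping converges to it; the same return-maximization logic, lifted to policy gradients over a continuous P-v space, is what the paper's framework performs.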

Keywords: additive manufacturing, metal powder bed fusion, reinforcement learning, process parameter optimization

Procedia PDF Downloads 65
107 Use of Social Media in Political Communications: Example of Facebook

Authors: Havva Nur Tarakci, Bahar Urhan Torun

Abstract:

The transformation wrought by technology, especially internet technology, in every area of life is changing the structure of political communications too. The Internet, foremost among the new communication technologies, affects political communications in a way that no traditional communication tool ever has: its structure enables interaction and a two-way channel between receiver and sender, and it has become one of the most effective tools preferred among political communication applications. This state of affairs, a result of technological convergence, makes the Internet an indispensable medium for political communication campaigns. Political communications, meaning every kind of communication strategy that political parties, the 'actors of political communications', use with the aim of conveying their opinions and party programmes to their present and potential voters, is nowadays frequently conducted through social media tools. The electorate, with its diverse structure, is informed, directed, and managed through social media. Political parties easily reach their electorate through these tools without limitations of time or place, and can also gauge the opinions and reactions of their electorate through the interactivity that characterizes social media. In this context, Facebook, the social media platform that political parties use most, has been part of daily life since 2004. As one of the most popular social networks today, it is among the most visited websites globally. Accordingly, the research is based on the question, 'How do political parties use Facebook in the campaigns they conduct during election periods to inform their voters?' and it aims at clarifying the Facebook usage practices of political parties. To this end, the official Facebook accounts of four political parties (JDP-AKParti, PDP-BDP, RPP-CHP, NMP-MHP), which reach their voters through social media alongside other communication tools, are examined, and a frame for the politics of Turkey is formed. The period of examination is restricted to two weeks in total, one week before and one week after the mayoral elections, when the political parties are assumed to use their Facebook accounts in full swing. As the research method, content analysis is preferred, and the texts and visual elements obtained are interpreted on the basis of this analysis.

Keywords: Facebook, political communications, social media, electorate

Procedia PDF Downloads 358
106 A Theoretical Approach on Electoral Competition, Lobby Formation and Equilibrium Policy Platforms

Authors: Deepti Kohli, Meeta Keswani Mehra

Abstract:

The paper develops a theoretical model of electoral competition with purely opportunistic candidates and a uni-dimensional policy using the probabilistic voting approach, while focusing on the aspect of lobby formation, to analyze the inherent complex interactions between centripetal and centrifugal forces and their effects on equilibrium policy platforms. There exist three types of agents, namely Left-wing, Moderate and Right-wing, who comprise the total voting population. It is assumed that the Left and Right agents are free to initiate a lobby of their choice. If initiated, these lobbies collect donations, which in turn can be contributed to one (or both) of the electoral candidates in order to influence them to implement the lobby’s preferred policy. Four lobby formation scenarios are considered: no lobby, only Left, only Right, and both Left and Right. The equilibrium policy platforms, the individual donations by agents to their respective lobbies, and the contributions offered to the electoral candidates are solved for under each of these four cases. Since the agents cannot coordinate each other’s actions during the lobby formation stage, a lobby forms with some probability, which is also solved for in the model. The results indicate that the policy platforms of the two electoral candidates converge completely in the cases of no lobby and both (extreme) lobbies forming, but diverge when only one (Left or Right) lobby forms. This is because when no lobby is formed, only the centripetal forces (emerging from the election-winning motive) are present, while when both extreme (Left-wing and Right-wing) lobbies are formed, centrifugal forces (emerging from the lobby formation motive) also arise but cancel each other out, again resulting in pure policy convergence. In contrast, when only one lobby is formed, the centripetal and centrifugal forces interact strategically, leading the two electoral candidates to choose completely different policy platforms in equilibrium. Additionally, it is found that in equilibrium, while the donation by a given agent type increases when both lobbies form compared with when only one lobby forms, the probability that the policy advocated by that lobby group is implemented falls.

Keywords: electoral competition, equilibrium policy platforms, lobby formation, opportunistic candidates

Procedia PDF Downloads 311
105 Ill-Posed Inverse Problems in Molecular Imaging

Authors: Ranadhir Roy

Abstract:

Inverse problems arise in medical (molecular) imaging. These problems are characterized by their large scale in three dimensions and by the diffusion equation, which models the physical phenomena within the media. The inverse problems are posed as nonlinear optimization problems in which the unknown parameters are found by minimizing the difference between the predicted data and the measured data. To obtain a unique and stable solution to an ill-posed inverse problem, a priori information must be used. Mathematical conditions for obtaining stable solutions are established in Tikhonov’s regularization method, where the a priori information is introduced via a stabilizing functional, which may be designed to incorporate relevant information about the inverse problem. Effective determination of the Tikhonov regularization parameter requires knowledge of the true solution, or, in the case of optical imaging, the true image. Yet, in clinically based imaging, the true image is not known. To alleviate these difficulties, we have applied the penalty/modified barrier function (PMBF) method instead of the Tikhonov regularization technique to make the inverse problems well-posed. Unlike the Tikhonov regularization method, the constrained optimization technique, which is based on simple bounds on the optical parameter properties of the tissue, can easily be implemented in the PMBF method. Imposing the constraints on the optical properties of the tissue explicitly restricts the solution sets and can restore uniqueness. Like the Tikhonov regularization method, the PMBF method limits the size of the condition number of the Hessian matrix of the given objective function. The accuracy and rapid convergence of the PMBF method require a good initial guess of the Lagrange multipliers. To obtain this initial guess, we use a least-squares unconstrained minimization problem. Three-dimensional images of fluorescence absorption coefficients and lifetimes were reconstructed from contact and non-contact experimentally measured data.
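For contrast with the PMBF approach, the Tikhonov baseline discussed above reduces, in the linear least-squares case, to the regularized normal equations (A·x close to b plus a penalty λ·||x||²); the abstract's actual problem is nonlinear, so this is only a minimal sketch of the regularization mechanism itself.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularized least squares: minimize
    ||A x - b||^2 + lam * ||x||^2, solved via the normal equations
    (A^T A + lam I) x = A^T b. Larger lam stabilizes the system
    (smaller condition number) at the price of biasing x toward zero."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

As λ → 0 the solution approaches the unregularized least-squares answer, while increasing λ shrinks the solution norm; picking λ well is exactly the step that requires knowledge of the true solution, which motivates the PMBF alternative above.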

Keywords: constrained minimization, ill-conditioned inverse problems, Tikhonov regularization method, penalty modified barrier function method

Procedia PDF Downloads 251
104 Threat Modeling Methodology for Supporting Industrial Control Systems Device Manufacturers and System Integrators

Authors: Raluca Ana Maria Viziteu, Anna Prudnikova

Abstract:

Industrial control systems (ICS) have received much attention in recent years due to the convergence of information technology (IT) and operational technology (OT), which has increased the interdependence of the safety and security issues to be considered. These issues require ICS-tailored solutions, which led to the need to create a methodology for supporting ICS device manufacturers and system integrators in carrying out threat modeling of embedded ICS devices in a way that guarantees the quality of the identified threats and minimizes subjectivity in the threat identification process. To research the possibility of creating such a methodology, a set of existing standards, regulations, papers, and publications related to threat modeling in the ICS sector and other sectors was reviewed to identify the various existing threat modeling methodologies and methods. Furthermore, the most popular ones were tested in an exploratory phase on a specific PLC device. The outcome of this exploratory phase was used as a basis for defining specific characteristics of ICS embedded devices and their deployment scenarios, identifying the factors that introduce subjectivity into the threat modeling process for such devices, and defining metrics for evaluating the minimum quality requirements of the identified threats associated with the deployment of the devices in existing infrastructures. The threat modeling methodology was then created based on the results of these steps. The usability of the methodology was evaluated through a set of standardized threat modeling requirements and a standardized comparison method for threat modeling methodologies; the outcomes of these verification methods confirm that the methodology is effective. The full paper includes the outcome of research on the different threat modeling methodologies that can be used in OT, their comparison, and the results of implementing each of them in practice on a PLC device. This research is further used to build a threat modeling methodology tailored to OT environments, of which a detailed description is included. Moreover, the paper includes the results of an evaluation of the created methodology based on a set of parameters specifically created to rate threat modeling methodologies.

Keywords: device manufacturers, embedded devices, industrial control systems, threat modeling

Procedia PDF Downloads 58
103 An Integral Sustainable Design Evaluation of the 15-Minute City and the Processes of Transferability to Cities of the Global South

Authors: Chitsanzo Isaac

Abstract:

Across the world, the ongoing Covid-19 pandemic has challenged urban systems and policy frameworks, highlighting societal vulnerabilities and systemic inequities in many communities. Measures of confinement and social distancing to contain the Covid-19 virus have fragmented the physical and social fabric of cities. This has caused urban dwellers to reassess how they engage with their urban surroundings and maintain social ties. Urbanists have presented strategies that would allow communities to survive, and even thrive, in extraordinary times of crisis like the pandemic. Tactical Urbanism, particularly the 15-Minute City, has gained popularity and is considered a resilient approach in the global north; however, its transferability to the global south has been called into question. To this end, this paper poses the question: to what extent is the 15-Minute City framework integral sustainable design, and are there processes that make it adoptable by cities in the global south? This paper explores four issues using secondary quantitative data analysis and convergence analysis in the Paris and Blantyre urban regions. First, it questions how the 15-Minute City has been defined and measured, and how it impacts urban dwellers. Second, it examines the extent to which the 15-minute city performs under the lens of frameworks such as Wilber’s integral theory and Fleming’s integral sustainable design theory. Third, this work examines the processes that can be transferred to developing cities and that foster community resilience through the perspectives of experience, behaviors, cultures, and systems. Finally, it reviews the principal ways in which a multi-perspective reality can be the basis for resilient community design and sustainable urban development. This work will shed light on the importance of a multi-perspective reality as a means of achieving sustainable urban design goals in developing urban areas.

Keywords: 15-minute city, developing cities, global south, community resilience, integral sustainable design, systems thinking, complexity, tactical urbanism

Procedia PDF Downloads 118
102 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton

Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani

Abstract:

Extreme data in a set of observations can occur due to unusual circumstances. Such data can provide important information that other data cannot, so their existence needs to be investigated further. One method for obtaining extreme data is the block maxima method. The distribution of extreme data sets taken with the block maxima method is called the extreme value distribution, which here is the Gumbel distribution with two parameters. The maximum likelihood (ML) estimates of the Gumbel parameters cannot be determined exactly in closed form, so a numerical approximation is necessary. The purpose of this study was to estimate the parameters of the Gumbel distribution with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method for unconstrained nonlinear optimization, which makes it applicable to parameter estimation for the Gumbel distribution, whose distribution function has a double exponential form. The quasi-Newton BFGS method is a development of Newton's method. Newton's method uses the second derivative to calculate the change in parameter values at each iteration; it is then modified by the addition of a step length to guarantee convergence, since the second derivative can require complex calculations. In the quasi-Newton BFGS method, the exact Hessian is replaced by an approximation that is updated from gradient information at each iteration. Parameter estimation of the Gumbel distribution by this numerical approach is done by finding the parameter values that maximize the likelihood function, for which the gradient vector and the Hessian matrix are needed. This research combines theory and application, drawing on several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and estimates of the Gumbel distribution parameters.
The estimation method is then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. The fitted parameters indicate that the intensity of high rainfall in Purworejo District has decreased, as has the range of observed rainfall.
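A minimal sketch of the approach described above, assuming SciPy's BFGS implementation in place of a hand-coded quasi-Newton routine (SciPy approximates the gradient numerically by default); the synthetic data, sample size, and starting values are illustrative, not the Purworejo rainfall record:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.gumbel(loc=10.0, scale=2.0, size=5000)  # synthetic block maxima

def neg_log_likelihood(params, x):
    """Negative log-likelihood of the two-parameter Gumbel distribution."""
    mu, log_beta = params
    beta = np.exp(log_beta)      # optimise log(beta) so the scale stays positive
    z = (x - mu) / beta
    return x.size * np.log(beta) + np.sum(z + np.exp(-z))

# moment-based starting point, then quasi-Newton BFGS iterations
x0 = [np.median(data), np.log(data.std())]
res = minimize(neg_log_likelihood, x0, args=(data,), method='BFGS')
mu_hat, beta_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, beta_hat)  # close to the true (10, 2)
```

With 5,000 samples the estimates land very close to the generating parameters, which is a quick sanity check before applying the same routine to real rainfall maxima.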

Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton

Procedia PDF Downloads 299
101 Surface Motion of Anisotropic Half Space Containing an Anisotropic Inclusion under SH Wave

Authors: Yuanda Ma, Zhiyong Zhang, Zailin Yang, Guanxixi Jiang

Abstract:

Anisotropy is very common in underground media, such as rock, sand, and soil. Hence, the dynamic response of an anisotropic medium to elastic waves differs significantly from that of an isotropic one. Moreover, underground heterogeneities and structures, such as pipelines, cylinders, or tunnels, are usually made of composite materials, making these heterogeneities and structures anisotropic as well. Both the anisotropy of the underground medium and the heterogeneities affect the surface motion of the ground. Aiming to provide theoretical references for earthquake engineering and seismology, the surface motion of an anisotropic half-space with an embedded cylindrical anisotropic inclusion under SH waves is investigated in this work. Considering the anisotropy of the underground medium, the governing equation of SH wave propagation with three elastic parameters is introduced. Then, based on the complex function method and a multipolar coordinate system, the governing equation in the complex plane is obtained and, with the help of a pair of transformations, brought into standard form. By the same methods, the governing equation of SH wave propagation in the cylindrical inclusion, with another three elastic parameters, is normalized as well. Subsequently, the scattered wave in the half-space and the standing wave in the inclusion are deduced. Different incident wave angles and anisotropies are considered to obtain the reflected wave, and the unknown coefficients in the scattered and standing waves are solved by applying the continuity conditions at the boundary of the inclusion. By truncating the scattered and standing waves to finitely many terms, the boundary-condition equations can be evaluated numerically. After verifying the convergence and precision of the calculation, its validity is further verified by degenerating the model of the problem.
Several parameters that influence the surface displacement of the half-space are considered: the dimensionless wave number, the dimensionless depth of the inclusion, the anisotropy parameters, the wave number ratio, and the shear modulus ratio. Finally, the surface displacement amplitude of the half-space is calculated and discussed for different parameters.

Keywords: anisotropy, complex function method, SH wave, surface displacement amplitude

Procedia PDF Downloads 98
100 Formulating a Definition of Hate Speech: From Divergence to Convergence

Authors: Avitus A. Agbor

Abstract:

Numerous incidents, ranging from trivial to catastrophic, come to mind when one reflects on hate. The victims of these incidents belong to specific identifiable groups within communities. These experiences evoke discussions on Islamophobia, xenophobia, homophobia, anti-Semitism, racism, ethnic hatred, atheism, and other brutal forms of bigotry. Common to all of these is an invisible but potent force that drives them: hatred. Such hatred is usually fueled by a profound degree of intolerance (of diversity) and the zeal to impose on others beliefs and practices considered to be the conventional norm. More importantly, the perpetuation of these hateful acts is the unfortunate outcome of an overplay of invectives and hate speech which, to a great extent, cannot be divorced from hate. From a legal perspective, acknowledging the existence of an undeniable link between hate speech and hate is quite easy. However, both within and outside legal scholarship, the notion of “hate speech” remains a conundrum: a phrase more easily explained through experience than captured in a watertight definition of its entire essence and nature. The problem is further compounded by a few factors. First, within the international human rights framework, the notion of hate speech is not used: in limiting the right to freedom of expression, the ICCPR simply excludes specific kinds of speech (but does not refer to them as hate speech). Regional human rights instruments are not so different, except for subsequent developments in the European Union, where the notion has been carefully delineated and a much clearer picture of what constitutes hate speech is now provided. The legal architecture of domestic legal systems clearly shows differences in approach and regulation, making matters more difficult: what may be hate speech in one legal system may very well be acceptable speech in another.
Lastly, the abundance of academic voices on the issue of hate speech reflects this divergence. In the absence of a well-formulated and universally acceptable definition, it is important to consider how hate speech can be defined. Taking an evidence-based approach, this research looks into the issue of defining hate speech in legal scholarship and examines how and why such a formulation is of critical importance in the prohibition and prosecution of hate speech.

Keywords: hate speech, international human rights law, international criminal law, freedom of expression

Procedia PDF Downloads 45
99 CFD Simulation of the Pressure Distribution in the Upper Airway of an Obstructive Sleep Apnea Patient

Authors: Christina Hagen, Pragathi Kamale Gurmurthy, Thorsten M. Buzug

Abstract:

CFD simulations are performed in the upper airway of a patient suffering from obstructive sleep apnea (OSA), a sleep-related breathing disorder characterized by repetitive partial or complete closures of the upper airways. The simulations are aimed at a better understanding of the pathophysiological flow patterns in an OSA patient and are compared to medical data from a sleep endoscopic examination under sedation. A digital model consisting of surface triangles of the upper airway is extracted from the MR images by a region-growing segmentation process, followed by careful manual refinement. The computational domain includes the nasal cavity, with the nostrils as the inlet areas, and the pharyngeal volume, with an outlet underneath the larynx. At the nostrils, a flat inflow velocity profile is prescribed, with the velocity chosen such that a volume flow rate of 150 ml/s is reached. At the outlet behind the larynx, a pressure of -10 Pa is prescribed. The stationary incompressible Navier-Stokes equations are solved numerically using finite elements, and a grid convergence study has been performed. The results show an amplification of the maximal velocity to about 2.5 times the inlet velocity at a constriction of the pharyngeal volume in the area of the tongue. The same region also shows the highest pressure drop, of about 5 Pa. This is in agreement with the sleep endoscopic examinations of the same patient under sedation, which show complete contractions in the area of the tongue. CFD simulations can become a useful tool in the diagnosis and therapy of obstructive sleep apnea by giving insight into the patient's individual fluid dynamics in the upper airways, providing a better understanding of the disease where experimental measurements are not feasible.
This study has shown, on the one hand, that constrictions within the upper airway lead to a significant pressure drop and, on the other hand, that the region of the pressure drop agrees well with the region of contraction observed endoscopically.
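The consistency of the two reported numbers can be checked with a back-of-the-envelope Bernoulli estimate (this is only an order-of-magnitude sketch, not the finite-element simulation; the air density and the inlet velocity below are assumed values chosen for illustration):

```python
# Bernoulli check: a ~2.5x velocity amplification at a constriction implies
# a pressure drop of 0.5 * rho * (v2^2 - v1^2).
rho = 1.2                       # kg/m^3, assumed density of inspired air
v_inlet = 1.25                  # m/s, assumed mean upstream velocity (illustrative)
v_constriction = 2.5 * v_inlet  # the ~2.5x amplification reported above
dp = 0.5 * rho * (v_constriction**2 - v_inlet**2)
print(round(dp, 2))             # ~4.92 Pa, the same order as the reported ~5 Pa
```

The estimate ignores viscous losses and geometry, but it shows the reported velocity amplification and pressure drop are mutually consistent.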

Keywords: biomedical engineering, obstructive sleep apnea, pharynx, upper airways

Procedia PDF Downloads 282
98 Four-Electron Auger Process for Hollow Ions

Authors: Shahin A. Abdel-Naby, James P. Colgan, Michael S. Pindzola

Abstract:

A time-dependent close-coupling method is developed to calculate total, double, and triple autoionization rates for hollow atomic ions of four-electron systems. This work was motivated by recent observations of the four-electron Auger process in near K-edge photoionization of C+ ions. The time-dependent close-coupled equations are solved using lattice techniques to obtain a discrete representation of the radial wave functions and all operators on a four-dimensional grid with uniform spacing. Initial excited states are obtained by relaxation of the Schrödinger equation in imaginary time, using a Schmidt orthogonalization method involving interior subshells. The radial wave function grids are partitioned over the cores of a massively parallel computer, which is essential due to the large memory required to store the coupled wave functions and the long run times needed to reach convergence of the ionization process. Total, double, and triple autoionization rates are obtained by propagating the time-dependent close-coupled equations in real time, using integration over bound and continuum single-particle states generated by matrix diagonalization of one-electron Hamiltonians. The total autoionization rate for each L excited state is found to be slightly above the single autoionization rate for the excited configuration obtained with configuration-average distorted-wave theory. As expected, we find the double and triple autoionization rates to be much smaller than the total autoionization rates. Future work can extend this approach to study electron-impact triple ionization of atoms or ions. The work was supported in part by grants from the American University of Sharjah and the US Department of Energy. Computational work was carried out at the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California, USA.
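The imaginary-time relaxation used to prepare the initial states can be illustrated on a toy problem, a one-dimensional harmonic oscillator on a uniform grid; this is a sketch of the technique only, far simpler than the four-dimensional close-coupled equations (grid size, time step, and iteration count are illustrative choices):

```python
import numpy as np

# uniform 1-D grid and harmonic potential (units with hbar = m = omega = 1)
n, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
V = 0.5 * x**2

def apply_h(psi):
    """H psi with a 3-point finite-difference Laplacian (Dirichlet edges)."""
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
    return -0.5 * lap + V * psi

psi = np.exp(-x**2)                      # arbitrary starting guess
dtau = 0.001
for _ in range(10000):                   # propagate in imaginary time
    psi = psi - dtau * apply_h(psi)      # explicit Euler step damps excited states
    psi /= np.sqrt(np.sum(psi**2) * dx)  # renormalise after every step

energy = np.sum(psi * apply_h(psi)) * dx  # <psi|H|psi>
print(energy)  # approaches the exact ground-state energy 0.5
```

Excited-state components decay as exp(-E tau), so the normalized wave function relaxes onto the ground state; the same idea, combined with orthogonalization against interior subshells, yields the excited starting states used in the abstract.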

Keywords: hollow atoms, autoionization, auger rates, time-dependent close-coupling method

Procedia PDF Downloads 131
97 Comparison of Extended Kalman Filter and Unscented Kalman Filter for Autonomous Orbit Determination of Lagrangian Navigation Constellation

Authors: Youtao Gao, Bingyu Jin, Tanran Zhao, Bo Xu

Abstract:

The history of satellite navigation dates back to the 1960s. From the U.S. Transit system and the Russian Tsikada system to the modern Global Positioning System (GPS) and the Globalnaya Navigatsionnaya Sputnikovaya Sistema (GLONASS), the performance of satellite navigation has been greatly improved. Nowadays, the navigation accuracy and coverage of these existing systems already fully fulfill the requirements of near-Earth users, but deep space targets remain beyond their reach. Due to the renewed interest in space exploration, a novel high-precision satellite navigation system is becoming even more important. The increasing demand for such a deep space navigation system has contributed to the emergence of a variety of new constellation architectures, such as the Lunar Global Positioning System. Apart from a Walker constellation similar to the one adopted by GPS on Earth, a novel constellation architecture consisting of libration point satellites in the Earth-Moon system is also available for constructing the lunar navigation system, which can accordingly be called the libration point satellite navigation system. The concept of using Earth-Moon libration point satellites for lunar navigation was first proposed by Farquhar and has since been pursued by many other researchers. Moreover, due to the special characteristics of libration point orbits, an autonomous orbit determination technique called ‘LiAISON navigation’ can be adopted by the libration point satellites. Using only scalar satellite-to-satellite tracking data, both the orbits of the user and of the libration point satellites can be determined autonomously. In this way, extensive Earth-based tracking measurements can be eliminated, and an autonomous satellite navigation system can be developed for future space exploration missions.
The choice of state estimator is a non-negligible factor affecting orbit determination accuracy, alongside the orbit type, the initial state accuracy, and the measurement accuracy. We apply the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) to determine the orbits of the Lagrangian navigation satellites, and the autonomous orbit determination errors are compared. The simulation results illustrate that the UKF can improve the accuracy and the z-axis convergence to some extent.
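At the heart of the UKF is the unscented transform, which propagates a small set of deterministically chosen sigma points through the nonlinearity instead of linearizing it as the EKF does. A minimal sketch follows; the range/bearing example and the tuning constants are illustrative, not the Lagrangian orbit model:

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=1.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear map f."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)       # sigma-point spread
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))      # mean weights
    Wc = Wm.copy()                                # covariance weights
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1.0 - alpha**2 + beta)
    ys = np.array([f(p) for p in pts])
    y_mean = Wm @ ys
    diff = ys - y_mean
    y_cov = (Wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# range/bearing measurement mapped to Cartesian coordinates
f = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
mean = np.array([1.0, np.pi / 2])
cov = np.diag([0.01, 0.1])
y_mean, y_cov = unscented_transform(mean, cov, f)
print(y_mean)  # y-component pulled below 1.0, a second-order effect linearization misses
```

An EKF linearization would report the transformed mean as exactly f(mean); the sigma points capture the curvature-induced bias, which is one reason the UKF can converge better on strongly nonlinear dynamics such as libration point orbits.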

Keywords: extended Kalman filter, autonomous orbit determination, unscented Kalman filter, navigation constellation

Procedia PDF Downloads 262
96 Computational Fluid Dynamics Simulation of Turbulent Convective Heat Transfer in Rectangular Mini-Channels for Rocket Cooling Applications

Authors: O. Anwar Beg, Armghan Zubair, Sireetorn Kuharat, Meisam Babaie

Abstract:

In this work, motivated by rocket channel cooling applications, we describe recent CFD simulations of turbulent convective heat transfer in mini-channels at different aspect ratios. ANSYS FLUENT software has been employed, achieving a mean error of 5.97% relative to Forrest's MIT cooling channel study (2014) at a Reynolds number of 50,443 and a Prandtl number of 3.01. This suggests that the turbulent-flow simulation model was a suitable foundation for the study of different channel aspect ratios. Multiple aspect ratios were considered to understand the influence of high aspect ratios and to identify the best-performing cooling channel, which proved to be the highest-aspect-ratio channel: the approximately 28:1 aspect ratio provided the best characteristics for effective cooling. A mesh convergence study was performed to assess the optimum mesh density for accurate results; an element size of 0.05 mm was used, generating 579,120 elements for a properly resolved turbulent flow simulation. Deploying a greater bias factor would concentrate mesh density toward the far edges of the channel, which would be useful if the focus of the study were a single side of the wall. Since a bulk temperature enters the calculations, a suitable bias factor is essential for reliable results; hence, this study used a bias factor of 5 to allow greater mesh density at both edges of the channel. However, limitations on mesh density and hardware curtailed the sophistication achievable for the turbulence characteristics. Also, only straight rectangular channels were considered, i.e., curvature was ignored, and only conventional water coolant was used. The variation of aspect ratio in this CFD study provided a deeper appreciation of the effect of small to high aspect ratios on cooling channels.
Hence, when designing such a channel for an application, the aspect ratio plays a crucial role in optimizing cooling performance.
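For a rectangular mini-channel, the aspect ratio enters the analysis through the hydraulic diameter, D_h = 4A/P. A small supporting calculation is sketched below; the channel dimensions, water properties, and bulk velocity are assumed for illustration, with the velocity chosen so the Reynolds number lands near the 50,443 quoted above:

```python
def hydraulic_diameter(a, b):
    """D_h = 4A/P for a rectangular duct with side lengths a and b."""
    return 4.0 * (a * b) / (2.0 * (a + b))

def reynolds(rho, u, d_h, mu):
    """Re = rho * u * D_h / mu."""
    return rho * u * d_h / mu

# hypothetical 28:1 mini-channel cross-section, in metres
a, b = 28e-3, 1e-3
d_h = hydraulic_diameter(a, b)

# water properties and a bulk velocity assumed for illustration
re = reynolds(rho=998.0, u=26.0, d_h=d_h, mu=1.0e-3)
print(d_h, re)  # D_h ~ 1.93 mm; Re ~ 5.0e4, i.e. well into the turbulent regime
```

High aspect ratios shrink D_h toward twice the short side, which is why the cross-section geometry so strongly shapes both the flow regime and the wall heat transfer.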

Keywords: rocket channel cooling, ANSYS FLUENT CFD, turbulence, convection heat transfer

Procedia PDF Downloads 127
95 Multi-Indicator Evaluation of Agricultural Drought Trends in Ethiopia: Implications for Dry Land Agriculture and Food Security

Authors: Dawd Ahmed, Venkatesh Uddameri

Abstract:

Agriculture in Ethiopia is the main economic sector affected by agricultural drought. A simultaneous assessment of drought trends using multiple drought indicators is useful for drought planning and management. Intra-season and seasonal drought trends in Ethiopia were studied using a suite of drought indicators: the Standardized Precipitation Index (SPI), Standardized Precipitation Evapotranspiration Index (SPEI), Palmer Drought Severity Index (PDSI), and Z-index for the long-rainy, dry, and short-rainy seasons are used to identify drought-causing mechanisms. The statistical software package R (version 3.5.2) was used for data extraction and analysis. Trend analysis indicated that late-season precipitation in the long-rainy season is shifting toward dryness in the southwest and south-central portions of Ethiopia. Droughts during the dry season (October–January) were largely temperature controlled. Short-term temperature-controlled hydrologic processes exacerbated rainfall deficits during the short rainy season (February–May) and highlight the importance of temperature- and hydrology-induced soil dryness for the production of short-season crops such as tef. Droughts during the long-rainy season (June–September) were largely driven by precipitation declines arising from the narrowing of the intertropical convergence zone (ITCZ). Increased dryness during the long-rainy season had severe consequences for the production of corn and sorghum. PDSI was an aggressive indicator of seasonal droughts, suggesting low natural resilience to the effects of slow-acting, moisture-depleting hydrologic processes. The lack of irrigation systems in the nation limits the ability to combat droughts and improve agricultural resilience. There is an urgent need to monitor soil moisture (a key agro-hydrologic variable) to better quantify the impacts of meteorological droughts on agricultural systems in Ethiopia.
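Of the indicators above, the SPI is the simplest to sketch: fit a gamma distribution to the precipitation record, then map each observed total through the fitted CDF to a standard-normal quantile. The snippet below is a minimal illustration on synthetic data and ignores the zero-precipitation correction used in operational SPI calculations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
precip = rng.gamma(shape=2.0, scale=30.0, size=240)  # synthetic monthly totals (mm)

# Fit a gamma distribution to the record (location fixed at zero), then
# map each observed total through its CDF to a standard-normal quantile.
a, loc, scale = stats.gamma.fit(precip, floc=0)
cdf = stats.gamma.cdf(precip, a, loc=loc, scale=scale)
spi = stats.norm.ppf(cdf)  # the SPI value for each month
print(spi.mean(), spi.std())  # roughly 0 and 1 by construction
```

Negative SPI values flag drier-than-normal months; an SPI below about -1.5 is conventionally read as severe drought.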

Keywords: autocorrelation, climate change, droughts, Ethiopia, food security, palmer z-index, PDSI, SPEI, SPI, trend analysis

Procedia PDF Downloads 113
94 Development of an Implicit Coupled Partitioned Model for the Prediction of the Behavior of a Flexible Slender Shaped Membrane in Interaction with Free Surface Flow under the Influence of a Moving Flotsam

Authors: Mahtab Makaremi Masouleh, Günter Wozniak

Abstract:

This research is part of an interdisciplinary project promoting the design of a lightweight, temporarily installable textile flood defence system. If river water levels rise abruptly, especially in winter, massive additional impact loads on a textile protective structure can be expected from floating debris and even tree trunks. Estimating this impulsive force on such structures is of great importance, as it can ensure the reliability of the design in critical cases. This fact motivates the numerical analysis of a fluid-structure interaction application comprising a flexible slender-shaped membrane and free-surface water flow, where an accelerated heavy piece of flotsam approaches the membrane. In this context, both the behavior of the flexible membrane and its interaction with the moving flotsam are analyzed with the finite-element-based explicit and implicit solvers of Abaqus, available as products of SIMULIA software. On the other hand, the behavior of free-surface water flow in response to moving structures has been investigated using the finite volume solver Star-CCM+ from Siemens PLM Software. An automatic communication tool (CSE, the SIMULIA Co-Simulation Engine) and the implementation of an effective partitioned strategy in the form of an implicit coupling algorithm allow the partitioned domains to be coupled robustly. The applied procedure ensures stability and convergence in the solution of these complicated problems, albeit at high computational cost; a further complexity of this study stems from the mesh requirements in the fluid domain where the two structures approach each other. This contribution presents the approaches for establishing a convergent numerical solution and compares the results with experimental findings.
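The implicit partitioned strategy can be illustrated on a scalar toy problem: within each coupling step, the two solvers are sub-iterated until the interface residual vanishes, with Aitken dynamic relaxation stabilizing the fixed-point iteration. The toy "fluid" and "structure" operators below are invented stand-ins for the Star-CCM+ and Abaqus solvers exchanged through the co-simulation engine:

```python
def fluid(d):
    """Toy fluid solver: interface load decreasing with displacement (invented)."""
    return 100.0 - 50.0 * d

def structure(f):
    """Toy structure solver: linear spring, displacement = load / stiffness."""
    return f / 200.0

d, r_old, omega = 0.0, None, 0.5   # initial interface state and relaxation factor
for k in range(50):                # implicit sub-iterations within one coupling step
    d_new = structure(fluid(d))    # one staggered (Gauss-Seidel) solver pass
    r = d_new - d                  # interface residual
    if r_old is not None:
        omega = -omega * r_old / (r - r_old)  # Aitken dynamic relaxation update
    if abs(r) < 1e-12:             # converged: both fields are consistent
        break
    d += omega * r                 # relaxed update of the interface displacement
    r_old = r
print(d)  # settles at the coupled equilibrium d = 0.4
```

Plain fixed-point sub-iteration can diverge when the fluid load responds strongly to the structural motion; the adaptively chosen relaxation factor is what makes the implicit coupling robust.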

Keywords: co-simulation, flexible thin structure, fluid-structure interaction, implicit coupling algorithm, moving flotsam

Procedia PDF Downloads 363
93 Spinoza, Law and Gender Equality in Politics

Authors: Debora Caetano Dahas

Abstract:

In ‘Ethics’ and in ‘A Political Treatise’, Spinoza presents his very influential take on natural law and the principles that guide his philosophical work and observations. Spinoza’s ideas about rationalization, God, and ethical behavior are undeniably relevant to many debates in the field of legal theory. In addition, his views on body, mind, and imagination played a significant role in shaping a certain way of understanding the female figure in Western societies and of her differences from the male figure. It is important to emphasize that the constant and insistent presentation of women as inferior and irrational beings corroborates the institutionalization of discriminatory public policies and practices, legitimized by a legal system that contributes to the aggravation of gender inequalities. His arguments concerning women and their nature have therefore been highly criticized, especially by feminist theorists during the second half of the 20th century. The questioning of this traditional, often phallocentric, philosophy and its way of describing women as irrational and less capable than men, as well as the attempt to reformulate its postulates and concepts, amounts to a deconstruction of classical concepts. Some of the arguments developed by Spinoza, however, can serve as a basis for elucidating in what way and to what extent the social and political construction of feminine identity served as a basis for gender inequality. Thus, based on the observations elaborated by Moira Gatens, the present research addresses the relationship between Spinoza and feminist demands in the juridical and political spheres, elaborating arguments that support the convergence between his philosophy and feminist critical theory.
Finally, this research discusses how feminist critiques of Spinoza’s writings have deconstructed and rehabilitated his principles and, in doing so, how they help to illustrate the importance of his philosophy, and consequently of his notes on natural law, for understanding gender equality as a vital part of the effective implementation of democratic debate and inclusive political participation and representation. To this end, philosophical and legal arguments based on the feminist re-reading of Spinoza’s principles are presented and then used to examine the controversial political reform in Brazil, especially with regard to the applicability of the legislative act known as Law n. 9.504/1997, which establishes that at least 30% of legislative seats must be occupied by women.

Keywords: natural law, feminism, politics, gender equality

Procedia PDF Downloads 142
92 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa

Abstract:

Speaker Identification (SI) is the task of establishing the identity of an individual based on his or her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still room for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstral Coefficients (MFCC) for feature extraction and a suitable combination of vector quantization (VQ) and a Gaussian Mixture Model (GMM), trained with the Expectation-Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. Initializing the GMM with the Linde-Buzo-Gray (LBG) clustering algorithm before EM parameter estimation also improved the convergence rate and system performance. In addition, a relative index is used as a confidence measure when the GMM and VQ classifiers contradict each other in the identification process. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
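A compact numerical sketch of the GMM side of such a system follows. Everything here is an illustrative assumption: synthetic Gaussian "feature frames" stand in for real MFCC vectors, a diagonal-covariance EM fit replaces the full training pipeline, and random data-point initialization stands in for the LBG codebook initialization:

```python
import numpy as np

def fit_gmm(X, k, iters=50, seed=0):
    """EM for a diagonal-covariance GMM; means initialised from data points."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, size=k, replace=False)]
    var = np.tile(X.var(axis=0), (k, 1))
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities from per-component log-densities
        logp = (-0.5 * (((X[:, None] - mu)**2 / var)
                        + np.log(2 * np.pi * var)).sum(axis=2) + np.log(w))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and variances
        nk = r.sum(axis=0) + 1e-12
        w = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ X**2) / nk[:, None] - mu**2 + 1e-6
    return w, mu, var

def avg_loglik(X, model):
    """Average per-frame log-likelihood under a fitted GMM (log-sum-exp)."""
    w, mu, var = model
    logp = (-0.5 * (((X[:, None] - mu)**2 / var)
                    + np.log(2 * np.pi * var)).sum(axis=2) + np.log(w))
    m = logp.max(axis=1, keepdims=True)
    return float((m[:, 0] + np.log(np.exp(logp - m).sum(axis=1))).mean())

rng = np.random.default_rng(1)
frames = {'A': rng.normal(0.0, 1.0, size=(500, 13)),   # 13-dim "MFCC" frames
          'B': rng.normal(3.0, 1.0, size=(500, 13))}
models = {name: fit_gmm(X, k=4) for name, X in frames.items()}

test_frames = rng.normal(3.0, 1.0, size=(200, 13))     # unseen frames from speaker B
best = max(models, key=lambda name: avg_loglik(test_frames, models[name]))
print(best)
```

Identification simply picks the speaker model with the highest average log-likelihood over the test frames, which is the GMM half of the GMM/VQ combination described above.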

Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstral coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)

Procedia PDF Downloads 284
91 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Load forecasting has become crucial in recent years and has attracted considerable attention in the forecasting field. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and healthy, reliable grid systems. Effective power forecasting of renewable energy load helps decision makers minimize the costs of electric utilities and power plants. Forecasting tools are required that can predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. We present models for predicting renewable energy loads based on deep neural networks, especially Long Short-Term Memory (LSTM) algorithms. Deep learning allows multiple layers of models to learn representations of data, and LSTM networks are able to store information over long periods of time. Deep learning models have recently been used to forecast renewable energy sources, such as wind and solar power. Historical load and weather information represent the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 at one-hour resolution, using publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies have been carried out with these data via a deep neural network approach, including the LSTM technique, for Turkish electricity markets. 432 different models were created by varying the layer count, cell count, and dropout. The adaptive moment estimation (ADAM) algorithm was used for training as a gradient-based optimizer instead of stochastic gradient descent (SGD); ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (Mean Absolute Error) and MSE (Mean Squared Error).
The best MAE results among the 432 tested models are 0.66, 0.74, 0.85, and 1.09. The forecasting performance of the proposed LSTM models compares favourably with results reported in the literature.
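As an illustration of the data preparation such models require (a sketch, not the authors' actual pipeline), the hourly series is sliced into fixed-length input windows with the following hour as the prediction target:

```python
import numpy as np

def make_windows(series, n_in, n_out=1):
    """Slice a 1-D series into (samples, n_in) inputs and (samples, n_out) targets."""
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])
        y.append(series[i + n_in:i + n_in + n_out])
    return np.array(X), np.array(y)

hourly_load = np.arange(48, dtype=float)   # stand-in for hourly load readings
X, y = make_windows(hourly_load, n_in=24)  # past 24 hours -> next hour
print(X.shape, y.shape)  # (24, 24) (24, 1)
```

An LSTM layer would additionally expect a trailing feature axis on X (samples, timesteps, features), which is a single reshape away from this layout.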

Keywords: deep learning, long short term memory, energy, renewable energy load forecasting

Procedia PDF Downloads 237
90 Two-Dimensional Analysis and Numerical Simulation of the Navier-Stokes Equations for Principles of Turbulence around Isothermal Bodies Immersed in Incompressible Newtonian Fluids

Authors: Romulo D. C. Santos, Silvio M. A. Gama, Ramiro G. R. Camacho

Abstract:

In this paper, the thermo-fluid dynamics of mixed convection (natural and forced) and the principles of turbulent flow around complex geometries have been studied. In these applications, it was necessary to analyze the interaction between the flow field and a heated immersed body held at constant surface temperature. This paper presents a study of a Newtonian, incompressible, two-dimensional fluid around an isothermal geometry using the immersed boundary method (IBM) with the virtual physical model (VPM). The numerical code used for all simulations computes the temperature under Dirichlet boundary conditions. Important quantities are evaluated, such as the Strouhal number (calculated using the Fast Fourier Transform, FFT), the Nusselt number, drag and lift coefficients, velocity, and pressure. Streamlines and isotherms are presented for each simulation, showing the flow dynamics and patterns. The Navier-Stokes and energy equations for mixed convection were discretized using the finite difference method in space and second-order Adams-Bashforth and fourth-order Runge-Kutta methods in time, with the fractional step method coupling the calculation of pressure, velocity, and temperature. For turbulence, this work used the Smagorinsky and Spalart-Allmaras models. The first is based on the local equilibrium hypothesis for small scales and the Boussinesq hypothesis, such that the energy injected into the turbulence spectrum equals the energy dissipated by convective effects. The Spalart-Allmaras model uses a single transport equation for the turbulent viscosity. The results were compared with numerical data, validating the heat-transfer treatment together with the turbulence models. The IBM/VPM is a powerful tool for simulating flow around complex geometries, and the results showed good numerical convergence with respect to the adopted references.
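As the abstract notes, the Strouhal number is obtained by taking an FFT of a time signal from the flow (typically the lift coefficient) and locating its dominant frequency. A minimal sketch of that post-processing step with NumPy, using a synthetic lift signal with an assumed shedding frequency rather than actual simulation output:

```python
import numpy as np

U, D = 1.0, 1.0          # free-stream velocity and body length scale (assumed)
fs = 100.0               # sampling frequency of the time series (Hz)
t = np.arange(0.0, 20.0, 1.0 / fs)
f_shed = 0.2             # assumed vortex-shedding frequency (Hz)
cl = np.sin(2.0 * np.pi * f_shed * t)   # synthetic lift-coefficient history

# FFT of the signal; the dominant spectral peak gives the shedding frequency
spectrum = np.abs(np.fft.rfft(cl))
freqs = np.fft.rfftfreq(len(cl), d=1.0 / fs)
f_dom = freqs[1 + np.argmax(spectrum[1:])]  # skip the DC component

St = f_dom * D / U       # Strouhal number, St = f D / U
print(St)                # ~0.2 for this synthetic signal
```

In a real simulation, `cl` would be the lift-coefficient history recorded after the flow reaches a statistically steady shedding regime; the window length sets the frequency resolution of the peak.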

Keywords: immersed boundary method, mixed convection, turbulence methods, virtual physical model

Procedia PDF Downloads 93
89 Political Coercion from Within: Theoretical Convergence in the Strategies of Terrorist Groups, Insurgencies, and Social Movements

Authors: John Hardy

Abstract:

The early twenty-first century national security environment has been characterized by political coercion. Despite an abundance of political commentary on the various forms of non-state coercion leveraged against the state, there is a lack of literature which distinguishes between the mechanisms and the mediums of coercion. Frequently non-state movements seeking to coerce the state are labelled by their tactics, not their strategies. Terrorists, insurgencies and social movements are largely defined by the ways in which they seek to influence the state, rather than by their political aims. This study examines the strategies of coercion used by non-state actors against states. This approach includes terrorist groups, insurgencies, and social movements who seek to coerce state politics. Not all non-state actors seek political coercion, so not all examples of different group types are considered. This approach also excludes political coercion by states, focusing on the non-state actor as the primary unit of analysis. The study applies a general theory of political coercion, which is defined as attempts to change the policies or action of a polity against its will, to the strategies employed by terrorist groups, insurgencies, and social movements. This distinguishes non-state actors’ strategic objectives from their actions and motives, which are variables that are often used to differentiate between types of non-state actors and the labels commonly used to describe them. It also allows for a comparative analysis of theoretical perspectives from the disciplines of terrorism, insurgency and counterinsurgency, and social movements. The study finds that there is a significant degree of overlap in the way that different disciplines conceptualize the mechanism of political coercion by non-state actors. 
Studies of terrorism and counterterrorism focus more on notions of cost tolerance and collective punishment; studies of insurgency focus on a contest of legitimacy between actors; and social movement theory tends to link political objectives, social capital, and a mechanism of influence leveraged against the state. Each discipline has its own vernacular for the mechanism of coercion, often linked to the means of coercion, but they converge on three core theoretical components of compelling a polity to change its policies or actions: exceeding its resistance to change, applying political or violent punishments, and withholding legitimacy or consent from the government.

Keywords: counter terrorism, homeland security, insurgency, political coercion, social movement theory, terrorism

Procedia PDF Downloads 154
88 Petrogenesis and Tectonic Implication of the Oligocene Na-Rich Granites from the North Sulawesi Arc, Indonesia

Authors: Xianghong Lu, Yuejun Wang, Chengshi Gan, Xin Qian

Abstract:

The North Sulawesi Arc, located in eastern Indonesia south of the Celebes Sea, forms the northern arm of the K-shaped Sulawesi Island and has had a complex tectonic history since the Cenozoic due to the convergence of three plates (the Eurasian, Indo-Australian, and Pacific plates). Published rock records contain little precise chronology, mostly from K-Ar dating, and scarce geochemical data, which limits understanding of the regional tectonic setting. This study presents detailed zircon U-Pb geochronological, Hf-O isotopic, and whole-rock geochemical analyses of Na-rich granites from the North Sulawesi Arc. Zircon U-Pb analyses of three representative samples yield weighted mean ages of 30.4 ± 0.4 Ma, 29.5 ± 0.2 Ma, and 27.3 ± 0.4 Ma, respectively, revealing Oligocene magmatism in the North Sulawesi Arc. The samples have high Na₂O and low K₂O contents with high Na₂O/K₂O ratios, classifying them as low-K tholeiitic Na-rich granites. They are characterized by high SiO₂ (75.05-79.38 wt.%) and low MgO (0.07-0.91 wt.%) contents and show arc-like trace element signatures. They have low (⁸⁷Sr/⁸⁶Sr)i ratios (0.7044-0.7046), high εNd(t) values (+5.1 to +6.6), high zircon εHf(t) values (+10.1 to +18.8), and low zircon δ¹⁸O values (3.65-5.02). Their Pb isotopic compositions show an Indian-Ocean affinity, with ²⁰⁶Pb/²⁰⁴Pb ratios of 18.16-18.37, ²⁰⁷Pb/²⁰⁴Pb ratios of 15.56-15.62, and ²⁰⁸Pb/²⁰⁴Pb ratios of 38.20-38.66. These geochemical signatures suggest that the Oligocene Na-rich granites of the North Sulawesi Arc formed by partial melting of juvenile oceanic crust metasomatized by sediment-derived fluids in a subduction setting, supporting an intra-oceanic arc origin.
Combined with published studies, the emergence of extensive calc-alkaline felsic arc magmatism can be traced back to the Early Oligocene, following the Eocene back-arc basalts (BAB) that share similarities with the Celebes Sea basement. Since the opening of the Celebes Sea began in the Eocene (42-47 Ma) and ceased by the Early Oligocene (~32 Ma), the geodynamic mechanism that formed the Na-rich granites of the North Sulawesi Arc during the Oligocene may relate to subduction of the Indian Ocean.
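The zircon U-Pb ages quoted above are weighted means over individual spot analyses. A small sketch of the standard inverse-variance weighted mean (the spot ages and uncertainties below are invented for illustration, not the study's data):

```python
def weighted_mean_age(ages, sigmas):
    """Inverse-variance weighted mean of spot ages (Ma) and its 1-sigma error."""
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(a * w for a, w in zip(ages, weights)) / sum(weights)
    error = (1.0 / sum(weights)) ** 0.5
    return mean, error

# Hypothetical zircon spot ages (Ma) with 1-sigma uncertainties
spots  = [30.1, 30.6, 30.3, 30.5]
errors = [0.8, 0.8, 0.8, 0.8]
mean, err = weighted_mean_age(spots, errors)
print(round(mean, 2), round(err, 2))  # 30.38 0.4
```

Weighting each spot by the inverse of its variance gives more precise analyses more influence on the mean, which is why weighted mean ages are reported with smaller uncertainties than any single spot.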

Keywords: North Sulawesi Arc, oligocene, Na-rich granites, in-situ zircon Hf–O analysis, intra-oceanic origin

Procedia PDF Downloads 51