Search results for: λ-statistical convergence
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 603

93 Political Coercion from Within: Theoretical Convergence in the Strategies of Terrorist Groups, Insurgencies, and Social Movements

Authors: John Hardy

Abstract:

The early twenty-first century national security environment has been characterized by political coercion. Despite an abundance of political commentary on the various forms of non-state coercion leveraged against the state, there is a lack of literature which distinguishes between the mechanisms and the mediums of coercion. Frequently, non-state movements seeking to coerce the state are labelled by their tactics, not their strategies. Terrorists, insurgencies, and social movements are largely defined by the ways in which they seek to influence the state, rather than by their political aims. This study examines the strategies of coercion used by non-state actors against states. This approach includes terrorist groups, insurgencies, and social movements that seek to coerce state politics. Not all non-state actors seek political coercion, so not all examples of different group types are considered. This approach also excludes political coercion by states, focusing on the non-state actor as the primary unit of analysis. The study applies a general theory of political coercion, defined as attempts to change the policies or actions of a polity against its will, to the strategies employed by terrorist groups, insurgencies, and social movements. This distinguishes non-state actors' strategic objectives from their actions and motives, which are variables often used to differentiate between types of non-state actors and the labels commonly used to describe them. It also allows for a comparative analysis of theoretical perspectives from the disciplines of terrorism, insurgency and counterinsurgency, and social movements. The study finds that there is a significant degree of overlap in the way that different disciplines conceptualize the mechanism of political coercion by non-state actors. Studies of terrorism and counterterrorism focus more on the notions of cost tolerance and collective punishment, studies of insurgency focus on a contest of legitimacy between actors, and social movement theory tends to link political objectives, social capital, and a mechanism of influence to leverage against the state. Each discipline has a particular vernacular for the mechanism of coercion, often linked to the means of coercion, but they converge on three core theoretical components of compelling a polity to change its policies or actions: exceeding resistance to change, using political or violent punishments, and withholding legitimacy or consent from a government.

Keywords: counter terrorism, homeland security, insurgency, political coercion, social movement theory, terrorism

Procedia PDF Downloads 174
92 Petrogenesis and Tectonic Implication of the Oligocene Na-Rich Granites from the North Sulawesi Arc, Indonesia

Authors: Xianghong Lu, Yuejun Wang, Chengshi Gan, Xin Qian

Abstract:

The North Sulawesi Arc, located in eastern Indonesia to the south of the Celebes Sea, forms the northern part of the K-shape of Sulawesi Island and has had a complex tectonic history since the Cenozoic due to the convergence of three plates (the Eurasian, Indo-Australian, and Pacific plates). Published rock records offer imprecise chronology, mostly from K-Ar dating, and sparse geochemical data, which limits understanding of the regional tectonic setting. This study presents detailed zircon U-Pb geochronological, zircon Hf-O isotopic, and whole-rock geochemical analyses of the Na-rich granites from the North Sulawesi Arc. Zircon U-Pb geochronological analyses of three representative samples yield weighted mean ages of 30.4 ± 0.4 Ma, 29.5 ± 0.2 Ma, and 27.3 ± 0.4 Ma, respectively, revealing Oligocene magmatism in the North Sulawesi Arc. The samples have high Na₂O and low K₂O contents with high Na₂O/K₂O ratios, classifying them as low-K tholeiitic Na-rich granites. The Na-rich granites are characterized by high SiO₂ contents (75.05-79.38 wt.%) and low MgO contents (0.07-0.91 wt.%) and show arc-like trace element signatures. They have low (⁸⁷Sr/⁸⁶Sr)i ratios (0.7044-0.7046), high εNd(t) values (+5.1 to +6.6), high zircon εHf(t) values (+10.1 to +18.8), and low zircon δ¹⁸O values (3.65-5.02‰). They show an Indian-Ocean affinity in Pb isotopic composition, with ²⁰⁶Pb/²⁰⁴Pb ratios of 18.16-18.37, ²⁰⁷Pb/²⁰⁴Pb ratios of 15.56-15.62, and ²⁰⁸Pb/²⁰⁴Pb ratios of 38.20-38.66. These geochemical signatures suggest that the Oligocene Na-rich granites from the North Sulawesi Arc formed by partial melting of juvenile oceanic crust, with sediment-derived fluid-related metasomatism, in a subduction setting, and they support an intra-oceanic arc origin. Combined with published studies, the emergence of extensive calc-alkaline felsic arc magmatism can be traced back to the Early Oligocene, subsequent to the Eocene back-arc basalts (BAB) that are similar to the Celebes Sea basement. Since the opening of the Celebes Sea began in the Eocene (42-47 Ma) and ceased by the Early Oligocene (~32 Ma), the geodynamic mechanism behind the formation of the Oligocene Na-rich granites of the North Sulawesi Arc might be related to the subduction of the Indian Ocean.

Keywords: North Sulawesi Arc, Oligocene, Na-rich granites, in-situ zircon Hf–O analysis, intra-oceanic origin

Procedia PDF Downloads 76
91 Rethinking Confucianism and Democracy

Authors: He Li

Abstract:

Around the mid-1980s, Confucianism was reintroduced into China from Taiwan and Hong Kong as a result of China's policies of reform and openness. Since then, the revival of neo-Confucianism in mainland China has accelerated and become a crucial component of the public intellectual sphere. The term xinrujia or xinruxue, loosely translated as "neo-Confucianism," is increasingly understood as an intellectual and cultural phenomenon of the last four decades. Confucian scholarship is in the process of being restored. This paper examines the Chinese intellectual discourse on Confucianism and democracy and places it in comparative and theoretical perspective. With China's rise and the surge of populism in the West, particularly in the US, the leading political values of Confucianism could increasingly shape both China and the world at large. This state of affairs points to the need for more systematic efforts to assess the discourse on neo-Confucianism and its implications for China's transformation. A number of scholars in the neo-Confucian camp maintain that some elements of Confucianism are not only compatible with democratic values and institutions but actually promote liberal democracy; they refer to this as Confucian democracy. By contrast, others either view Confucianism as a roadblock to democracy or envision that a convergence of democracy with Confucian values could result in a new hybrid system. The paper traces the complex interplay between Confucianism and democracy. It explores ideological differences between neo-Confucianism and liberal democracy and ascertains whether certain features of neo-Confucianism possess an affinity for the authoritarian political system. In addition to printed materials such as books and journal articles, a selection of articles from the website Confucianism in China will be analyzed. This website was selected because it is the leading website run by Chinese scholars focusing on neo-Confucianism; another reason is its accessibility and availability. In the past few years, quite a few websites, left or right, were shut down by the authorities, but this one remains open. This paper explores the core components, dynamics, and implications of neo-Confucianism. My paper is divided into three parts: the first discusses the origins of neo-Confucianism; the second reviews the intellectual discourse among Chinese scholars on Confucian democracy; and the third explores the implications of the Chinese intellectual discourse on neo-Confucianism. Recently, liberal democracy has come into greater conflict with official ideology. This paper, based on my extensive interviews in China prior to the pandemic and on analysis of primary sources in Chinese, will lay the foundation for a chapter on neo-Confucianism and democracy in my next book-length manuscript, tentatively entitled Chinese Intellectual Discourse on Democracy.

Keywords: China, Confucius, Confucianism, neo-Confucianism, democracy

Procedia PDF Downloads 81
90 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain; inversion back to the real domain can then be done in a single, semi-analytic step, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature, to the best of our knowledge, of accurate numerical methods for risk allocation, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. Calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, driven primarily by the number of factors rather than by the number of obligors, as it is in Monte Carlo simulation. 
The limitation of this method lies in the "curse of dimensionality" intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension-reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method cover a wide range: from credit derivatives pricing to economic capital calculation for the banking book, default risk charge and incremental risk charge computation for the trading book, and even risk types other than credit risk.
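The Fourier-inversion idea underlying the abstract can be illustrated with a minimal COS-method sketch: expand the density in cosines on a truncated interval from the characteristic function, then integrate the series analytically to obtain the distribution function. The standard-normal characteristic function, interval, and truncation below are illustrative choices, not the paper's credit-portfolio setup.

```python
import numpy as np

def cos_cdf(phi, x, a, b, N=128):
    """Recover F(x) = P(X <= x) from the characteristic function `phi`
    via the COS method: a cosine expansion of the density on [a, b],
    with each cosine term integrated analytically."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    # Cosine-series coefficients of the density on [a, b]
    F_k = 2.0 / (b - a) * np.real(phi(u) * np.exp(-1j * u * a))
    F_k[0] *= 0.5  # the first term of the COS series is halved
    x = np.atleast_1d(x)
    # Integral of cos(u_k (t - a)) from a to x is sin(u_k (x - a)) / u_k
    terms = np.empty((N, x.size))
    terms[0] = x - a
    terms[1:] = np.sin(np.outer(u[1:], x - a)) / u[1:, None]
    return F_k @ terms

# Sanity check against the standard normal: phi(u) = exp(-u^2 / 2)
phi = lambda u: np.exp(-0.5 * u**2)
print(cos_cdf(phi, 0.0, -10.0, 10.0))  # ~0.5
```

The same machinery applies once `phi` is the (conditional) characteristic function of the portfolio loss implied by a factor-copula model; risk metrics such as VaR then follow by inverting the recovered distribution function.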

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 167
89 Guided Energy Theory of a Particle: Answered Questions Arise from Quantum Foundation

Authors: Desmond Agbolade Ademola

Abstract:

This work aimed to introduce a theory, called the Guided Energy Theory of a particle, that answers questions arising from quantum foundations, quantum mechanics theory, and its interpretations, such as: What is the nature of the wavefunction? Is the mathematical formalism of the wavefunction correct? Does the wavefunction collapse during measurement? Do quantum physical entanglement and many-worlds interpretations really exist? In addition, is there uncertainty in the physical reality of our nature, as concluded in quantum theory? We have been able to show, by the fundamental analysis presented in this work, that the way quantum mechanics theory and its interpretations describe nature is not correlated with physical reality. We discovered, among other things, that: (1) The guided energy theory of a particle fundamentally provides a complete, physically observable series of quantized measurements of a particle's momentum, force, energy, etc., over a given distance and time. In contrast, the quantum mechanics wavefunction describes nature as having inherently probabilistic and indeterministic physical quantities, resulting in unobservable physical quantities that lead to the many-worlds interpretation. (2) The guided energy theory of a particle fundamentally predicts that it is mathematically possible to determine precise quantized measurements of the position and momentum of a particle simultaneously, because there is no uncertainty in nature; nature naturally guards itself against uncertainty. This is contrary to the conclusion in quantum mechanics theory that it is mathematically impossible to determine the position and the momentum of a particle simultaneously. Furthermore, we have been able to show by this theory that it is mathematically possible to determine quantized measurements of the force acting on a particle simultaneously, which is not possible on the premise of quantum mechanics theory. (3) It is evidently shown by our theory that guided energy does not collapse; it only describes the lopsided nature of a particle's behavior in motion. This offers insight into the gradual process of engagement (convergence) and disengagement (divergence) of guided energy holders, which further illustrates how wave-like behavior returns to particle-like behavior and how particle-like behavior returns to wave-like behavior. This further proves that a particle's behavior in motion is oscillatory in nature. The mathematical formalism of the guided energy theory shows that nature is certain, whereas the mathematical formalism of quantum mechanics theory shows that nature is absolutely probabilistic. In addition, the nature of the wavefunction is the guided energy of the wave. In conclusion, the fundamental mathematical formalism of quantum mechanics theory is wrong.

Keywords: momentum, physical entanglement, wavefunction, uncertainty

Procedia PDF Downloads 295
88 Consolidating a Regime of State Terror: A Historical Analysis of Necropolitics and the Evolution of Policing Practices in California as a Former Colony, Frontier, and Late-Modern Settler Society

Authors: Peyton M. Provenzano

Abstract:

This paper draws primarily upon the framework of necropolitics and presents California as itself a former frontier, colony, and late-modern settler society. The convergence of these successive and overlapping regimes of state terror is actualized and traceable through an analysis of historical and contemporary police practices. At the behest of the Spanish Crown and with the assistance of the Spanish military, the Catholic Church led the original expedition to colonize California. The indigenous populations of California were subjected to brutal practices of confinement and enslavement at the missions. After the annexation of California by the United States, the westernmost territory became an infamous frontier where new settlers established vigilante militias to enact violence against indigenous populations to protect their newly stolen land. Early mining settlements sought to legitimize and fund vigilante violence by wielding the authority of rudimentary democratic structures: white settlers circulated petitions for funding to establish a volunteer company under California's Militia Law for 'protection' against the local indigenous populations. The expansive carceral practices of Angelenos at the turn of the 19th century exemplify the way in which California solidified its regime of exclusion as a white settler society. Drawing on recent scholarship that queers the notion of biopower and names police as street-level sovereigns, the police murder of Kayla Moore is understood as the latest manifestation of a carceral regime of exclusion and genocide. Kayla Moore was an African American transgender woman living with a mental health disability who was murdered by Berkeley police responding to a mental health crisis call in 2013. The intersectionality of Kayla's identity made her hyper-vulnerable to state-sanctioned violence. Kayla was a victim not only of the explicitly racial biopower of police and the regulatory state power of necropolitics, but of the 'asphyxia' that was intended to invisibilize both her life and her murder.

Keywords: asphyxia, biopower, California, carceral state, genocide, necropolitics, police, police violence

Procedia PDF Downloads 137
87 Hidro-IA: An Artificial Intelligent Tool Applied to Optimize the Operation Planning of Hydrothermal Systems with Historical Streamflow

Authors: Thiago Ribeiro de Alencar, Jacyro Gramulia Junior, Patricia Teixeira Leite

Abstract:

The area of the electricity sector that coordinates hydroelectric generation to meet energy needs is called Operation Planning of Hydrothermal Power Systems (OPHPS). Its purpose is to find an operating policy that provides electrical power to the system over a given period with reliability and minimal cost. It is therefore necessary to determine an optimal generation schedule for each hydroelectric plant at each stage, so that the system meets demand reliably, avoiding rationing in years of severe drought, and minimizes the expected cost of operation over the planning horizon, defining an appropriate strategy for thermal complementation. Several optimization algorithms specifically applied to this problem have been developed and are in use. Although they provide solutions to various problems encountered, these algorithms have some weaknesses: difficulties in convergence, simplification of the original formulation of the problem, or issues owing to the complexity of the objective function. An alternative to these challenges is the development of more sophisticated and reliable simulation-optimization techniques that can assist operation planning. Thus, this paper presents the development of a computational tool, named Hydro-IA, for solving the identified optimization problem while providing the user with easy handling. The intelligent optimization technique adopted is the Genetic Algorithm (GA), and the programming language is Java. First, the chromosomes were modeled; then the problem's fitness function and the operators involved were implemented; and finally, the graphical user interfaces were designed. The results with the Genetic Algorithm were compared with a nonlinear programming (NLP) optimization technique. Tests were conducted with seven hydraulically interconnected hydroelectric plants using historical streamflow from 1953 to 1955. The comparison between the GA and NLP techniques shows that the operating cost obtained by the GA becomes increasingly smaller than that of the NLP as the number of interconnected hydroelectric plants increases. The program achieved coherent performance in problem resolution without the need to simplify the calculations, together with ease of manipulating the simulation parameters and visualizing the output results.
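The GA loop the abstract describes (chromosome modeling, fitness evaluation, selection, crossover, mutation) follows a generic pattern that can be sketched in miniature. Hydro-IA itself is a Java tool with a full hydrothermal model; the demand profile and quadratic cost below are invented stand-ins used only to show the loop.

```python
import random

random.seed(1)

# Toy stand-in cost: quadratic penalty for deviating from a demand profile.
DEMAND = [50.0, 80.0, 65.0]            # demand per stage (illustrative)

def cost(schedule):                    # chromosome: hydro output per stage
    return sum((d - g) ** 2 for d, g in zip(DEMAND, schedule))

def evolve(pop_size=40, generations=200, bounds=(0.0, 100.0)):
    lo, hi = bounds
    # Random initial population inside the search space
    pop = [[random.uniform(lo, hi) for _ in DEMAND] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                         # selection: rank by fitness
        parents = pop[: pop_size // 2]             # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(DEMAND))  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(len(child))        # mutate one gene
            child[i] = min(hi, max(lo, child[i] + random.gauss(0.0, 5.0)))
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = evolve()
print(best, cost(best))
```

In the real problem, `cost` would be replaced by the expected operating cost of the hydrothermal system, and each chromosome would encode a full generation schedule per plant and stage.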

Keywords: energy, optimization, hydrothermal power systems, artificial intelligence, genetic algorithms

Procedia PDF Downloads 420
86 Seismotectonic Deformations along Strike-Slip Fault Systems of the Maghreb Region, Western Mediterranean

Authors: Abdelkader Soumaya, Noureddine Ben Ayed, Mojtaba Rajabi, Mustapha Meghraoui, Damien Delvaux, Ali Kadri, Moritz Ziegler, Said Maouche, Ahmed Braham, Aymen Arfaoui

Abstract:

The northern Maghreb region (Western Mediterranean) is a key area for studying seismotectonic deformation across the Africa-Eurasia convergent plate boundary. On the basis of young geologic fault-slip data and stress inversion of focal mechanisms, we define a first-order transpression-compatible stress field and a second-order spatial variation of tectonic regime across the Maghreb region, with a relatively stable SHmax orientation from east to west. The present-day active contraction of the western Africa-Eurasia plate boundary is therefore accommodated by (1) E-W strike-slip faulting with a reverse component along the Eastern Tell and Saharan-Tunisian Atlas, (2) predominantly NE-trending thrust faulting with a strike-slip component in the Western Tell, and (3) a conjugate strike-slip faulting regime with a normal component in the Alboran/Rif domain. This spatial variation of the active stress field and tectonic regime is broadly consistent with the stress information inferred from neotectonic features. Following newly suggested structural models, we highlight the role of major, geometrically complex shear zones in the present-day stress pattern of the Maghreb region. The different geometries of these major preexisting strike-slip faults and related fractures (V-shaped conjugate fractures, horsetail splay faults, and Riedel fractures) impose their imprint on the second- and third-order stress regimes. Smoothed present-day and neotectonic stress maps (mean SHmax orientation) reveal that plate boundary forces acting on the colliding Africa-Eurasia plates control the long-wavelength pattern of the stress field in the Maghreb. The seismotectonic deformation and upper crustal stress field in the study area are governed by the interplay of oblique plate convergence (i.e., Africa-Eurasia), lithosphere-mantle interaction, and preexisting tectonic weakness zones.

Keywords: Maghreb, strike-slip fault, seismotectonic, focal mechanism, inversion

Procedia PDF Downloads 122
85 Tunnel Convergence Monitoring by Distributed Fiber Optics Embedded into Concrete

Authors: R. Farhoud, G. Hermand, S. Delepine-lesoille

Abstract:

Cigeo, the future French underground disposal facility for radioactive waste, is designed to store intermediate-level and high-level long-lived French radioactive waste. Intermediate-level waste cells are tunnel-like structures, about 400 m long with a 65 m² cross-section, equipped with several concrete layers, which can be grouted in situ or composed of pre-grouted tunnel elements. The operating space inside the cells, which allows waste containers to be emplaced or removed, must be monitored for several decades without any maintenance. To provide the required information, a design was developed and tested in situ in Andra's underground research laboratory (URL), 500 m below the surface. Based on distributed optical fiber sensors (OFS), using Brillouin backscattering for strain and Raman backscattering for temperature interrogation, the design consists of two loops of OFS, at two different radii, around the monitored section (orthoradial strains) and along the tunnel axis (longitudinal strains). Strains measured by the distributed OFS cables were compared with classical vibrating wire extensometers (VWE) and platinum probes (Pt). The OFS cables comprised two cables sensitive to both strain and temperature and one sensitive to temperature only. Between the sensitive part and the instruments, all cables were connected to hybrid cables to reduce cost. The connections were made using two techniques: splicing the fibers in situ after installation, or fitting each fiber with a connector and simply plugging them together in situ. Another challenge was installing the OFS cables without interruption along a tunnel built in several segments. The first success is the survival rate of the sensors after installation and the quality of the measurements: 100% of the OFS cables intended for long-term monitoring survived installation. A few new configurations were tested with relative success. The measurements obtained were very promising. Indeed, after three years of data, no difference was observed between the OFS cables and connection methods, and the strains fit well with the VWE and Pt sensors placed at the same locations. Data from the Brillouin instrument, which is sensitive to both strain and temperature, were compensated with data provided by the Raman instrument, which is sensitive only to temperature and interrogates a separate fiber. These results provide confidence for the next steps of the qualification process, which consist of testing several data treatment approaches for direct analysis.
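The strain-temperature compensation described here follows a standard relation: the Brillouin frequency shift responds to both strain and temperature, so the Raman temperature channel is used to subtract the thermal contribution before converting to strain. A sketch with typical order-of-magnitude coefficients for standard single-mode fiber (assumed values, not Andra's calibration):

```python
# Temperature compensation of distributed Brillouin strain data: the
# Brillouin frequency shift is C_EPS * strain + C_T * delta_T, so the
# Raman-measured temperature change is used to remove the thermal part.
# Coefficients are typical order-of-magnitude values for standard
# single-mode fiber, not a calibrated system.
C_EPS = 0.05   # MHz per microstrain (assumed strain coefficient)
C_T = 1.0      # MHz per degree C   (assumed temperature coefficient)

def compensated_strain(delta_nu_b_mhz, delta_t_c):
    """Mechanical strain (microstrain) from a measured Brillouin frequency
    shift once the temperature contribution (from Raman sensing) is removed."""
    return (delta_nu_b_mhz - C_T * delta_t_c) / C_EPS

# A 15 MHz shift with 5 degrees C of warming leaves 10 MHz of mechanical
# origin, i.e. about 200 microstrain with these assumed coefficients:
print(compensated_strain(15.0, 5.0))
```

In the deployed system this subtraction would be applied point by point along the fiber, which is why the temperature-only cable is routed alongside the strain-sensing ones.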

Keywords: monitoring, fiber optic, sensor, data treatment

Procedia PDF Downloads 129
84 A Study on Computational Fluid Dynamics (CFD)-Based Design Optimization Techniques Using Multi-Objective Evolutionary Algorithms (MOEA)

Authors: Ahmed E. Hodaib, Mohamed A. Hashem

Abstract:

In engineering applications, a design has to be as close to optimal as possible for a defined case. The designer must overcome many challenges in order to reach the optimal solution to a specific problem; this process is called optimization. Generally, there is a function called the "objective function" that is to be maximized or minimized by choosing input parameters, called "degrees of freedom," within an allowed domain called the "search space," and computing the values of the objective function for these inputs. The problem becomes more complex when a design has more than one objective. An example of a Multi-Objective Optimization Problem (MOP) is a structural design that aims to minimize weight and maximize strength. In such a case, the Pareto Optimal Frontier (POF) is used: a curve plotting the two objective functions for the best trade-off cases. The designer must then decide which point on the curve to choose. Engineers use algorithms or iterative methods for optimization. In this paper, we discuss Evolutionary Algorithms (EA), which are widely used for multi-objective optimization problems due to their robustness, simplicity, and suitability for coupling and parallelization. Evolutionary algorithms are designed to converge toward an optimal solution. An EA uses mechanisms inspired by Darwinian evolutionary principles. Technically, they belong to the family of trial-and-error problem solvers and can be considered global optimization methods with a stochastic character. The optimization is initialized by picking random solutions from the search space, and the solution then progresses toward the optimal point by applying operators such as selection, combination, crossover, and/or mutation. These operators are applied to the old solutions, the "parents," so that new sets of design variables, the "children," appear. The process is repeated until the optimal solution to the problem is reached. Reliable and robust computational fluid dynamics solvers are nowadays commonly utilized in the design and analysis of various engineering systems, such as aircraft, turbomachinery, and automotive vehicles. The coupling of Computational Fluid Dynamics (CFD) and Multi-Objective Evolutionary Algorithms (MOEA) has become widespread in aerospace engineering applications, such as aerodynamic shape optimization and advanced turbomachinery design.
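The Pareto Optimal Frontier mentioned above can be extracted from a finite set of candidate designs with a simple dominance test. A sketch, with invented (weight, negated strength) pairs so that both objectives are minimized:

```python
def dominates(a, b):
    """True if design `a` is at least as good as `b` in every objective
    and strictly better in at least one (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points: the Pareto Optimal Frontier."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Objectives to minimize: (weight, -strength), matching the structural example.
designs = [(10.0, -5.0), (12.0, -7.0), (11.0, -4.0), (9.0, -3.0), (12.0, -6.0)]
print(pareto_front(designs))  # -> [(10.0, -5.0), (12.0, -7.0), (9.0, -3.0)]
```

MOEAs such as NSGA-II essentially apply this dominance ranking inside the evolutionary loop, so the population as a whole is driven toward the frontier rather than toward a single optimum.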

Keywords: mathematical optimization, multi-objective evolutionary algorithms (MOEA), computational fluid dynamics (CFD), aerodynamic shape optimization

Procedia PDF Downloads 256
83 Cr (VI) Adsorption on Ce0.25Zr0.75O2.nH2O-Kinetics and Thermodynamics

Authors: Carlos Alberto Rivera-corredor, Angie Dayana Vargas-Ceballos, Edison Gilpavas, Izabela Dobrosz-Gómez, Miguel Ángel Gómez-García

Abstract:

Hexavalent chromium, Cr(VI), is present in the effluents of industries such as electroplating, mining, and leather tanning. This compound is of great academic and industrial concern because of its toxic and carcinogenic behavior. Its discharge into water sources causes serious environmental and public health problems for animals and humans. The amount of Cr(VI) in industrial wastewaters ranges from 0.5 to 270,000 mg L-1. According to the Colombian standard for water quality (NTC-813-2010), the maximum allowed concentration of Cr(VI) in drinking water is 0.05 mg L-1. To comply with this limit, it is essential that industries treat their effluents to reduce Cr(VI) to acceptable levels. Numerous methods have been reported for removing metal ions from aqueous solutions, such as reduction, ion exchange, and electrodialysis. Adsorption has become a promising method for the purification of metal ions in water, since it is an economic and efficient technology. The selection of the adsorbent, and the kinetic and thermodynamic study of the adsorption conditions, are key to the development of a suitable adsorption technology. Ce0.25Zr0.75O2.nH2O presents the highest adsorption capacity among a series of hydrated mixed oxides Ce1-xZrxO2 (x = 0, 0.25, 0.5, 0.75, 1). This work presents a kinetic and thermodynamic study of Cr(VI) adsorption on Ce0.25Zr0.75O2.nH2O. Experiments were performed under the following conditions: initial Cr(VI) concentration = 25, 50, and 100 mg L-1, pH = 2, adsorbent dose = 4 g L-1, stirring time = 60 min, temperature = 20, 28, and 40 °C. The Cr(VI) concentration was estimated spectrophotometrically by the diphenylcarbazide method, monitoring the absorbance at 540 nm. The Cr(VI) adsorption on hydrated Ce0.25Zr0.75O2.nH2O was analyzed using pseudo-first-order and pseudo-second-order kinetics, and the Langmuir and Freundlich models were used to fit the experimental data. The agreement between the experimental values and those predicted by each model, expressed as the linear regression correlation coefficient (R2), was employed as the model selection criterion. The adsorption process followed pseudo-second-order kinetics and obeyed the Langmuir isotherm model. The thermodynamic parameters were calculated as ΔH° = 9.04 kJ mol-1, ΔS° = 0.03 kJ mol-1 K-1, and ΔG° = -0.35 kJ mol-1, indicating the endothermic and spontaneous nature of the adsorption process, governed by physisorption interactions.
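The pseudo-second-order model used in such studies has the linear form t/qt = 1/(k2·qe²) + t/qe, so qe and k2 follow from a straight-line fit of t/qt against t. A sketch on synthetic data; the qe, k2, and time values are invented for illustration, not the paper's measurements:

```python
import numpy as np

# Pseudo-second-order kinetics: t/qt = 1/(k2*qe^2) + t/qe, so a linear fit
# of t/qt against t gives slope 1/qe and intercept 1/(k2*qe^2).
qe_true, k2_true = 12.0, 0.02          # mg/g and g/(mg*min), illustrative
t = np.array([5.0, 10.0, 20.0, 30.0, 45.0, 60.0])               # min
qt = qe_true**2 * k2_true * t / (1.0 + qe_true * k2_true * t)   # model uptake

slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / slope                   # equilibrium uptake (mg/g)
k2_fit = slope**2 / intercept          # rate constant, since intercept = 1/(k2*qe^2)
print(qe_fit, k2_fit)                  # recovers qe = 12, k2 = 0.02
```

The same linear-regression R² used here as a goodness-of-fit measure is what the abstract describes as the model selection criterion between the kinetic (and isotherm) candidates.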

Keywords: adsorption, hexavalent chromium, kinetics, thermodynamics

Procedia PDF Downloads 300
82 Network Based Speed Synchronization Control for Multi-Motor via Consensus Theory

Authors: Liqin Zhang, Liang Yan

Abstract:

This paper addresses the speed synchronization control problem for a network-based multi-motor system from the perspective of cluster consensus theory. Each motor is considered as a single agent connected through fixed and undirected network. This paper presents an improved control protocol from three aspects. First, for the purpose of improving both tracking and synchronization performance, this paper presents a distributed leader-following method. The improved control protocol takes the importance of each motor’s speed into consideration, and all motors are divided into different groups according to speed weights. Specifically, by using control parameters optimization, the synchronization error and tracking error can be regulated and decoupled to some extent. The simulation results demonstrate the effectiveness and superiority of the proposed strategy. In practical engineering, the simplified models are unrealistic, such as single-integrator and double-integrator. And previous algorithms require the acceleration information of the leader available to all followers if the leader has a varying velocity, which is also difficult to realize. Therefore, the method focuses on an observer-based variable structure algorithm for consensus tracking, which gets rid of the leader acceleration. The presented scheme optimizes synchronization performance, as well as provides satisfactory robustness. What’s more, the existing algorithms can obtain a stable synchronous system; however, the obtained stable system may encounter some disturbances that may destroy the synchronization. Focus on this challenging technological problem, a state-dependent-switching approach is introduced. In the presence of unmeasured angular speed and unknown failures, this paper investigates a distributed fault-tolerant consensus tracking algorithm for a group non-identical motors. 
The failures are modeled by nonlinear functions, and a sliding mode observer is designed to estimate the angular speed and the nonlinear failures. The convergence and stability of the given multi-motor system are proved. Simulation results show that all followers asymptotically converge to a consistent state even when one follower fails to follow the virtual leader under a sufficiently large disturbance, which illustrates the good accuracy of the synchronization control.
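As a minimal illustration of the leader-following consensus idea underlying the protocol (not the authors' observer-based, fault-tolerant algorithm), the following sketch synchronizes four motor speeds over a fixed, undirected network with one motor pinned to the leader reference; the graph, gains, and initial speeds are illustrative assumptions:

```python
import numpy as np

# Leader-following consensus sketch: four follower motors on a fixed,
# undirected network; only motor 0 is pinned to the leader reference.
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 1.],
              [1., 1., 0., 1.],
              [0., 1., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian
b = np.array([1.0, 0.0, 0.0, 0.0])        # pinning gains (motor 0 sees the leader)
leader_speed = 100.0                       # rad/s reference
x = np.array([80.0, 120.0, 90.0, 110.0])   # initial follower speeds
eps = 0.1                                  # step size, small enough for stability

for _ in range(1000):
    # each motor moves toward its neighbors' speeds;
    # the pinned motor additionally tracks the leader reference
    x = x + eps * (-L @ x + b * (leader_speed - x))

sync_error = float(np.max(np.abs(x - leader_speed)))
```

With a connected graph and at least one pinned node, all speeds converge to the leader reference; the observer-based and fault-tolerant extensions in the paper replace the direct state feedback assumed here.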

Keywords: consensus control, distributed follow, fault-tolerant control, multi-motor system, speed synchronization

Procedia PDF Downloads 125
81 Relationship of Macro-Concepts in Educational Technologies

Authors: L. R. Valencia Pérez, A. Morita Alexander, Peña A. Juan Manuel, A. Lamadrid Álvarez

Abstract:

This research identifies explanatory variables related to educational technology and the relationships among them, encompassed in four macro-concepts: cognitive inequality, economy, food, and language. These provide the guideline for a more detailed knowledge of educational systems, communication and equipment, physical space, and teachers; all of these interacting with each other gives rise to what is called educational technology management. These elements contribute to a very specific knowledge of communications equipment, networks and computer equipment, and systems and content repositories. The aim is to establish the importance of knowing the global environment in the transfer of knowledge to poor countries, so that it does not diminish their capacity to be authentic and to preserve their cultures, their languages or dialects, their hierarchies, and their real needs; in short, to respect the customs of the different towns, villages, or cities that are intended to be reached through the use of internationally agreed professional educational technologies. The methodology used in this research is analytical-descriptive, which allows explaining each of the variables that, in our opinion, must be taken into account in order to achieve an optimal incorporation of educational technology in a model that gives results in the medium term. The idea is that concepts are successively integrated into others of greater coverage until reaching macro-concepts of national coverage that serve as elements of conciliation in the different federal and international reforms.
At the center of the model is educational technology, which is directly related to the concepts contained in factors such as the educational system, communication and equipment, spaces, and teachers; these, in turn, are globally immersed in the macro-concepts of cognitive inequality, economy, food, and language. One of the major contributions of this article is to formalize this idea as an algorithm that allows the indicator to be evaluated as impartially as possible, since other indicators are taken from international reference entities such as the OECD in the area of education systems, so that they are not influenced by particular political pressures or interests. This work opens the way for relating the entities involved, whether conceptual, procedural, or human, in order to clearly identify the convergence of their impact on the problem of education and how this relationship can contribute to an improvement; it also shows the possibility of reaching a comprehensive education reform for all.

Keywords: relationships macro-concepts, cognitive inequality, economics, alimentation and language

Procedia PDF Downloads 199
80 6-Degree-Of-Freedom Spacecraft Motion Planning via Model Predictive Control and Dual Quaternions

Authors: Omer Burak Iskender, Keck Voon Ling, Vincent Dubanchet, Luca Simonini

Abstract:

This paper presents a Guidance and Control (G&C) strategy to approach and synchronize with potentially rotating targets. The proposed strategy generates and tracks a safe trajectory for space servicing missions, including tasks such as approaching, inspecting, and capturing. The main objective of this paper is to validate the G&C laws using a Hardware-In-the-Loop (HIL) setup with realistic rendezvous and docking equipment. Throughout this work, the assumption of full relative state feedback is relaxed by using onboard sensors that introduce realistic errors and delays, and the proposed closed-loop approach demonstrates robustness to this challenge. Moreover, the G&C blocks are unified via the Model Predictive Control (MPC) paradigm, and the coupling between translational and rotational motion is addressed via a dual quaternion based kinematic description. G&C is formulated as a convex optimization problem in which constraints such as thruster limits and output constraints are handled explicitly. Furthermore, the Monte-Carlo method is used to evaluate the robustness of the proposed method to initial condition errors, uncertainty in the target's motion and attitude, and actuator errors. A capture scenario is tested on a robotic test bench whose onboard sensors estimate the position and orientation of a drifting satellite through camera imagery. Finally, the approach is compared with currently used robust H-infinity controllers and the guidance profile provided by the industrial partner.
The HIL experiments demonstrate that the proposed strategy is a potential candidate for future space servicing missions because 1) the algorithm is real-time implementable, as convex programming offers deterministic convergence properties and guarantees a finite-time solution; 2) critical physical and output constraints are respected; 3) robustness to sensor errors and uncertainties in the system is proven; and 4) it couples translational motion with rotational motion.
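The dual quaternion kinematic description that couples translation with rotation can be sketched with a minimal toolkit; this is the generic textbook construction, not the authors' G&C implementation, and the poses composed at the end are illustrative:

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def dq_from_pose(qr, t):
    # dual quaternion (qr, qd) with qd = 0.5 * t_quat * qr encoding pose (qr, t)
    return qr, 0.5 * qmul(np.array([0.0, *t]), qr)

def dq_mul(a, b):
    # composition of rigid transforms: (r1 + eps d1)(r2 + eps d2)
    return qmul(a[0], b[0]), qmul(a[0], b[1]) + qmul(a[1], b[0])

def dq_translation(dq):
    # recover the translation part: t = 2 * qd * conj(qr)
    return 2.0 * qmul(dq[1], qconj(dq[0]))[1:]

# compose two pure translations (identity rotation) as a sanity check
identity = np.array([1.0, 0.0, 0.0, 0.0])
pose = dq_mul(dq_from_pose(identity, [1.0, 0.0, 0.0]),
              dq_from_pose(identity, [0.0, 2.0, 0.0]))
t_total = dq_translation(pose)
```

A single dual quaternion state lets the MPC prediction model propagate attitude and position together instead of treating them as decoupled channels.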

Keywords: dual quaternion, model predictive control, real-time experimental test, rendezvous and docking, spacecraft autonomy, space servicing

Procedia PDF Downloads 146
79 Marketing and Pharmaceutical Analysis of Medical Cosmetics in Bulgaria and Japan

Authors: V. Petkova, V. Valchanova, D. Grekova, K. Andreevska, S. T. Geurguiev, V. Madgarov, D. Grekov

Abstract:

Introduction: The production, distribution, and sale of cosmetics is a global industry in which the European Union (EU), the US, and Japan play key roles. The EU is a major participant whose cosmetics market is larger than that of the US and twice that of Japan. The output value of the cosmetics industry in the EU was estimated at about €35 billion in 2001. Nearly 5 billion cosmetic products (number of packages) are sold annually in the EU, the main markets being France, Germany, Italy, Spain, and the UK. The aim of the study is a legal and marketing analysis of cosmetic products dispensed in pharmacies. Materials and methodology: Historical legislative analysis is applied to the changes in the legislative regulation of cosmetic products in Japan and Bulgaria, and comparative legislative analysis is applied when comparing the legislative requirements for cosmetic products in the two countries. Both methods are applied to the following regulations: 1) the Japanese Pharmaceutical Affairs Law, Tokyo, Japan, Ministry of Health, Labour and Welfare; 2) the Bulgarian Law on Medicinal Products for Human Use, effective from 3.01.2014.
Results: The legislative frameworks for medicinal products in Bulgaria and Japan are close and generally include: a definition of a medicinal product; categorization of drugs (with differences in sub-categories); pre-registration and marketing approval by the competent authorities; compulsory compliance with GMP (unlike cosmetics); a regulatory focus on product quality, efficacy, and safety; labeling obligations; and established pharmacovigilance systems with the commitment of all parties, industry and health professionals alike. The main similarities in the regulation of products classified as cosmetics are in the following segments: full producer responsibility for product safety; market surveillance by the regulatory authorities; no need for pre-registration or pre-marketing approval (a basic notification requirement only); no restrictions on sales channels; GMP manuals for cosmetics; a regulatory focus on product safety (rather than efficacy); and general labeling requirements. The main differences lie in the level of detail in the regulation of cosmetic products. Future convergence of the regulatory frameworks can contribute to removing barriers to trade and encouraging innovation, while simultaneously ensuring a high level of protection of consumer safety.

Keywords: cosmetics, legislation, comparative analysis, Bulgaria, Japan

Procedia PDF Downloads 592
78 Artificial Neural Networks Application on Nusselt Number and Pressure Drop Prediction in Triangular Corrugated Plate Heat Exchanger

Authors: Hany Elsaid Fawaz Abdallah

Abstract:

This study presents a new artificial neural network (ANN) model to predict the Nusselt number and pressure drop for turbulent flow in a triangular corrugated plate heat exchanger, for forced air and turbulent water flow. An experimental investigation was performed to create a new dataset of Nusselt number and pressure drop values in the following ranges of dimensionless parameters: plate corrugation angle from 0° to 60°, Reynolds number from 10,000 to 40,000, pitch-to-height ratio from 1 to 4, and Prandtl number from 0.7 to 200. Based on the ANN performance graph, a three-layer structure with {12-8-6} hidden neurons was chosen. The training procedure includes feed-forward propagation of the input parameters, back-propagation with bias and weight adjustment, and evaluation of the loss function on the training and validation datasets. A linear activation function was used at the output layer, while the rectified linear unit (ReLU) activation function was utilized for the hidden layers. To accelerate the ANN training, loss function minimization was performed with the adaptive moment estimation algorithm (Adam). A 'MinMax' normalization approach was utilized to avoid increased training time due to drastic differences in the loss function gradients with respect to the values of the weights. Since the test dataset is not used for ANN training, a cross-validation technique was applied to the network using the new data. This procedure was repeated until loss function convergence was achieved, or for 4,000 epochs with a batch size of 200 points. The program code was written in Python 3 using open-source ANN libraries such as scikit-learn, TensorFlow, and Keras. Mean absolute percentage errors of 9.4% for the Nusselt number and 8.2% for the pressure drop were achieved for the ANN model, a higher accuracy than that of the generalized correlations.
The performance of the obtained model was validated by comparing the predicted data with the experimental results, yielding excellent accuracy.
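A minimal sketch of the network's forward pass, assuming the stated {12-8-6} hidden structure, ReLU hidden activations, a linear output layer, and 'MinMax' input normalization; the weights are random stand-ins (training with Adam is omitted) and the two outputs stand for Nusselt number and pressure drop:

```python
import numpy as np

rng = np.random.default_rng(0)

def minmax(X):
    # "MinMax" normalization to [0, 1], evening out loss-gradient magnitudes
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo)

def relu(z):
    return np.maximum(z, 0.0)

# synthetic inputs: corrugation angle, Reynolds number, pitch/height, Prandtl
X = rng.uniform([0.0, 1e4, 1.0, 0.7], [60.0, 4e4, 4.0, 200.0], size=(100, 4))
Xn = minmax(X)

# {12-8-6} hidden structure; 2 linear outputs (Nusselt number, pressure drop)
sizes = [4, 12, 8, 6, 2]
Ws = [rng.standard_normal((a, b)) * np.sqrt(2.0 / a)
      for a, b in zip(sizes, sizes[1:])]
bs = [np.zeros(b) for b in sizes[1:]]

def forward(Xn):
    h = Xn
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = relu(h @ W + b)        # hidden layers: ReLU activation
    return h @ Ws[-1] + bs[-1]     # output layer: linear activation

Y = forward(Xn)
```

In practice the same architecture would be declared in Keras and fitted with the Adam optimizer against the experimental dataset.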

Keywords: artificial neural networks, corrugated channel, heat transfer enhancement, Nusselt number, pressure drop, generalized correlations

Procedia PDF Downloads 87
77 Laser-Dicing Modeling: Implementation of a High Accuracy Tool for Laser-Grooving and Cutting Application

Authors: Jeff Moussodji, Dominique Drouin

Abstract:

The highly complex technology requirements of today's integrated circuits (ICs) lead to the increased use of several material types, such as metal structures and brittle, porous low-k materials, in both the front end of line (FEOL) and back end of line (BEOL) processes of wafer manufacturing. In order to singulate chips from the wafer, a critical laser-grooving process, prior to blade dicing, is used to remove these layers of material from the dicing street. The combination of laser-grooving and blade dicing reduces the risk of induced mechanical defects, such as micro-cracks and chipping, on the wafer top surface where the circuitry is located. It seems, therefore, essential to have a fundamental understanding of the physics involved in laser-dicing in order to maximize control of these critical processes and reduce their undesirable effects on process efficiency, quality, and reliability. In this paper, the study was based on the convergence of two approaches, numerical and experimental, which allowed us to investigate the interaction of a nanosecond pulsed laser with BEOL wafer materials. To evaluate this interaction, several laser-grooved samples were compared with finite element modeling, in which three different aspects were considered: phase change, thermo-mechanical behavior, and optically sensitive parameters. The mathematical model makes it possible to predict the groove profile (depth, width, etc.) of a single pulse or multiple pulses on BEOL wafer material. Moreover, the heat-affected zone and the thermo-mechanical stress can also be predicted as functions of the laser operating parameters (power, frequency, spot size, defocus, speed, etc.). After model validation and calibration, a satisfying correlation between experimental and modeling results was observed in terms of groove depth, width, and heat-affected zone.
The study proposed in this work is a first step toward implementing a quick assessment tool for the design and debugging of multiple laser-grooving conditions with limited experiments on hardware in industrial applications. More correlations and validation tests are in progress and will be included in the full paper.
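As a hedged order-of-magnitude illustration of the single-pulse thermal interaction (not the paper's calibrated multiphysics model), the extent of the heat-affected zone can be bounded by the thermal diffusion length; the material value for silicon and the 10 ns pulse duration below are illustrative assumptions:

```python
import math

# Order-of-magnitude estimate of the single-pulse heat-affected zone via
# the thermal diffusion length L_d = sqrt(4 * alpha * t_pulse).
alpha_si = 8.8e-5      # thermal diffusivity of silicon, m^2/s (approximate)
t_pulse = 10e-9        # nanosecond-class pulse duration, s (assumed)

L_d = math.sqrt(4.0 * alpha_si * t_pulse)   # diffusion length, m (~micron scale)
```

The micron-scale result explains why thermal damage stays confined near the groove for nanosecond pulses; the FEM model in the paper resolves this zone together with phase change and stress.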

Keywords: laser-dicing, nano-second pulsed laser, wafer multi-stack, multiphysics modeling

Procedia PDF Downloads 209
76 CSoS-STRE: A Combat System-of-System Space-Time Resilience Enhancement Framework

Authors: Jiuyao Jiang, Jiahao Liu, Jichao Li, Kewei Yang, Minghao Li, Bingfeng Ge

Abstract:

Modern warfare has transitioned from the paradigm of isolated combat forces to system-to-system confrontations, due to advances in combat technologies and application concepts. A combat system-of-systems (CSoS) is a combat network composed of independently operating entities that interact with one another to provide overall operational capabilities. Enhancing the resilience of CSoS is garnering increasing attention due to its significant practical value in optimizing network architectures, improving network security, and refining operational planning. Accordingly, a unified framework called CSoS space-time resilience enhancement (CSoS-STRE) is proposed, which enhances the resilience of CSoS by incorporating spatial features. First, a multilayer spatial combat network model is constructed, incorporating an information layer that depicts the interrelations among combat entities based on the OODA loop, along with a spatial layer that considers the spatial characteristics of equipment entities, thereby accurately reflecting the actual combat process. Second, building upon the combat network model, a spatiotemporal resilience optimization model is proposed, which reformulates the resilience optimization problem as a classical linear optimization model with spatial features. The model is further extended from scenarios without obstacles to those with obstacles, emphasizing the importance of spatial characteristics. Third, a resilience-oriented recovery optimization method based on an improved non-dominated sorting genetic algorithm II (R-INSGA) is proposed to determine the optimal recovery sequence for the damaged entities; this method not only considers spatial features but also provides the optimal travel paths for multiple recovery teams. Finally, the feasibility, effectiveness, and superiority of CSoS-STRE are demonstrated through a case study.
Simultaneously, under deliberate attack conditions based on degree centrality and maximum operational loop performance, the proposed CSoS-STRE method is compared with six baseline recovery strategies based on performance, time, degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. The comparison demonstrates that CSoS-STRE achieves faster convergence and superior performance.
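The core ranking step of the NSGA-II family on which R-INSGA builds is fast non-dominated sorting; a minimal generic implementation (for minimization, independent of the paper's specific resilience objectives) looks like:

```python
def dominates(p, q):
    # p dominates q (minimization): no worse in every objective, better in one
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_sort(points):
    # fast non-dominated sorting, the ranking step at the heart of NSGA-II
    S = [set() for _ in points]      # solutions dominated by i
    n = [0] * len(points)            # count of solutions dominating i
    fronts = [[]]
    for i, p in enumerate(points):
        for j, q in enumerate(points):
            if i == j:
                continue
            if dominates(p, q):
                S[i].add(j)
            elif dominates(q, p):
                n[i] += 1
        if n[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                n[j] -= 1
                if n[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# toy objective vectors (e.g., recovery time vs. residual performance loss)
fronts = non_dominated_sort([(1, 1), (2, 2), (1, 2), (2, 1), (3, 3)])
```

R-INSGA would layer its spatial features and multi-team routing on top of this ranking; the toy vectors here are only to show the front structure.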

Keywords: space-time resilience enhancement, resilience optimization model, combat system-of-systems, recovery optimization method, no-obstacles and obstacles

Procedia PDF Downloads 15
75 Spatial Analysis of the Socio-Environmental Vulnerability in Medium-Sized Cities: Case Study of Municipality of Caraguatatuba SP-Brazil

Authors: Katia C. Bortoletto, Maria Isabel C. de Freitas, Rodrigo B. N. de Oliveira

Abstract:

Environmental vulnerability studies are essential for prioritizing disaster risk reduction actions. The aim of this study is to analyze the socio-environmental vulnerability obtained through a census survey, followed by both a statistical analysis (PCA in SPSS/IBM) and a spatial analysis in GIS (ArcGIS/ESRI), taking the Municipality of Caraguatatuba-SP, Brazil, as a case study. In the analysis of the municipal development plan, emphasis was given to the Special Zone of Social Interest (ZEIS), the Urban Expansion Zone (ZEU), and the Environmental Protection Zone (ZPA). For mapping the social and environmental vulnerabilities of the study area, the exposure of people (criticality) and of the place (support capacity) facing disaster risk was obtained from the 2010 Census of the Brazilian Institute of Geography and Statistics (IBGE). Regarding criticality, the variables of greatest influence were related to the literacy of the person responsible for the household, the literacy of persons aged 5 or more, persons aged 60 or more, and the income of the person responsible for the household. In the support capacity analysis, the predominant influences were good household infrastructure in districts with low population density, as well as the presence of neighborhoods with little urban infrastructure and inadequate housing. The results of the comparative analysis show that the areas in the high and very high vulnerability classes cover the ZEIS and ZPA zones, whose zoning includes areas occupied by low-income population, the presence of children and young people, irregular occupations, and land suitable for urbanization but underutilized. The presence of urban expansion zones (ZEU) in areas of high to very high socio-environmental vulnerability reflects an inadequate use of urban land relative to the spatial distribution of the population and the territorial infrastructure, which increases disaster risk.
It can be concluded that the study allowed observing the convergence between the vulnerability analysis and the areas classified in the urban zoning. The occupation of areas unsuitable for housing, due to their risk characteristics, was confirmed; the methodologies applied are thus agile instruments to support disaster risk reduction actions.
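The PCA step of such a criticality analysis can be sketched as below; the indicator matrix is a random stand-in for the IBGE census variables (the column meanings are assumptions), and in the actual workflow the component scores would be mapped per census tract in GIS:

```python
import numpy as np

rng = np.random.default_rng(42)
# columns (assumed): literacy rate, share aged 60+, household income,
# infrastructure score, for 200 synthetic census tracts
X = rng.normal(size=(200, 4))
Z = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize the indicators

# principal components from the eigendecomposition of the covariance matrix
C = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]                # sort by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
explained_ratio = eigvals / eigvals.sum()        # share of variance per component
scores = Z @ eigvecs                             # component scores per tract
```

The leading components summarize correlated census indicators into a few vulnerability axes, which is what makes the subsequent class mapping tractable.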

Keywords: socio-environmental vulnerability, urban zoning, disaster risk reduction, methodologies

Procedia PDF Downloads 298
74 Life Time Improvement of Clamp Structural by Using Fatigue Analysis

Authors: Pisut Boonkaew, Jatuporn Thongsri

Abstract:

In the hard disk drive manufacturing industry, removing unnecessary parts and qualifying parts before assembly is important. Thus, a clamp was designed and fabricated as a fixture for holding parts during the testing process. Improving it by trial and error consumes a long time, so simulation was brought in to improve the part and reduce the time taken. The problem is that the present clamp has a low life expectancy because of the critical stress that occurs. Hence, simulation was used to study the behavior of stress and compressive force and to improve the clamp life expectancy across all candidate designs, 27 in total excluding repeated designs. The candidates were enumerated following the full-factorial design rules of the Six Sigma methodology. Six Sigma is a well-structured method for improving quality by detecting and reducing process variability, so that defects decrease while process capability increases. This research focuses on reducing stress and fatigue while the compressive force remains within the acceptable range set by the company. In the simulation, ANSYS models the 3D CAD under the same conditions as the experiment, and the force at each displacement from 0.01 to 0.1 mm is recorded. The ANSYS setup was verified by a mesh convergence study and by comparing the percentage error with the experimental result, which must not exceed the acceptable range. The improvement then focuses on the degree, radius, and length that reduce stress while remaining within the acceptable force range. Fatigue analysis is performed next, in the ANSYS simulation program, to guarantee that the lifetime is extended.
The simulated design was also compared with the actual clamp in order to observe the difference in fatigue between the two designs. This brings a lifetime improvement of up to 57% compared with the actual clamp used in manufacturing. The study provides a setting precise and trustworthy enough to serve as a reference methodology for future designs. Thanks to the combination and adaptation of the Six Sigma method, finite element analysis, fatigue analysis, and linear regression analysis, which lead to accurate calculation, this project is expected to save up to 60 million dollars annually.
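A simple form of the mesh convergence check described above compares results at successive mesh densities against a percentage tolerance; the stress values and the 2% tolerance below are illustrative assumptions, not the company's acceptance criteria:

```python
def mesh_converged(results, tol_percent=2.0):
    """True when the latest mesh refinement changes the result by less
    than tol_percent (a simple mesh-convergence criterion)."""
    change = abs(results[-1] - results[-2]) / abs(results[-2]) * 100.0
    return change < tol_percent

# critical stress (MPa) at increasing mesh densities; values are illustrative
stresses = [212.0, 198.5, 193.1, 191.8]
ok = mesh_converged(stresses)
```

Once successive refinements agree within tolerance, the mesh is trusted and the percentage error against the physical experiment becomes the deciding check.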

Keywords: clamp, finite element analysis, structural, six sigma, linear regression analysis, fatigue analysis, probability

Procedia PDF Downloads 235
73 Data Clustering Algorithm Based on Multi-Objective Periodic Bacterial Foraging Optimization with Two Learning Archives

Authors: Chen Guo, Heng Tang, Ben Niu

Abstract:

Clustering splits objects into different groups based on similarity, so that objects have higher similarity within the same group and lower similarity across groups. Thus, clustering can be treated as an optimization problem that maximizes intra-cluster similarity or inter-cluster dissimilarity. In real-world applications, datasets often have complex characteristics: sparsity, overlap, high dimensionality, etc. When facing such datasets, simultaneously optimizing two or more objectives can obtain better clustering results than optimizing a single objective. However, apart from objective-weighting methods, traditional clustering approaches have difficulty solving multi-objective data clustering problems. For this reason, researchers have investigated evolutionary multi-objective optimization algorithms for optimizing multiple clustering objectives. In this paper, the Data Clustering algorithm based on Multi-objective Periodic Bacterial Foraging Optimization with two Learning Archives (DC-MPBFOLA) is proposed. First, to reduce the high computational complexity of the original BFO, periodic BFO is employed as the basic algorithmic framework and is then extended to a multi-objective form. Second, two learning strategies are proposed, based on the two learning archives, to guide the bacterial swarm in a better direction: on the one hand, the global best is selected from the global learning archive according to a convergence index and a diversity index; on the other hand, the personal best is selected from the personal learning archive according to the sum of weighted objectives. A chemotaxis operation is designed according to these learning strategies. Third, an elite learning strategy is designed to reinvigorate the objects in the two learning archives.
When the objects in these two archives do not change for two consecutive iterations, randomly reinitializing one dimension of the objects prevents the proposed algorithm from falling into local optima. Fourth, to validate its performance, DC-MPBFOLA is compared with four state-of-the-art evolutionary multi-objective optimization algorithms and one classical clustering algorithm on standard evaluation indexes and datasets. To further verify the effectiveness and feasibility of the designed strategies, variants of DC-MPBFOLA are also proposed. Experimental results demonstrate that DC-MPBFOLA outperforms its competitors on all evaluation indexes and clustering partitions. These results also indicate that the designed strategies positively influence the performance of the original BFO.
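The archive-based selection rules can be sketched as follows; the abstract does not specify the exact index definitions, so the distance-to-ideal-point form of the convergence index is an assumption, and the diversity index is omitted for brevity:

```python
import math

def convergence_index(objs, ideal=(0.0, 0.0)):
    # assumed form: Euclidean distance to the ideal point (smaller is better)
    return math.dist(objs, ideal)

def select_personal_best(archive, weights):
    # personal best: archive member minimizing the sum of weighted objectives
    return min(archive, key=lambda o: sum(w * v for w, v in zip(weights, o)))

# toy archive of minimization objective vectors
personal_archive = [(1.0, 5.0), (3.0, 3.0), (5.0, 1.0)]
pbest = select_personal_best(personal_archive, weights=(0.8, 0.2))
gbest = min(personal_archive, key=convergence_index)  # diversity index omitted
```

In the full algorithm, these two selections steer the chemotaxis step: the swarm moves relative to both the archive-level global best and each bacterium's personal best.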

Keywords: data clustering, multi-objective optimization, bacterial foraging optimization, learning archives

Procedia PDF Downloads 139
72 A Mixed Finite Element Formulation for Functionally Graded Micro-Beam Resting on Two-Parameter Elastic Foundation

Authors: Cagri Mollamahmutoglu, Aykut Levent, Ali Mercan

Abstract:

Micro-beams are one of the most common components of Nano-Electromechanical Systems (NEMS) and Micro-Electromechanical Systems (MEMS). For this reason, static bending, buckling, and free vibration analysis of micro-beams have been the subject of many studies. In addition, micro-beams restrained with elastic-type foundations have been of particular interest. In the analysis of microstructures, closed-form solutions are proposed when available, but most of the time the solutions are based on numerical methods due to the complex nature of the resulting differential equations. Thus, a robust and efficient solution method is of great importance. In this study, a mixed finite element formulation is obtained for a functionally graded Timoshenko micro-beam resting on a two-parameter elastic foundation. In the formulation, modified couple stress theory is utilized for the micro-scale effects. The equation of motion and boundary conditions are derived according to Hamilton's principle. A functional, derived through a systematic procedure based on the Gateaux differential, is proposed for the bending and buckling analysis and is equivalent to the governing equations and boundary conditions. The most important advantage of the formulation is that the mixed finite element formulation allows the usage of C₀-continuous shape functions; thus, shear locking is avoided in a built-in manner. Also, the element matrices are sparsely populated and can be easily calculated with closed-form integration. In this framework, results concerning the effects of the micro-scale length parameter, power-law parameter, aspect ratio, and coefficients of a partially or fully continuous elastic foundation on the static bending, buckling, and free vibration response of the FG micro-beam under various boundary conditions are presented and compared with the existing literature.
The performance characteristics of the presented formulation were evaluated against other numerical methods, such as the generalized differential quadrature method (GDQM). It is found that similar convergence characteristics are obtained with less computational burden. Moreover, the formulation also allows a direct calculation of the micro-scale contributions to the structural response.
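For reference, the static special case of the governing equations takes the following form for a homogeneous Timoshenko beam on a two-parameter (Pasternak) foundation, with the modified couple stress contribution entering as an added bending rigidity; this is a simplified sketch, since in the FG case the stiffness coefficients are thickness-integrated effective quantities rather than the constants shown here:

```latex
% homogeneous static special case (sketch)
\begin{aligned}
&\left(EI + \mu A l^{2}\right)\frac{d^{2}\phi}{dx^{2}}
   + k_{s} G A\left(\frac{dw}{dx} - \phi\right) = 0,\\
&k_{s} G A\left(\frac{d^{2}w}{dx^{2}} - \frac{d\phi}{dx}\right)
   + k_{p}\frac{d^{2}w}{dx^{2}} - k_{w}\, w + q(x) = 0,
\end{aligned}
```

where w is the transverse deflection, φ the cross-section rotation, k_s the shear correction factor, l the material length-scale parameter of modified couple stress theory, and k_w, k_p the Winkler and Pasternak foundation coefficients.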

Keywords: micro-beam, functionally graded materials, two-parameter elastic foundation, mixed finite element method

Procedia PDF Downloads 162
71 The Decision-Making Process of the Central Banks of Brazil and India in Regional Integration: A Comparative Analysis of MERCOSUR and SAARC (2003-2014)

Authors: Andre Sanches Siqueira Campos

Abstract:

Central banks can play a significant role in promoting regional economic and monetary integration by strengthening payment and settlement systems. However, close coordination and cooperation require facilitating the implementation of reforms at the domestic and cross-border levels in order to benchmark against international standards and commitments to the liberal order. This situation reflects the normative power of the regulatory globalization dimension of strong states, which may drive or constrain regional integration. In the MERCOSUR and SAARC regions, central banks have set up financial initiatives that could help South America and South Asia move towards convergent integration and facilitate trade and investment connectivity. This is a qualitative study based on a combination of the process-tracing method with Qualitative Comparative Analysis (QCA). The research draws on multiple forms of data from central banks, regional organisations, national governments, and financial institutions, supported by the existing literature. The aim of this research is to analyze the decision-making processes of the Central Bank of Brazil (BCB) and the Reserve Bank of India (RBI) towards regional financial cooperation, by identifying connectivity instruments that foster, gridlock, or redefine cooperation. The BCB and the RBI manage the monetary policy of the largest economies in their regions, which makes regional cooperation a relevant framework for understanding how they provide an effective institutional arrangement for regional organisations to achieve some of their key policy and economic objectives. The preliminary conclusion is that both the BCB and the RBI demonstrate a reluctance to deepen regional cooperation because of the existing economic, political, and institutional asymmetries.
Deepening regional cooperation is constrained by the interest of central banks in protecting their economies from risks of instability arising from the different degrees of development among countries in their regions and from the international financial crises that have impacted the international system in the 21st century. Reluctant regional integration also preserves autonomy for national development and political ground for the contestation of global financial governance by Brazil and India.

Keywords: Brazil, central banks, decision-making process, global financial governance, India, MERCOSUR, connectivity, payment system, regional cooperation, SAARC

Procedia PDF Downloads 114
70 Experimental and Numerical Investigation on the Torque in a Small Gap Taylor-Couette Flow with Smooth and Grooved Surface

Authors: L. Joseph, B. Farid, F. Ravelet

Abstract:

Fundamental studies have been performed on bifurcation, instabilities, and turbulence in Taylor-Couette flow and applied to many engineering applications, such as astrophysical models of accretion disks, shrouded fans, and electric motors. The performance of such rotating machinery requires a better understanding of the fluid flow distribution in order to quantify power losses and the heat transfer distribution. The present investigation is focused on narrow-gap Taylor-Couette flow at high radius ratio and high rotational speeds, for smooth and grooved surfaces. So far, few works have been done in a very narrow gap and at very high rotation rates and, to the best of our knowledge, none in this combination with a grooved surface. We study numerically the turbulent flow between two coaxial cylinders, where R1 and R2 are the inner and outer radii, respectively, and only the inner cylinder rotates. The gap between the rotor and the stator varies between 0.5 and 2 mm, which corresponds to a radius ratio η = R1/R2 between 0.96 and 0.99 and an aspect ratio Γ = L/d between 50 and 200, where L is the length of the rotor and d the gap between the two cylinders. The scaling of the torque with the Reynolds number is determined at different gaps for different smooth and grooved surfaces (and for different numbers of grooves). The fluid in the gap is air, and Re varies between 8000 and 30000. Another dimensionless parameter that plays an important role in distinguishing the flow regimes is the Taylor number, the ratio between centrifugal and viscous forces (from 6.7 × 10⁵ to 4.2 × 10⁷). The torque is first evaluated with RANS and U-RANS models and compared to empirical models and experimental results. A mesh convergence study has been done for each rotor-stator combination, and the torque results are compared for different meshes in 2D.
For the smooth surfaces, the models used overestimate the torque compared to the empirical equations available in the literature. The models closest to the empirical ones are those that resolve the equations near the wall. The greatest torque is achieved with the grooved surface. The tangential velocity in the gap was always highest between the rotor and the stator, not at the rotor wall, and the largest values occurred in the grooves, in the recirculation zones. In order to avoid endwall effects, long cylinders (100 mm) are used in our setup, and the torque is measured by a co-rotating torquemeter. The rotor is driven by the air turbine of an automotive turbocompressor to reach high angular velocities. Experimental measurements were taken at rotational speeds of up to 50 000 rpm. The first experimental results are in agreement with the numerical ones. Currently, a quantitative study is being performed on the grooved surface to determine the effect of the number of grooves on the torque, both experimentally and numerically.
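The dimensionless groups quoted in the abstract can be cross-checked with a short script. This is a minimal sketch under assumed illustrative values (d = 1 mm, η = 0.98, air at ν ≈ 1.5 × 10⁻⁵ m²/s, 50 000 rpm), which are consistent with but not necessarily identical to the study's configuration; the forms of Re and Ta below are common textbook definitions:

```python
import math

# assumed illustrative values, not the study's exact configuration
nu = 1.5e-5            # kinematic viscosity of air [m^2/s]
d = 1.0e-3             # gap width [m]
eta = 0.98             # radius ratio R1/R2
rpm = 50_000           # rotor speed

R1 = eta * d / (1.0 - eta)          # from eta = R1 / (R1 + d)
omega = rpm * 2.0 * math.pi / 60.0  # angular velocity [rad/s]

Re = omega * R1 * d / nu             # gap Reynolds number
Ta = omega**2 * R1 * d**3 / nu**2    # one common Taylor-number form

print(f"R1 = {R1 * 1e3:.1f} mm, Re = {Re:.0f}, Ta = {Ta:.2e}")
```

With these values both numbers fall inside the ranges quoted in the abstract (Re between 8000 and 30000, Ta between 6.7 × 10⁵ and 4.2 × 10⁷).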

Keywords: Taylor-Couette flow, high gap ratio, grooved surface, high speed

Procedia PDF Downloads 407
69 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

Authors: Arindam Chaudhuri

Abstract:

Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitivity to noisy samples and handles impreciseness in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with a kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. Different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments are done on the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels.
The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. It effectively resolves the effects of outliers and the problems of class imbalance and class overlap, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
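The fuzzy weighting idea can be sketched on a single node; the Hadoop/MapReduce parallelization and the rough-set machinery are omitted. This is an illustrative reconstruction, not the authors' implementation: each sample's membership decays with its distance from its class center, and the memberships are passed as sample weights to an SVM with a hyperbolic tangent (sigmoid) kernel:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# two toy Gaussian classes standing in for the (much larger) real data
X = np.vstack([rng.normal([0, 0], 0.5, (50, 2)),
               rng.normal([3, 3], 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def fuzzy_membership(X, y, delta=1e-3):
    """Membership decays linearly with distance from the class center,
    so noisy samples far from their class contribute less to training."""
    m = np.empty(len(y))
    for c in np.unique(y):
        idx = y == c
        center = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - center, axis=1)
        m[idx] = 1.0 - d / (d.max() + delta)
    return m

weights = fuzzy_membership(X, y)
# sklearn's "sigmoid" kernel is the hyperbolic tangent kernel tanh(gamma <x, x'> + coef0)
clf = SVC(kernel="sigmoid", gamma="scale", coef0=0.0, C=1.0)
clf.fit(X, y, sample_weight=weights)
acc = clf.score(X, y)
print(acc)
```

The membership function here (linear decay with a small `delta` to keep weights positive) is one common choice for fuzzy SVMs; the paper's kernel-space variant would replace the Euclidean distance with a kernel-induced one.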

Keywords: FRSVM, Hadoop, MapReduce, PFRSVM

Procedia PDF Downloads 490
68 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network

Authors: Gulfam Haider, Sana Danish

Abstract:

Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizers profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated for various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work aims to address the challenges surrounding the effectiveness of optimizers by conducting a comprehensive analysis and evaluation. The primary focus of this investigation lies in examining the performance of different optimizers when employed in conjunction with the popular activation function, Rectified Linear Unit (ReLU). By incorporating ReLU, known for its favorable properties in prior research, the aim is to bolster the effectiveness of the optimizers under scrutiny. Specifically, we evaluate the adjustment of these optimizers with both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments are conducted using a well-established benchmark dataset for image classification tasks, namely the Canadian Institute for Advanced Research dataset (CIFAR-10). The selected optimizers for investigation encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), Adaptive Learning Rate Method (Adadelta), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis encompasses a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks. 
By conducting this in-depth study, we contribute to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings gleaned from this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state-of-the-art in the field of image classification with convolutional neural networks.
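The optimizers under comparison differ only in their parameter-update rules. As a minimal sketch (on a toy convex objective, not on CIFAR-10 with a CNN; Adadelta and Adagrad are omitted for brevity), the SGD, RMSprop, and Adam updates can be written out and compared directly:

```python
import numpy as np

def grad(w):                      # gradient of f(w) = 0.5 * ||w||^2
    return w

def sgd(w, g, s, t, lr=0.1):
    return w - lr * g

def rmsprop(w, g, s, t, lr=0.05, rho=0.9, eps=1e-8):
    s["v"] = rho * s.get("v", 0.0) + (1 - rho) * g**2   # running mean of squared grads
    return w - lr * g / (np.sqrt(s["v"]) + eps)

def adam(w, g, s, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    s["m"] = b1 * s.get("m", 0.0) + (1 - b1) * g        # first moment
    s["v"] = b2 * s.get("v", 0.0) + (1 - b2) * g**2     # second moment
    m_hat = s["m"] / (1 - b1**t)                        # bias correction
    v_hat = s["v"] / (1 - b2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

def run(update, steps=200):
    w, state = np.array([5.0, -3.0]), {}
    for t in range(1, steps + 1):
        w = update(w, grad(w), state, t)
    return float(np.linalg.norm(w))

final = {name: run(u) for name, u in [("SGD", sgd), ("RMSprop", rmsprop), ("Adam", adam)]}
print(final)
```

On a real CNN these rules act per-parameter on the backpropagated gradients; the convergence-speed and robustness differences the study measures come from exactly these update equations interacting with the loss landscape.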

Keywords: deep neural network, optimizers, RMSprop, ReLU, stochastic gradient descent

Procedia PDF Downloads 125
67 Analyzing Water Waves in Underground Pumped Storage Reservoirs: A Combined 3D Numerical and Experimental Approach

Authors: Elena Pummer, Holger Schuettrumpf

Abstract:

To date, no underground pumped storage plants exist, although they are an outstanding alternative to classical pumped storage plants. They are needed to ensure the required balance between the production and demand of energy. As short- to medium-term storage, pumped storage plants have been operated economically for a long time, but the scope for their expansion is locally limited. The main reasons are the required topography and the extensive human land use. Using underground reservoirs instead of surface lakes could increase the expansion options. While fulfilling the same functions, several hydrodynamic processes result from the specific design of the underground reservoirs and must be accounted for in the planning of such systems. A combined 3D numerical and experimental approach yields previously unknown results on the occurring wave types and their behavior as a function of different design and operating criteria. For the 3D numerical simulations, OpenFOAM was used, combined with an experimental approach in the laboratory of the Institute of Hydraulic Engineering and Water Resources Management at RWTH Aachen University, Germany. Using the finite-volume method and an explicit time discretization, a RANS simulation (k-ε) was run. Convergence analyses for different time discretizations, different meshes, etc., and detailed comparisons between both approaches lead to the result that the numerical and experimental models can be combined and used as a hybrid model. Undular bores, partly with secondary waves, and breaking bores occurred in the underground reservoir. Different water levels and discharges change the global effects, defined as the time-dependent average of the water level, as well as the local processes, defined as the single, local hydrodynamic processes (water waves).
Design criteria such as branches, directional changes, changes in cross-section or bottom slope, as well as changes in roughness have a great effect on the local processes, while the global effects remain unaffected. Design calculations for underground pumped storage plants were developed on the basis of existing formulae and the results of the hybrid approach. Using these design calculations, reservoir heights as well as oscillation periods can be determined, informing the construction and operation of such plants. Consequently, future plants can be hydraulically optimized by applying the design calculations to the local boundary conditions.
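The paper's design calculations are not reproduced in the abstract. As a hedged illustration of the kind of estimate involved, the fundamental oscillation period of a closed rectangular basin follows the classical Merian formula T = 2L/√(gh); the study's own formulae, which account for branches, cross-section changes, and roughness, will differ:

```python
import math

def shallow_water_celerity(h, g=9.81):
    """Small-amplitude shallow-water wave speed c = sqrt(g * h)."""
    return math.sqrt(g * h)

def oscillation_period(L, h):
    """Fundamental period of a closed basin of length L (Merian formula): T = 2L / c."""
    return 2.0 * L / shallow_water_celerity(h)

# assumed illustrative dimensions: a 100 m long reservoir with 2 m water depth
T = oscillation_period(100.0, 2.0)
print(f"T = {T:.1f} s")
```

The reservoir length and depth above are placeholders chosen for illustration, not values from the study.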

Keywords: energy storage, experimental approach, hybrid approach, undular and breaking bores, 3D numerical approach

Procedia PDF Downloads 213
66 An Adjoint-Based Method to Compute Derivatives with Respect to Bed Boundary Positions in Resistivity Measurements

Authors: Mostafa Shahriari, Theophile Chaumont-Frelet, David Pardo

Abstract:

Resistivity measurements are used to characterize the Earth’s subsurface. They are categorized into two different groups: (a) those acquired on the Earth’s surface, for instance, controlled-source electromagnetics (CSEM) and magnetotellurics (MT), and (b) those recorded with borehole logging instruments such as Logging-While-Drilling (LWD) devices. LWD instruments are mostly used for geo-steering purposes, i.e., to adjust the dip and azimuthal angles of a well trajectory to drill along a particular geological target. Modern LWD tools measure all nine components of the magnetic field corresponding to three orthogonal transmitter and receiver orientations. In order to map the Earth’s subsurface and perform geo-steering, we invert measurements using a gradient-based method that utilizes the derivatives of the recorded measurements with respect to the inversion variables. For resistivity measurements, these inversion variables are usually the constant resistivity value of each layer and the bed boundary positions. It is well known how to compute derivatives with respect to the constant resistivity value of each layer using semi-analytic or numerical methods. However, similar formulas for computing the derivatives with respect to bed boundary positions are unavailable. The main contribution of this work is to provide an adjoint-based formulation for computing derivatives with respect to the bed boundary positions. The key idea to obtain the aforementioned adjoint-state formulations for the derivatives is to separate the tangential and normal components of the field and treat them differently. This formulation allows us to compute the derivatives faster and more accurately than with traditional finite-difference approximations. In the presentation, we shall first derive a formula for computing the derivatives with respect to the bed boundary positions for the potential equation. Then, we shall extend our formulation to the 3D Maxwell’s equations.
Finally, by considering a 1D domain and reducing the dimensionality of the problem, which is a common practice in the inversion of resistivity measurements, we shall derive a formulation to compute the derivatives of the measurements with respect to the bed boundary positions using a 1.5D variational formulation. Then, we shall illustrate the accuracy and convergence properties of our formulations by comparing numerical results with the analytical derivatives for the potential equation. For the 1.5D Maxwell’s system, we shall compare our numerical results based on the proposed adjoint-based formulation vs those obtained with a traditional finite difference approach. Numerical results shall show that our proposed adjoint-based technique produces enhanced accuracy solutions while its cost is negligible, as opposed to the finite difference approach that requires the solution of one additional problem per derivative.
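What a derivative with respect to a bed boundary position means can be shown on a far simpler problem than the paper's: a 1D two-layer DC potential model with conductivities σ1, σ2 and an interface at depth b. This toy model and its closed-form solution are our own illustration, not the authors' formulation; it only demonstrates that a central finite difference, which costs two extra forward solves, reproduces the analytic boundary-position derivative:

```python
# toy 1D two-layer potential: -d/dz(sigma * du/dz) = 0, u(0) = 0, u(1) = 1,
# conductivity sigma = s1 for z < b and s2 for z > b; measurement at zm < b
def measurement(b, s1=1.0, s2=5.0, zm=0.3):
    c = 1.0 / (b / s1 + (1.0 - b) / s2)   # constant current density
    return c * zm / s1                    # potential recorded at zm

def d_analytic(b, s1=1.0, s2=5.0, zm=0.3):
    """Closed-form derivative of the measurement w.r.t. the interface depth b."""
    c = 1.0 / (b / s1 + (1.0 - b) / s2)
    return -(zm / s1) * c * c * (1.0 / s1 - 1.0 / s2)

b, h = 0.5, 1e-6
fd = (measurement(b + h) - measurement(b - h)) / (2.0 * h)  # two extra solves
exact = d_analytic(b)
print(fd, exact)
```

In the paper's setting each "solve" is a full 1.5D or 3D electromagnetic simulation, which is why an adjoint formulation that avoids the per-derivative extra solves is attractive.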

Keywords: inverse problem, bed boundary positions, electromagnetism, potential equation

Procedia PDF Downloads 178
65 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images

Authors: Eiman Kattan, Hong Wei

Abstract:

In using a Convolutional Neural Network (CNN) for classification, there is a set of hyperparameters available for configuration purposes. This study aims to evaluate the impact of a range of parameters in a CNN architecture, i.e., AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on classification performance. The parameters concerned are epoch values, batch size, and convolutional filter size against input image size. Thus, a set of experiments was conducted to quantify the effectiveness of the selected parameters using two implementation approaches, namely pretrained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128 and 200). The impact of the kernel size of convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was evaluated against the image sizes under testing (64, 96, 128, 180 and 224), which gave us insight into the relationship between the size of convolutional filters and image size. To generalise the validation, four remote sensing datasets, AID, RSD, UCMerced and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as the number of classes, the amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments. It has shown efficiency in both training and testing. The results have shown that increasing the number of epochs leads to a higher accuracy rate, as expected. However, the convergence state is highly dataset-dependent. For the batch size evaluation, it has been shown that a larger batch size slightly decreases the classification accuracy compared to a small batch size.
For example, selecting the value 32 as the batch size on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. At the other extreme, increasing the batch size to 200 decreases the accuracy rate at the 11th epoch to 86.5%, and to 63% when using one epoch only. On the other hand, the choice of kernel size is only loosely related to the dataset. From a practical point of view, a filter size of 20 produces an accuracy of 70.4286%. The final image-size experiment shows that accuracy improves with larger input images; however, this performance gain comes at a considerable computational cost. These conclusions open opportunities for better classification performance in various applications, such as planetary remote sensing.
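The interplay between filter size and input image size that the experiments probe follows from the standard convolution output-size formula, out = (n + 2p − k) / s + 1. A small helper (the training itself is omitted) enumerates the tested grid and the resulting feature-map sizes:

```python
from itertools import product

def conv_out(n, k, stride=1, pad=0):
    """Output width of a convolution of a k-wide filter over an n-wide input."""
    return (n + 2 * pad - k) // stride + 1

# the hyperparameter values listed in the abstract
batch_sizes = [32, 64, 128, 200]
kernel_sizes = [1, 3, 5, 7, 10, 15, 20, 25, 30]
image_sizes = [64, 96, 128, 180, 224]

# every (kernel, image) pairing in the study yields a valid feature map
feature_maps = {(k, n): conv_out(n, k) for k, n in product(kernel_sizes, image_sizes)}
n_runs = len(batch_sizes) * len(feature_maps)
print(n_runs, feature_maps[(7, 224)])
```

Even the largest filter (30) against the smallest image (64) leaves a 35-wide feature map, so the whole grid is well defined; the epoch sweep multiplies the run count further.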

Keywords: CNNs, hyperparameters, remote sensing, land cover, land use

Procedia PDF Downloads 169
64 Deep Reinforcement Learning Approach for Trading Automation in The Stock Market

Authors: Taylan Kabbani, Ekrem Duman

Abstract:

The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts to generate successful trades in financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) addresses these drawbacks of SL approaches by combining the asset price "prediction" step and the portfolio "allocation" step into one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, i.e., the agent-environment interaction, as a Partially Observable Markov Decision Process (POMDP), considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment.
From the standpoint of stock market forecasting and intelligent decision-making, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and establishes its credibility and advantages for strategic decision-making.
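The environment side of the formulation can be sketched as a gym-style loop. This is a deliberately simplified stand-in, not the paper's environment: random-walk returns replace real prices, the recent-returns window replaces the ten technical indicators and sentiment features, and a placeholder policy replaces the TD3 agent. Continuous actions (target weights) and transaction costs on turnover are the two POMDP ingredients it does illustrate:

```python
import numpy as np

class TradingEnv:
    """Minimal sketch of a continuous-action, multi-asset trading environment."""

    def __init__(self, n_assets=3, window=5, cost=0.001, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n, self.window, self.cost = n_assets, window, cost

    def reset(self):
        # illustrative random-walk returns; a real environment would use market data
        self.returns = self.rng.normal(0.0, 0.01, (200, self.n))
        self.t = self.window
        self.weights = np.zeros(self.n)
        self.value = 1.0
        return self.returns[self.t - self.window:self.t].ravel()

    def step(self, action):
        action = np.clip(action, -1.0, 1.0)               # target portfolio weights
        turnover = np.abs(action - self.weights).sum()    # re-allocation magnitude
        r = action @ self.returns[self.t] - self.cost * turnover
        self.value *= 1.0 + r
        self.weights = action
        self.t += 1
        done = self.t >= len(self.returns)
        obs = self.returns[self.t - self.window:self.t].ravel()
        return obs, r, done

env = TradingEnv()
obs, done = env.reset(), False
while not done:
    obs, r, done = env.step(np.tanh(obs[:env.n]))  # placeholder for the TD3 actor
print(round(env.value, 4))
```

A TD3 agent would replace the placeholder policy with its learned actor network, which maps the (partial) observation to continuous weights; the environment interface above is all the algorithm requires.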

Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent

Procedia PDF Downloads 178