Search results for: Legendre’s conjecture
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 54

24 Evaluation of Quasi-Newton Strategy for Algorithmic Acceleration

Authors: T. Martini, J. M. Martínez

Abstract:

An algorithmic acceleration strategy based on quasi-Newton (or secant) methods is presented to address the practical problem of accelerating the convergence of the Newton-Lagrange method in the case of convergence to critical multipliers. Since the Newton-Lagrange iteration converges locally at a linear rate, it is natural to conjecture that quasi-Newton methods based on the so-called secant equation and some minimal variation principle could converge superlinearly, thus restoring the convergence properties of Newton's method. This strategy can also be applied to accelerate the convergence of algorithms for fixed-point problems. Computational experience is reported illustrating the efficiency of this strategy for solving fixed-point problems with a linear convergence rate.
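To illustrate the kind of secant-based acceleration discussed here (a generic sketch, not the authors' specific algorithm), the following Python fragment applies a Broyden-type update to the residual g(x) = F(x) - x of a linearly convergent fixed-point map; the map F and all numerical choices are hypothetical.

```python
import numpy as np

def broyden_accelerated_fixed_point(F, x0, tol=1e-10, max_iter=50):
    """Accelerate the fixed-point iteration x <- F(x) with a Broyden (secant)
    update applied to the residual g(x) = F(x) - x.
    Generic sketch for illustration, not the algorithm of the cited paper."""
    x = np.asarray(x0, dtype=float)
    g = F(x) - x
    B = -np.eye(x.size)                 # initial approximation of g'(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -g)      # quasi-Newton step: B s = -g
        x_new = x + s
        g_new = F(x_new) - x_new
        if np.linalg.norm(g_new) < tol:
            return x_new
        y = g_new - g
        # Broyden "good" update enforcing the secant equation B_new s = y
        B += np.outer(y - B @ s, s) / (s @ s)
        x, g = x_new, g_new
    return x

# Example: a contraction with linear convergence rate 0.9 (fixed point x = 10)
F = lambda x: 0.9 * x + 1.0
print(broyden_accelerated_fixed_point(F, np.array([0.0])))   # ~[10.]
```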

Keywords: algorithmic acceleration, fixed-point problems, nonlinear programming, quasi-Newton method

Procedia PDF Downloads 488
23 A Modified Decoupled Semi-Analytical Approach Based On SBFEM for Solving 2D Elastodynamic Problems

Authors: M. Fakharian, M. I. Khodakarami

Abstract:

In this paper, an improvement to the semi-analytical method based on scaled boundaries for solving 2D elastodynamic problems is presented. In this approach, only the boundaries of the problem domain are discretized, using specific sub-parametric elements. A class of higher-order Lagrange polynomials is used for the mapping functions and, together with special shape functions, Gauss-Lobatto-Legendre numerical integration, and the integral form of the weighted residual method, the coefficient matrices in the elastodynamic equations become diagonal. The differences between this study and prior research lie in the geometry description procedure and in the choice of interpolation functions and integration scheme. The validity and accuracy of the present method are fully demonstrated through two benchmark problems, which are successfully modeled using a small number of DOFs. The numerical results agree very well with the analytical solutions and with results from other numerical methods.
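For context, the Gauss-Lobatto-Legendre (GLL) quadrature used above places nodes at the endpoints ±1 and at the roots of the derivative of the Legendre polynomial of one lower degree. The hedged Python sketch below computes GLL nodes and weights with NumPy; it is the standard textbook construction, not code from the paper.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def gauss_lobatto_legendre(n):
    """Return the n Gauss-Lobatto-Legendre nodes and weights on [-1, 1].
    Nodes: the endpoints -1 and 1 plus the roots of P'_{n-1}.
    Weights: w_i = 2 / (n (n - 1) P_{n-1}(x_i)^2).  Standard construction."""
    P = Legendre.basis(n - 1)                    # Legendre polynomial P_{n-1}
    interior = np.sort(P.deriv().roots().real)   # roots of P'_{n-1}
    nodes = np.concatenate(([-1.0], interior, [1.0]))
    weights = 2.0 / (n * (n - 1) * P(nodes) ** 2)
    return nodes, weights

# The rule is exact for polynomials up to degree 2n - 3;
# with n = 5 it integrates x^2 over [-1, 1] to 2/3.
x, w = gauss_lobatto_legendre(5)
print(np.sum(w * x**2))   # ~0.6667
```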

Keywords: 2D elastodynamic problems, Lagrange polynomials, Gauss-Lobatto-Legendre quadrature, decoupled SBFEM

Procedia PDF Downloads 444
22 Analytical Investigation of Viscous and Non-Viscous Fluid Particles in a Restricted Region Using Diffusion Magnetic Resonance Imaging Equation

Authors: Yusuf S. I., Saba A., Olaoye D. O., Ibrahim J. A., Yahaya H. M., Jatto A. O.

Abstract:

Nuclear Magnetic Resonance (NMR) technology has been applied in several ways to provide vital information about petro-physical properties of reservoirs. However, due to the need to study the molecular behaviour of fluid particles in different restricted media, the diffusion magnetic resonance equation is here applied in spherical coordinates and solved analytically using the method of separation of variables, with the resulting Legendre equation solved by the Frobenius method. The viscous fluid considered in this research work is unused engine oil, while the non-viscous fluid is water. The results obtained show that water begins to manifest appreciable change at a radial adjustment value of 10 and a magnetization of 2.31191995400015 × 10¹⁴, and relaxes finally at 2.30 × 10¹⁴ at a radial adjustment value of 1. On the other hand, unused engine oil begins to manifest its changes at a radial adjustment value of 40 and a magnetization of 1.466557018 × 10¹⁴, and relaxes finally at 1.48 × 10¹⁴ at a radial adjustment value of 5.
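For readers unfamiliar with the reduction mentioned above, a hedged textbook sketch (the paper's notation may differ): separating variables in the spherical-coordinate diffusion equation and substituting x = cos θ turns the angular part into Legendre's equation, which the Frobenius (power-series) method then solves.

```latex
% Angular equation from separation of variables (textbook form):
\frac{1}{\sin\theta}\,\frac{d}{d\theta}\!\left(\sin\theta\,\frac{d\Theta}{d\theta}\right)
  + \ell(\ell+1)\,\Theta = 0,
\qquad x = \cos\theta
\;\Longrightarrow\;
(1 - x^{2})\,\frac{d^{2}\Theta}{dx^{2}} - 2x\,\frac{d\Theta}{dx} + \ell(\ell+1)\,\Theta = 0 .
```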

Keywords: viscous and non-viscous fluid, restricted medium, relaxation times, coefficient of diffusion

Procedia PDF Downloads 83
21 Individualized Emotion Recognition Through Dual-Representations and Group-Established Ground Truth

Authors: Valentina Zhang

Abstract:

While facial expression is a complex and individualized behavior, all facial emotion recognition (FER) systems known to us rely on a single facial representation and are trained on universal data. We conjecture that: (i) different facial representations can provide different, sometimes complementary views of emotions; (ii) when employed collectively in a discussion group setting, they enable more accurate emotion reading, which is highly desirable in autism care and other error-sensitive application contexts. In this paper, we first study FER using pixel-based DL vs semantics-based DL in the context of deepfake videos. Our experiment indicates that while the semantics-trained model performs better on articulated facial feature changes, the pixel-trained model outperforms on subtle or rare facial expressions. Armed with these findings, we have constructed an adaptive FER system that learns from both types of models for dyadic or small interacting groups and further leverages the synthesized group emotions as the ground truth for individualized FER training. Using a collection of group conversation videos, we demonstrate that FER accuracy and personalization can benefit from such an approach.
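Purely as an illustration of combining the two model types described above (not the authors' system), the sketch below fuses the softmax outputs of a pixel-based and a semantics-based model, weighting each by its own confidence; the label set, model outputs, and fusion rule are all hypothetical.

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

def fuse_predictions(pixel_probs, semantic_probs):
    """Confidence-weighted fusion of two FER models' softmax outputs.
    Hypothetical rule for illustration only."""
    pixel_probs = np.asarray(pixel_probs, dtype=float)
    semantic_probs = np.asarray(semantic_probs, dtype=float)
    w_pixel, w_semantic = pixel_probs.max(), semantic_probs.max()
    fused = (w_pixel * pixel_probs + w_semantic * semantic_probs) / (w_pixel + w_semantic)
    return EMOTIONS[int(np.argmax(fused))], fused

label, _ = fuse_predictions(
    [0.10, 0.02, 0.03, 0.70, 0.05, 0.05, 0.05],   # pixel-based model output
    [0.05, 0.05, 0.05, 0.40, 0.10, 0.05, 0.30],   # semantics-based model output
)
print(label)   # "happiness"
```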

Keywords: neurodivergence care, facial emotion recognition, deep learning, ground truth for supervised learning

Procedia PDF Downloads 147
20 Total Chromatic Number of Δ-Claw-Free 3-Degenerated Graphs

Authors: Wongsakorn Charoenpanitseri

Abstract:

The total chromatic number χ"(G) of a graph G is the minimum number of colors needed to color the elements (vertices and edges) of G such that no incident or adjacent pair of elements receives the same color. Let G be a graph with maximum degree Δ(G). Consider a total coloring of G and focus on a vertex of maximum degree: this vertex needs one color, and the Δ(G) edges incident to it need Δ(G) further distinct colors, so coloring all vertices and edges of G requires at least Δ(G) + 1 colors. That is, χ"(G) is at least Δ(G) + 1. However, no graph G is known whose total chromatic number is greater than Δ(G) + 2. The Total Coloring Conjecture states that for every graph G, χ"(G) is at most Δ(G) + 2. In this paper, we prove the Total Coloring Conjecture for Δ-claw-free 3-degenerated graphs; that is, we prove that the total chromatic number of every Δ-claw-free 3-degenerated graph is at most Δ(G) + 2.
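In symbols, the bounds discussed above read as follows (standard notation; the phrasing is not quoted from the paper):

```latex
% Lower bound from a maximum-degree vertex and its incident edges,
% together with the conjectured upper bound:
\Delta(G) + 1 \;\le\; \chi''(G) \;\le\; \Delta(G) + 2 ,
% where the left inequality always holds and the right one is the
% Total Coloring Conjecture, proved here for Delta-claw-free
% 3-degenerated graphs.
```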

Keywords: total coloring, total chromatic number, 3-degenerated graphs, claw-free graphs

Procedia PDF Downloads 174
19 Climate Policy Actions for Sustaining International Agricultural Development Projects: The Role of Non-State, Sub-National Stakeholder Engagements, and Monitoring and Evaluation

Authors: Emmanuel Dwamena Sasu

Abstract:

International climate policy actions require countries under the Paris Agreement to design instruments, provide support (financial and technical), and strengthen institutional capacity, with the aim of moving beyond policy formulation to implementation and sustainability. Changes associated with moisture depletion have become a growing phenomenon, especially in developing countries, with a projected global GDP drop from 7% to 2% between 2005 and 2050. These developments have the potential to adversely affect the food production needed to feed the growing world population, with a corresponding rise in global hunger. Incongruously, there is a global absence of a harmonized policy direction capable of providing the required indicators on climate policies for monitoring the sustainability of international agricultural development projects. We conduct an extensive review and synthesis of the existing limitations on global climate policy governance, agricultural food security and the sustainability of international agricultural development projects, and we conjecture the role of non-state and sub-national climate stakeholder engagements, and of monitoring and evaluation strategies, in improving climate policy action for sustaining international agricultural development projects.

Keywords: climate policy, agriculture, development projects, sustainability

Procedia PDF Downloads 125
18 The Impacts of Local Decision Making on Customisation Process Speed across Distributed Boundaries

Authors: Abdulrahman M. Qahtani, Gary B. Wills, Andy M. Gravell

Abstract:

Communicating and managing customers’ requirements in software development projects plays a vital role in the software development process. While it is difficult to do so locally, it is even more difficult to communicate these requirements over distributed boundaries and to convey them to multiple distributed customers. This paper discusses the communication of multiple distributed customers’ requirements in the context of customised software products. The main purpose is to understand the challenges of communicating and managing customisation requirements across distributed boundaries. We propose a model for Communicating Customisation Requirements of Multi-Clients in a Distributed Domain (CCRD). Thereafter, we evaluate that model by presenting the findings of a case study conducted with a company running customisation projects for 18 distributed customers. Then, we compare the outputs of the real case process and the outputs of the CCRD model using simulation methods. Our conjecture is that the CCRD model can reduce the challenge of communicating requirements over distributed organisational boundaries, as well as the delay in decision making and in the overall customisation process time.

Keywords: customisation software products, global software engineering, local decision making, requirement engineering, simulation model

Procedia PDF Downloads 429
17 A Method to Compute Efficient 3D Helicopters Flight Trajectories Based On a Motion Polymorph-Primitives Algorithm

Authors: Konstanca Nikolajevic, Nicolas Belanger, David Duvivier, Rabie Ben Atitallah, Abdelhakim Artiba

Abstract:

Finding the optimal 3D path of an aerial vehicle under flight mechanics constraints is a major challenge, especially when the algorithm has to produce real-time results in flight. Kinematics models and Pythagorean Hodograph curves have been widely used in mobile robotics to address this problem. The level of difficulty is mainly driven by the number of constraints to be saturated at the same time while minimizing the total length of the path. In this paper, we suggest a pragmatic algorithm capable of simultaneously saturating most of the constraints that dimension helicopter 3D trajectories: curvature, curvature derivative, torsion, torsion derivative, climb angle, climb angle derivative, and positions. The trajectory generation algorithm is able to generate versatile, complex 3D motion primitives feasible by a helicopter, with parameterization of the curvature and the climb angle. An upper-level “motion primitives concatenation” algorithm is also presented. In this article we introduce a new way of designing three-dimensional trajectories based on what we call the “Dubins gliding symmetry conjecture”. This high-performance algorithm will soon be integrated into a real-time decisional system dealing with in-flight safety issues.
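For reference, the curvature and torsion constraints listed above are, for a smooth parameterized path r(t), the standard differential-geometric quantities (generic Frenet formulas, not the paper's specific parameterization):

```latex
% Curvature and torsion of a 3D path r(t):
\kappa(t) \;=\; \frac{\lVert r'(t) \times r''(t) \rVert}{\lVert r'(t) \rVert^{3}},
\qquad
\tau(t) \;=\; \frac{\bigl(r'(t) \times r''(t)\bigr)\cdot r'''(t)}{\lVert r'(t) \times r''(t) \rVert^{2}} .
```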

Keywords: robotics, aerial robots, motion primitives, helicopter

Procedia PDF Downloads 615
16 Voting Behavior in an Era of Turbulent Race Relations: Revisiting Church Attendance and Turnout

Authors: JoVontae Butts

Abstract:

A central and enduring theme in the study of American politics is political participation, which indicates the health of a democracy, citizen buy-in, and fair political representation. Though voting push factors have been thoroughly researched and are becoming better understood, the effect of those same push factors often varies for marginalized people. Black voters began to cast votes at a steadily increasing rate following the 1996 election, gradually growing to its highest level in the 2012 presidential election, even surpassing white voter participation rates. The thirty-year growth period of Black voter engagement concluded in the 2016 election, with the number of participating Black voters falling by approximately 7% while other demographics remained roughly the same. Theories for the shift in Black voter behavior range from vote suppression to discouragement due to Barack Obama’s concluding tenure in office. Furthermore, Black voter engagement rebounded in the 2020 election, leaving turnout and race scholars to speculate even further, predicting that disapproval of Trump energized the Black voter bloc. Though there is much conjecture regarding the changes in Black voter behavior, there is truly little empirical evidence to vet those suppositions. This study engages with and quantifies speculations about the changes in Black voter engagement in recent elections using 2016 and 2020 American National Election Studies Pilot Study data. Additionally, this study expands upon McGregor’s theory of political hypervigilance by exploring differences in political engagement between church-attending Black voters and those who do not attend.

Keywords: race, religion, evangelicalism, political engagement

Procedia PDF Downloads 81
15 The Non-Existence of Perfect 2-Error Correcting Lee Codes of Word Length 7 over Z

Authors: Catarina Cruz, Ana Breda

Abstract:

Tiling problems have been capturing the attention of many mathematicians due to their real-life applications. In this study, we deal with tilings of Zⁿ by Lee spheres, where n is a positive integer, these tilings being related to error-correcting codes for the transmission of information over a noisy channel. We focus our attention on the question ‘for what values of n and r does the n-dimensional Lee sphere of radius r tile Zⁿ?’. It seems that the n-dimensional Lee sphere of radius r does not tile Zⁿ for n ≥ 3 and r ≥ 2. Here, we prove that it is not possible to tile Z⁷ with Lee spheres of radius 2, presenting a proof based on a combinatorial method and faithful to the geometric idea of the problem. The non-existence of such tilings has been studied by several authors, the cases in which the radius of the Lee spheres is equal to 2 being considered the most difficult. The relation between these tilings and error-correcting codes is established by considering the center of a Lee sphere as a codeword and the other elements of the sphere as words which are decoded by the central codeword. When the Lee spheres of radius r centered at elements of a set M ⊂ Zⁿ tile Zⁿ, M is a perfect r-error correcting Lee code of word length n over Z, denoted by PL(n, r). Our strategy to prove the non-existence of PL(7, 2) codes is based on the assumption of the existence of such a code M. Without loss of generality, we suppose that O ∈ M, where O = (0, ..., 0). In this sense, and taking into account that we are dealing with Lee spheres of radius 2, O covers all words which are distant two or fewer units from it. By the definition of a PL(7, 2) code, each word which is distant three units from O must be covered by a unique codeword of M. These words have to be covered by codewords which are distant five units from O. We prove the non-existence of PL(7, 2) codes by showing that it is not possible to cover all the referred words without superposition of Lee spheres whose centers are distant five units from O, contradicting the definition of a PL(7, 2) code. We achieve this contradiction by combining the cardinalities of particular subsets of codewords which are distant five units from O. There exists an extensive literature on codes in the Lee metric. Here, we present a new approach to prove the non-existence of PL(7, 2) codes.
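As a quick sanity check on the counting involved (a standard computation, not a step quoted from the paper), the number of words of Zⁿ covered by a Lee sphere of radius 2 is

```latex
% Cardinality of the Lee sphere of radius 2 in Z^n:
|B_n(2)| \;=\; 1 + 2n + 2n + 4\binom{n}{2} \;=\; 2n^{2} + 2n + 1 ,
\qquad |B_7(2)| = 113 ,
% so a PL(7, 2) code would have to partition Z^7 into translates of a
% 113-word sphere.
```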

Keywords: Golomb-Welch conjecture, Lee metric, perfect Lee codes, tilings

Procedia PDF Downloads 160
14 Convergence Results of Two-Dimensional Homogeneous Elastic Plates from Truncation of Potential Energy

Authors: Erick Pruchnicki, Nikhil Padhye

Abstract:

Plates are important engineering structures which have attracted extensive research since the 19th century. The subject of this work is the statical analysis of a linearly elastic homogeneous plate under small deformations. A 'thin plate' is a three-dimensional structure with a small transverse dimension with respect to a flat mid-surface. The general aim of any plate theory is to deduce a two-dimensional model, in terms of mid-surface quantities, that approximately and accurately describes the plate's deformation. In recent decades, a common starting point for this purpose has been a series expansion of the displacement field across the thickness dimension in terms of the thickness parameter (h). These attempts are mathematically consistent in deriving leading-order plate theories based on certain a priori scalings between the thickness and the applied loads; for example, asymptotic methods are aimed at generating leading-order two-dimensional variational problems by postulating formal asymptotic expansions of the displacement fields. Such methods rigorously generate a hierarchy of two-dimensional models depending on the order of magnitude of the applied load with respect to the plate thickness. However, in practice, applied loads are external and thus not directly linked to or dependent on the geometry/thickness of the plate, rendering any such model (based on a priori scaling) of limited practical utility. In other words, the main limitation of these approaches is that they do not furnish a single plate model for all orders of applied loads. Following the analogy of recent efforts that deploy Fourier-series expansions to study the convergence of reduced models, we propose two-dimensional model(s) resulting from truncation of the potential energy and rigorously prove the convergence of these two-dimensional plate models to the parent three-dimensional linear elasticity with increasing truncation order of the potential energy.
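A hedged sketch of the kind of thickness-wise expansion underlying such truncations (illustrative notation, not taken from the paper): the displacement field is expanded across the thickness coordinate z ∈ [−h/2, h/2] in Legendre polynomials, and the potential energy is truncated at order N.

```latex
% Thickness-wise Legendre expansion of the displacement field (illustrative):
u_i(x, y, z) \;=\; \sum_{k=0}^{N} u_i^{(k)}(x, y)\, P_k\!\left(\frac{2z}{h}\right),
\qquad i = 1, 2, 3,
% where the P_k are Legendre polynomials and the u_i^{(k)} are mid-surface
% quantities; the two-dimensional model follows from truncating the
% potential energy at order N.
```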

Keywords: plate theory, Fourier-series expansion, convergence result, Legendre polynomials

Procedia PDF Downloads 110
13 Thomas Kuhn, the Accidental Theologian: An Argument for the Similarity of Science and Religion

Authors: Dominic McGann

Abstract:

Applying Kuhn’s model of paradigm shifts in science to cases of doctrinal change in religion has been a common area of study in recent years. Few authors, however, have sought an explanation for the ease with which this model of theory change in science can be applied to cases of religious change. In order to provide such an explanation of this analytic phenomenon, this paper aims to answer one central question: Why is it that a theory that was intended to be used in an analysis of the history of science can be applied to something as disparate as the doctrinal history of religion with little to no modification? By way of answering this question, this paper begins with an explanation of Kuhn’s model and its applications in the field of religious studies. Following this, Massa’s recently proposed explanation for this phenomenon, and its notable flaws, will be explained by way of framing the central proposal of this article, that the operative parts of scientific and religious changes function on the same fundamental concept of changes in understanding. Focusing its argument on this key concept, this paper seeks to illustrate its operation in cases of religious conversion and in Kuhn’s notion of the incommensurability of different scientific paradigms. The conjecture of this paper is that just as a Pagan-turned-Christian ceases to hear Thor’s hammer when they hear a clap of thunder, so too does a Ptolemaic-turned-Copernican astronomer cease to see the Sun orbiting the Earth when they view a sunrise. In both cases, the agent in question has undergone a similar change in universal understanding, which provides us with a fundamental connection between changes in religion and changes in science. Following an exploration of this connection, this paper will consider the implications that such a connection has for the concept of the division between religion and science. This will, in turn, lead to the conclusion that religion and science are more alike than they are opposed with regard to the fundamental notion of understanding, thereby providing an answer to our central question. The major finding of this paper is that Kuhn’s model can be applied to religious cases so easily because changes in science and changes in religion operate on the same type of change in understanding. Therefore, in summary, science and religion share a crucial similarity and are not as disparate as they first appear.

Keywords: Thomas Kuhn, science and religion, paradigm shifts, incommensurability, insight and understanding, philosophy of science, philosophy of religion

Procedia PDF Downloads 170
12 Uncertainty Quantification of Fuel Compositions on Premixed Bio-Syngas Combustion at High-Pressure

Authors: Kai Zhang, Xi Jiang

Abstract:

The effect of fuel variability on the premixed combustion of bio-syngas mixtures is of great importance in bio-syngas utilisation. The uncertainties in the concentrations of fuel constituents such as H2, CO and CH4 may lead to unpredictable combustion performance, combustion instabilities and hot spots which may deteriorate and damage the combustion hardware. Numerical modelling and simulations can assist in understanding the behaviour of bio-syngas combustion with pre-defined species concentrations, while the evaluation of variabilities in concentrations is expensive. To be more specific, questions such as ‘what is the burning velocity of bio-syngas at a specific equivalence ratio?’ have been answered either experimentally or numerically, while questions such as ‘what is the likelihood of the burning velocity when precise concentrations of the bio-syngas compositions are unknown, but the concentration ranges are pre-described?’ have not yet been answered. Uncertainty quantification (UQ) methods can be used to tackle such questions and assess the effects of fuel compositions. An efficient probabilistic UQ method based on Polynomial Chaos Expansion (PCE) techniques is employed in this study. The method relies on representing random variables (combustion performances) with orthogonal polynomials such as Legendre or Gaussian polynomials. The PCE constructed via Galerkin projection provides easy access to global sensitivities such as main, joint and total Sobol indices. In this study, the impacts of fuel compositions on the combustion (adiabatic flame temperature and laminar flame speed) of bio-syngas fuel mixtures are presented using this PCE technique at several equivalence ratios. High-pressure effects on bio-syngas combustion instability are obtained using a detailed chemical mechanism, the San Diego Mechanism. Guidance on reducing combustion instability from the upstream biomass gasification process is provided by quantifying the significant contributions of composition variations to the variance of the physicochemical properties of bio-syngas combustion. It was found that the flame speed is very sensitive to hydrogen variability in bio-syngas, and reducing hydrogen uncertainty from upstream biomass gasification processes can greatly reduce bio-syngas combustion instability. Variation of the methane concentration, although thought to be important, has limited impact on laminar flame instabilities, especially for lean combustion. Further studies on the UQ of the percentage concentration of hydrogen in bio-syngas can be conducted to guide the safer use of bio-syngas.
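To make the PCE/Sobol machinery concrete, here is a deliberately small, hedged Python sketch: a two-input Legendre-polynomial surrogate fitted by least-squares regression (a regression variant rather than the Galerkin projection used in the study), with main Sobol indices read off the coefficients. The toy response function, degree and sample size are invented for illustration and are not the study's combustion model.

```python
import numpy as np
from numpy.polynomial.legendre import legval
from itertools import product

def model(x1, x2):
    # Toy "flame speed" response of two uncertain inputs scaled to [-1, 1].
    return 1.0 + 0.8 * x1 + 0.2 * x2 + 0.3 * x1 * x2

def psi(k, x):
    # Legendre polynomial of degree k, orthonormal w.r.t. the uniform density on [-1, 1].
    c = np.zeros(k + 1); c[k] = 1.0
    return np.sqrt(2 * k + 1) * legval(x, c)

deg = 2
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(500, 2))
y = model(x[:, 0], x[:, 1])

alphas = [a for a in product(range(deg + 1), repeat=2) if sum(a) <= deg]
A = np.column_stack([psi(a1, x[:, 0]) * psi(a2, x[:, 1]) for a1, a2 in alphas])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)        # PCE coefficients

# With an orthonormal basis, the variance is the sum of squared non-constant
# coefficients, and main Sobol indices come from the univariate terms only.
var = sum(c**2 for a, c in zip(alphas, coeffs) if a != (0, 0))
S1 = sum(c**2 for a, c in zip(alphas, coeffs) if a[0] > 0 and a[1] == 0) / var
S2 = sum(c**2 for a, c in zip(alphas, coeffs) if a[1] > 0 and a[0] == 0) / var
print(f"main Sobol indices: S1 = {S1:.3f}, S2 = {S2:.3f}")
```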

Keywords: bio-syngas combustion, clean energy utilisation, fuel variability, PCE, targeted uncertainty reduction, uncertainty quantification

Procedia PDF Downloads 275
11 Assessing Empathy of Delinquent Adolescents

Authors: Stephens Oluyemi Adetunji, Nel Norma Margaret, Naidu Narainsamy

Abstract:

Empathy has been identified by researchers to be a crucial factor in helping adolescents to refrain from delinquent behavior. Adolescent delinquent behavior is a social problem that has become a source of concern to parents, psychologists, educators, correctional services, researchers as well as governments of nations. Empathy is a social skill that enables an individual to understand and to share another’s emotional state. An individual with a high level of empathy will avoid any act or behavior that will affect another person negatively. The need for this study is predicated on the fact that delinquent adolescent behavior could lead to adult criminality. This, in the long run, has the potential of resulting in an increase in the crime rate, thereby threatening public safety. It has therefore become imperative to explore the level of empathy of delinquent adolescents who have committed crimes and are awaiting trial. It is the conjecture of this study that knowledge of the empathy level of delinquent adolescents will provide an opportunity to design an intervention strategy to remediate the deficit. This study was therefore designed to determine the level of empathy of delinquent adolescents. In addition, this study provides a better understanding of factors that may prevent adolescents from developing delinquent behavior, in this case, delinquents’ empathy levels. In the case of participants who have a low level of empathy, remediation strategies to improve their empathy level would be designed. Two research questions were raised to guide this study. A mixed methods research design was employed for the study. The sample consists of fifteen male adolescents who are between 13 and 18 years old, with a mean age of 16.5 years. The participants are adolescents who are awaiting trial. The non-probability sampling technique was used to obtain the sample for the quantitative study, while purposive sampling was used in the case of the qualitative study. A self-report questionnaire and a structured interview were used to assess the level of empathy of participants. The data obtained were analysed using simple percentages for the quantitative data and by transcribing the qualitative data. The result indicates that most of the participants have a low level of empathy. It is also revealed that there is a difference in empathy level between participants whose parents are living together and those whose parents are separated. Based on the findings of this study, it is recommended that the level of empathy of participants be improved through training and by emphasizing the importance of a stimulating family environment for children. It is also recommended that programs such as youth mentoring and youth sheltering be established by the government of South Africa to address the menace of delinquent adolescent behavior.

Keywords: adolescents, behavior, delinquents, empathy

Procedia PDF Downloads 462
10 Altering Surface Properties of Magnetic Nanoparticles with Single-Step Surface Modification with Various Surface Active Agents

Authors: Krupali Mehta, Sandip Bhatt, Umesh Trivedi, Bhavesh Bharatiya, Mukesh Ranjan, Atindra D. Shukla

Abstract:

Owing to the dominating surface forces and large-scale surface interactions, nano-scale particles face difficulties in staying suspended in various media. Magnetic nanoparticles of iron oxide offer a great deal of promise due to their ease of preparation, reasonable magnetic properties, low cost and environmental compatibility. We intend to modify the surface of magnetic Fe₂O₃ nanoparticles with selected surface-modifying agents using simple and effective single-step chemical reactions, in order to enhance the dispersibility of the magnetic nanoparticles in non-polar media. Magnetic particles were prepared by hydrolysis of Fe²⁺/Fe³⁺ chlorides and their subsequent oxidation in aqueous medium. The dried particles were then treated separately with octadecyl quaternary ammonium silane (Terrasil™), stearic acid and the gallic acid ester of stearyl alcohol in ethanol, to yield S-2 to S-4 respectively. The untreated Fe₂O₃ was designated as S-1. The surface-modified nanoparticles were then analysed with Dynamic Light Scattering (DLS), Fourier Transform Infrared spectroscopy (FTIR), X-Ray Diffraction (XRD), Thermogravimetric Analysis (TGA) and Scanning Electron Microscopy with Energy-Dispersive X-Ray analysis (SEM-EDAX). Characterization reveals particle sizes averaging 20-50 nm, with and without modification. However, the crystallite size in all cases remained ~7.0 nm, with the diffractogram matching the Fe₂O₃ crystal structure. FT-IR suggested the presence of surfactants on the nanoparticles’ surface, also confirmed by SEM-EDAX, where elemental mapping proved their presence. TGA indicated weight losses in S-2 to S-4 from 300°C onwards, suggesting the presence of organic moieties. The hydrophobic character of the modified surfaces was confirmed by contact angle analysis: all modified nanoparticles showed super-hydrophobic behaviour, with average contact angles of ~129° for S-2, ~139.5° for S-3 and ~151° for S-4. This indicated that the surface-modified particles are super-hydrophobic and easily dispersible in non-polar media. These modified particles could be ideal candidates to be suspended in oil-based fluids, polymer matrices, etc. We are pursuing elaborate suspension/sedimentation studies of these particles in various oils to establish this conjecture.

Keywords: iron nanoparticles, modification, hydrophobic, dispersion

Procedia PDF Downloads 141
9 Bridging the Gap and Widening the Divide

Authors: Lerato Dixon, Thorsten Chmura

Abstract:

This paper explores whether ethnic identity in Zimbabwe leads to discriminatory behaviour and the degree to which a norm-based intervention can shift this discriminatory behaviour. Social Identity Theory suggests that group identity can lead to favouritism towards the in-group and discriminatory behaviour towards the out-group. Agents yield higher utility from maintaining positive self-esteem by conforming with group behaviour. This paper focuses on the two majority ethnic groups in Zimbabwe – the Ndebele and the Shona. Ethnic identities are synonymous with the language spoken. Zimbabwe’s history highlights how identity formation took place, as, following independence, political parties became recognised as either Ndebele- or Shona-speaking. It is against this backdrop that this study investigates the degree to which a norm-based nudge can alter behaviour. This paper uses experimental methods to analyse discriminatory behaviour between two naturally occurring ethnic groups in Zimbabwe. In addition, we investigate whether social norm-based interventions can shift discriminatory behaviour, to understand whether the divide between these two identity groups can be deepened or healed. Participants are randomly assigned into three groups to receive information regarding a social norm. We compare the effect of a proscriptive social norm-based intervention, stating what should not be done, with that of a prescriptive social norm-based intervention, stating what should be done. Specifically, participants are shown either the socially appropriate (Heal) norm or the socially inappropriate (Divide) norm regarding interethnic marriages, or no norm-based intervention. Following the random assignment into intervention groups, participants take part in the Trust Game. We conjecture that discrimination will shift in accordance with the prevailing social norm. Instead, we find evidence of interethnic discriminatory behaviour. We also find that trust increases when interacting with Ndebele, Shona and Zimbabwean participants following the Heal intervention. However, if the participant is Shona, the Heal intervention decreases trust toward in-groups and Zimbabwean co-players. On the other hand, if the participant is Shona, the Divide treatment significantly increases trust toward Ndebele participants. In summary, we find evidence that norm-based interventions significantly change behaviour. However, the prescriptive norm-based intervention (Heal) decreases trust toward the in-group, out-group and national identity group if the participant is Shona – therefore having an adverse effect. In contrast, the proscriptive Divide treatment increases trust toward Ndebele co-players if the participant is Shona. We conclude that norm-based interventions can have a ‘rebound’ effect by altering behaviour in the opposite direction.

Keywords: discrimination, social identity, social norm-based intervention, Zimbabwe

Procedia PDF Downloads 250
8 An Econometric Analysis of the Flat Tax Revolution

Authors: Wayne Tarrant, Ethan Petersen

Abstract:

The concept of a flat tax goes back to at least the Biblical tithe. A progressive income tax was first vociferously espoused in a small, but famous, pamphlet in 1848 (although England had an emergency progressive tax for war costs prior to this). Within a few years many countries had adopted the progressive structure. The flat tax was only reinstated in some small countries and British protectorates until Mart Laar was elected Prime Minister of Estonia in 1992. Since Estonia’s adoption of the flat tax in 1993, many other formerly Communist countries have likewise abandoned progressive income taxes. Economists had expectations of what would happen when a flat tax was enacted, but very little work has been done on actually measuring the effect. With a testbed of 21 countries in this region that currently have a flat tax, much comparison is possible. Several countries have retained progressive taxes, giving an opportunity for contrast. There are also the cases of the Czech Republic and Slovakia, which adopted and later abandoned the flat tax. Further, with over 20 years’ worth of economic history in some flat tax countries, we can begin to do some serious longitudinal study. In this paper we consider many economic variables to determine if there are statistically significant differences from before to after the adoption of a flat tax. We consider unemployment rates, tax receipts, GDP growth, Gini coefficients, and market data where the data are available. Comparisons are made through the use of event studies and time series methods. The results are mixed, but we draw statistically significant conclusions about some effects. We also look at the different implementations of the flat tax. In some countries there are equal income and corporate tax rates. In others the income tax has a lower rate, while in others the reverse is true. Each of these sends a clear message to individuals and corporations. The policy makers surely have a desired effect in mind. We group countries with similar policies, try to determine if the intended effect actually occurred, and then report the results. This is a work in progress, and we welcome the suggestion of variables to consider. Further, some of the data from before the fall of the Iron Curtain are suspect. Since there are new ruling regimes in these countries, the methods of computing different statistical measures have changed. Although we first look at the raw data as reported, we also attempt to account for these changes. We show which data seem to be fictional and suggest ways to infer the needed statistics from other data. These results are reported beside those on the reported data. Since there is debate about taxation structure, this paper can help inform policymakers of the changes the flat tax has caused in other countries. The work shows some strengths and weaknesses of a flat tax structure. Moreover, it provides the beginnings of a scientific analysis of the flat tax in practice rather than a discussion based solely upon theory and conjecture.
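As a minimal sketch of the kind of before/after comparison described (with entirely invented figures, not the paper's data), one could compare a macro series around the adoption year with a Welch t-test:

```python
import numpy as np
from scipy import stats

# Hypothetical annual GDP growth rates (%) for one country, centred on the
# flat-tax adoption year; the figures below are invented for illustration.
pre_adoption  = np.array([2.1, 1.8, 2.5, 1.9, 2.2])   # five years before
post_adoption = np.array([3.0, 3.4, 2.8, 3.6, 3.1])   # five years after

t_stat, p_value = stats.ttest_ind(post_adoption, pre_adoption, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
# A significant p-value here would only indicate a level shift; the paper's
# event-study and time-series methods control for more than this sketch does.
```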

Keywords: flat tax, financial markets, GDP, unemployment rate, Gini coefficient

Procedia PDF Downloads 339
7 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications

Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini

Abstract:

This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular taking into account the errors, uncertainties, and constraints imposed by the mission, the spacecraft and the onboard processing capabilities. The space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed regarding the required computation time and the impact of sensor and actuator errors based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements used in MPC design are discussed: the prediction model, the constraint formulation and the objective cost function. The prediction models can be linear time-invariant or time-varying depending on the geometry of the orbit, whether it is circular or elliptic. The constraints can be given as linear inequalities for input or output constraints, which can be written in the same form. Moreover, recent convexification techniques for non-convex geometrical constraints (i.e., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on finding in real time the solution to constrained optimization problems, computational aspects are also examined. In particular, high-speed implementation capabilities and HIL challenges are presented with respect to representative space avionics. This covers an analysis of future space processors as well as the requirements of sensors and actuators for the HIL experiment outputs. The HIL tests are investigated for kinematic and dynamic tests, where robotic arms and floating robots are used respectively. Eventually, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with a conjecture that the MPC paradigm is a promising framework at the crossroads of space applications and could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps.
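A deliberately minimal sketch of the constrained-MPC formulation discussed above (a generic double-integrator stand-in, not the paper's spacecraft model; the horizon, weights, and bounds are arbitrary):

```python
import numpy as np
import cvxpy as cp

# Double-integrator stand-in for one-axis relative position control.
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
N = 20                      # prediction horizon (illustrative)
x0 = np.array([10.0, 0.0])  # start 10 m away, at rest
u_max = 0.1                 # acceleration bound (m/s^2), illustrative

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k + 1]) + 10 * cp.sum_squares(u[:, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= u_max]

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("first control move:", u.value[:, 0])
# In a receding-horizon loop, only this first move is applied and the
# optimization is re-solved at the next step with the measured state.
```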

Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy

Procedia PDF Downloads 110
6 Interfacial Instability and Mixing Behavior between Two Liquid Layers Bounded in Finite Volumes

Authors: Lei Li, Ming M. Chai, Xiao X. Lu, Jia W. Wang

Abstract:

The mixing process of two liquid layers in a cylindrical container involves the upper liquid with higher density rushing into the lower liquid with lower density, the lower liquid rising into the upper liquid, and meanwhile the two liquid layers interacting with each other, forming vortices, spreading or dispersing into one another, and entraining or mixing with one another. It is a complex process constituted of flow instability, turbulent mixing and other multiscale physical phenomena, and it evolves rapidly. In order to explore the mechanism of the process and make further investigations, some experiments on the interfacial instability and mixing behavior between two liquid layers bounded in different volumes are carried out, applying the planar laser induced fluorescence (PLIF) and high speed camera (HSC) techniques. According to the results, the evolution of interfacial instability between immiscible liquids develops faster than the theoretical rate given by Rayleigh-Taylor Instability (RTI) theory. It is reasonable to conjecture that some mechanisms other than RTI play key roles in the mixing process of the two liquid layers. The results show that the invading velocity of the upper liquid into the lower liquid does not depend on the upper liquid's volume (height). Compared to the cases in which the upper and lower containers are of identical diameter, when the lower liquid volume increases to a larger geometric space, the upper liquid spreads and expands into the lower liquid more quickly during the evolution of interfacial instability, indicating that the container wall has an important influence on the mixing process. In the experiments on miscible liquid layers’ mixing, the diffusion time and pattern of the liquid interfacial mixing also do not depend on the upper liquid's volume, and when the lower liquid volume increases to a larger geometric space, the action of the bounding wall on the falling and rising liquid flow decreases, and the liquid interfacial mixing effects also attenuate. Therefore, it is also concluded that the volume (weight) of the upper, heavier liquid is not the reason for the fast interfacial instability evolution between the two liquid layers, and the action of the bounding wall on the unstable and mixing flow is limited. Numerical simulations of the immiscible liquid layers’ interfacial instability flow using the VOF method show a typical flow pattern that agrees with the experiments; however, the calculated instability development is much slower than the experimental measurement. The numerical simulation of the miscible liquids’ mixing, which applies Fick’s diffusion law in the components’ transport equation, shows a much faster mixing rate than the experiments at the liquids’ interface at the initial stage. It can be presumed that the interfacial tension plays an important role in the interfacial instability between the two liquid layers bounded in finite volume.
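For reference, the classical RTI growth rate that the measured evolution is compared against is, in its simplest inviscid form (a standard result, not the paper's specific model):

```latex
% Linear growth rate of a Rayleigh-Taylor unstable mode of wavenumber k:
\sigma \;=\; \sqrt{A\,g\,k},
\qquad
A \;=\; \frac{\rho_\text{heavy} - \rho_\text{light}}{\rho_\text{heavy} + \rho_\text{light}},
% where A is the Atwood number and g the gravitational acceleration;
% interfacial tension and viscosity reduce this rate at large k.
```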

Keywords: interfacial instability and mixing, two liquid layers, Planar Laser Induced Fluorescence (PLIF), High Speed Camera (HSC), interfacial energy and tension, Cahn-Hilliard Navier-Stokes (CHNS) equations

Procedia PDF Downloads 248
5 A Formal Microlectic Framework for Biological Circularchy

Authors: Ellis D. Cooper

Abstract:

“Circularchy” is supposed to be an adjustable formal framework with enough expressive power to articulate biological theory about Earthly Life in the sense of multi-scale biological autonomy constrained by non-equilibrium thermodynamics. “Formal framework” means specifically a multi-sorted first-order theory with equality (for each sort). Philosophically, such a theory is one kind of “microlect,” which means a “way of speaking” (or, more generally, a “way of behaving”) for overtly expressing a “mental model” of some “referent.” Other kinds of microlect include “natural microlect,” “diagrammatic microlect,” and “behavioral microlect,” with examples such as “political theory,” “Euclidean geometry,” and “dance choreography,” respectively. These are all describable in terms of a vocabulary conforming to grammar. As aspects of human culture, they are possibly reminiscent of Ernst Cassirer’s idea of “symbolic form;” as vocabularies, they are akin to Richard Rorty’s idea of “final vocabulary” for expressing a mental model of one’s life. A formal microlect is presented by stipulating sorts, variables, calculations, predicates, and postulates. Calculations (a.k.a., “terms”) may be composed to form more complicated calculations; predicates (a.k.a., “relations”) may be logically combined to form more complicated predicates; and statements (a.k.a., “sentences”) are grammatically correct expressions which are true or false. Conclusions are statements derived using logical rules of deduction from postulates, other assumed statements, or previously derived conclusions. A circularchy is a formal microlect constituted by two or more sub-microlects, each with its distinct stipulations of sorts, variables, calculations, predicates, and postulates. Within a sub-microlect some postulates or conclusions are equations, which are statements that declare equality of specified calculations. An equational bond between an equation in one sub-microlect and an equation in either the same sub-microlect or in another sub-microlect is a predicate that declares equality of symbols occurring in a side of one equation with symbols occurring in a side of the other equation. Briefly, a circularchy is a network of equational bonds between sub-microlects. A circularchy is solvable if there exist solutions for all equations that satisfy all equational bonds. If a circularchy is not solvable, then a challenge would be to discover the obstruction to solvability and then conjecture what adjustments might remove the obstruction. Adjustment means changes in stipulated ingredients (sorts, etc.) of sub-microlects, or changes in equational bonds between sub-microlects, or introduction of new sub-microlects and new equational bonds. A circularchy is modular insofar as each sub-microlect is a node in a network of equational bonds. Solvability of a circularchy may be conjectured. Efforts to prove solvability may be thwarted by a counter-example or may lead to the construction of a solution. An automated theorem-proof assistant would likely be necessary for investigating a substantial circularchy, such as one purported to represent Earthly Life. Such investigations (chains of statements) would be concurrent with and no substitute for simulations (chains of numbers).
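Purely as an illustration of the "network of equational bonds" idea (the symbols and equations below are invented, not from the paper), one could represent two toy sub-microlects and a bond symbolically and test solvability:

```python
import sympy as sp

a, b = sp.symbols("a b")        # symbols of sub-microlect 1
u, v = sp.symbols("u v")        # symbols of sub-microlect 2

microlect_1 = [sp.Eq(a + b, 4), sp.Eq(a - b, 2)]   # equations of sub-microlect 1
microlect_2 = [sp.Eq(u * v, 3)]                    # equations of sub-microlect 2
bonds = [sp.Eq(a, u)]           # equational bond identifying a symbol of each

circularchy = microlect_1 + microlect_2 + bonds
solutions = sp.solve(circularchy, [a, b, u, v], dict=True)

# The toy circularchy is "solvable" if all equations and bonds can be satisfied.
print("solvable:", bool(solutions))
print(solutions)                # e.g. a = 3, b = 1, u = 3, v = 1
```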

Keywords: autonomy, first-order theory, mathematics, thermodynamics

Procedia PDF Downloads 220
4 Genome-Wide Homozygosity Analysis of the Longevous Phenotype in the Amish Population

Authors: Sandra Smieszek, Jonathan Haines

Abstract:

Introduction: Numerous research efforts have focused on searching for ‘longevity genes’. However, attempts to decipher the genetic component of the longevous phenotype have resulted in limited success, and the mechanisms governing longevity remain to be explained. We conducted a genome-wide homozygosity analysis (GWHA) of the founder population of the Amish community in central Ohio. While genome-wide association studies using unrelated individuals have revealed many interesting longevity-associated variants, these variants are typically of small effect and cannot explain the observed patterns of heritability for this complex trait. The Amish provide a large cohort of extended kinships allowing for in-depth analysis via a family-based approach, making them an excellent population for such a study. The heritability of longevity increases with age, with a significant genetic contribution being seen in individuals living beyond 60 years of age. In our present analysis we show that the heritability of longevity is estimated to be increasing with age, particularly on the paternal side. Methods: The present analysis integrated both phenotypic and genotypic data and led to the discovery of a series of variants, distinct for populations stratified across ages and distinct for paternal and maternal cohorts. Specifically, 5437 subjects were analyzed, and a subset of 893 successfully genotyped individuals was used to assess chip heritability. We conducted the homozygosity analysis to examine whether homozygosity is associated with an increased likelihood of living beyond 90. We analyzed the Amish cohort genotyped for 614,957 SNPs. Results: We delineated 10 significant regions of homozygosity (ROH) specific to the age group of interest (>90). Of particular interest was an ROH on chromosome 13, P < 0.0001. The lead SNPs rs7318486 and rs9645914 point to COL4A2 and COL25A1. COL4A2 encodes one of the six subunits of type IV collagen; the C-terminal portion of the protein, known as canstatin, is an inhibitor of angiogenesis and tumor growth. COL4A2 mutations have been reported with a broader spectrum of cerebrovascular, renal, ophthalmological, cardiac, and muscular abnormalities. The second region of interest points to IRS2. Furthermore, we built a classifier using the SNPs obtained from the significant ROH regions, with an AUC of 0.945, giving the ability to discriminate between those living beyond 90 years of age and those who do not. Conclusion: In conclusion, our results suggest that a history of longevity does indeed contribute to increasing the odds of individual longevity. Preliminary results are consistent with the conjecture that the heritability of longevity is substantial when we start looking at the oldest fifth and smaller percentiles of survival, specifically in males. We will validate all the candidate variants in independent cohorts of centenarians to test whether they are robustly associated with human longevity. The regions of interest identified via the ROH analysis could be of profound importance for understanding the genetic underpinnings of longevity.
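As a schematic of the classifier-evaluation step described (with synthetic genotype data standing in for the actual SNP matrix; the feature counts, effect sizes and model choice are illustrative, not the study's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_subjects, n_snps = 800, 40              # synthetic stand-in for ROH SNPs
X = rng.integers(0, 3, size=(n_subjects, n_snps)).astype(float)  # 0/1/2 genotypes
# Synthetic longevity label weakly driven by a few SNPs (illustration only).
logit = X[:, :5] @ np.array([0.6, 0.5, 0.4, 0.3, 0.3]) - 2.0
y = (rng.random(n_subjects) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.3f}")
```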

Keywords: regions of homozygosity, longevity, SNP, Amish

Procedia PDF Downloads 232
3 Co-Culture with Murine Stromal Cells Enhances the In-vitro Expansion of Hematopoietic Stem Cells in Response to Low Concentrations of Trans-Resveratrol

Authors: Mariyah Poonawala, Selvan Ravindran, Anuradha Vaidya

Abstract:

Despite much progress in understanding the regulatory factors and cytokines that support the maturation of the various cell lineages of the hematopoietic system, the factors that govern the self-renewal and proliferation of hematopoietic stem cells (HSCs) are still a grey area of research. Hematopoietic stem cell transplantation (HSCT) has evolved over the years and gained tremendous importance in the treatment of both malignant and non-malignant diseases. However, factors such as graft rejection and multiple organ failure have challenged HSCT from time to time, underscoring the urgent need for the development of milder processes for successful hematopoietic transplantation. An emerging concept in the field of stem cell biology states that the interactions between the bone-marrow micro-environment and the hematopoietic stem and progenitor cells are essential for regulation, maintenance, commitment and proliferation of stem cells. Understanding the role of mesenchymal stromal cells in modulating the functionality of HSCs is, therefore, an important area of research. Trans-resveratrol has been extensively studied for its various properties to combat and prevent cancer, diabetes, cardiovascular diseases, etc. The aim of the present study was to understand the effect of trans-resveratrol on HSCs using single and co-culture systems. We have used KG1a cells since they are a well-accepted hematopoietic stem cell model system. Our preliminary experiments showed that low concentrations of trans-resveratrol stimulated the HSCs to undergo proliferation, whereas high concentrations of trans-resveratrol did not stimulate the cells to proliferate. We used a murine fibroblast cell line, M210B4, as a stromal feeder layer. On culturing the KG1a cells with M210B4 cells, we observed that the stimulatory as well as inhibitory effects of trans-resveratrol at low and high concentrations, respectively, were enhanced. Our further experiments showed that low concentrations of trans-resveratrol reduced the generation of reactive oxygen species (ROS) and nitric oxide (NO), whereas high concentrations increased the oxidative stress in KG1a cells. We speculated that perhaps the oxidative stress was imposing inhibitory effects at high concentrations, and the same was confirmed by performing an apoptotic assay. Furthermore, cell cycle analysis and growth kinetic experiments provided evidence that low concentrations of trans-resveratrol reduced the doubling time of the cells. Our hypothesis is that perhaps at low concentrations of trans-resveratrol the cells get pushed into the G0/G1 phase and re-enter the cell cycle, resulting in their proliferation, whereas at high concentrations the cells are perhaps arrested at the G2/M phase or at cytokinesis and therefore undergo apoptosis. Liquid Chromatography-Quadrupole-Time of Flight Mass Spectrometry (LC-Q-TOF MS) analyses indicated the presence of trans-resveratrol and its metabolite(s) in the supernatant of the co-cultured cells incubated with a high concentration of trans-resveratrol. We conjecture that perhaps the metabolites of trans-resveratrol are responsible for the apoptosis observed at the high concentration. Our findings may shed light on the unsolved problems in the in vitro expansion of stem cells and may have implications in the ex vivo manipulation of HSCs for therapeutic purposes.

Keywords: co-culture system, hematopoietic micro-environment, KG1a cell line, M210B4 cell line, trans-resveratrol

Procedia PDF Downloads 256
2 Medical Examiner Collection of Comprehensive, Objective Medical Evidence for Conducted Electrical Weapons and Their Temporal Relationship to Sudden Arrest

Authors: Michael Brave, Mark Kroll, Steven Karch, Charles Wetli, Michael Graham, Sebastian Kunz, Dorin Panescu

Abstract:

Background: Conducted electrical weapons (CEW) are now used in 107 countries and are a common law enforcement less-lethal force practice in the United Kingdom (UK), United States of America (USA), Canada, Australia, New Zealand, and others. Use of these devices is rarely temporally associated with the occurrence of sudden arrest-related deaths (ARD). Because such deaths are uncommon, few Medical Examiners (MEs) ever encounter one, and even fewer offices have established comprehensive investigative protocols. Without sufficient scientific data, the role, if any, played by a CEW in a given case is largely supplanted by conjecture, often defaulting to a CEW-induced fatal cardiac arrhythmia. In addition to the difficulty in investigating individual deaths, the lack of information also detrimentally affects being able to define and evaluate the ARD cohort generally. More comprehensive, better information leads to better interpretation in individual cases and also to better research. The purpose of this presentation is to provide MEs with a comprehensive evidence-based checklist to assist in the assessment of CEW-ARD cases. Methods: PUBMED and sociology/criminology databases were queried to find all medical, scientific, electrical, modeling, engineering, and sociology/criminology peer-reviewed literature for mentions of CEW or synonymous terms. Each paper was then individually reviewed to identify those that discussed possible bioelectrical mechanisms relating CEW to ARD. A Naranjo-type pharmacovigilance algorithm was also employed, when relevant, to identify and quantify possible direct CEW electrical myocardial stimulation. Additionally, CEW operational manuals and training materials were reviewed to allow incorporation of CEW-specific technical parameters. Results: Total relevant PUBMED citations of CEWs were fewer than 250, and reports of death extremely rare. Much relevant information was available from sociology/criminology databases. Once the relevant published papers were identified and reviewed, we compiled an annotated checklist of data that we consider critical to a thorough CEW-involved ARD investigation. Conclusion: We have developed an evidence-based checklist that can be used by MEs and their staffs to assist them in identifying, collecting, documenting, maintaining, and objectively analyzing the role, if any, played by a CEW in any specific case of sudden death temporally associated with the use of a CEW. Even in cases where the collected information is deemed by the ME as insufficient for formulating an opinion or diagnosis to a reasonable degree of medical certainty, information collected as per the checklist will often be adequate for other stakeholders to use as a basis for informed decisions. In a significant number of cases where the appropriate materials have been reviewed, careful examination of the heart and brain is likely adequate. Channelopathy testing should be considered in some cases; however, it may be considered cost-prohibitive (approx. $3,000). Law enforcement agencies may want to consider establishing a reserve fund to help manage such rare cases. The expense may forestall the enormous costs associated with incident-precipitated litigation.

Keywords: ARD, CEW, police, TASER

Procedia PDF Downloads 346
1 Financial Policies in the Process of Global Crisis: Case Study Kosovo

Authors: Shpetim Rezniqi

Abstract:

The current crisis has swept the world, with a special impact on the most developed countries, those which account for most of the world's gross product and have a high standard of living. Even those who are not experts can describe the consequences of the crisis that are already visible, but how far this crisis will go is impossible to predict. Even the greatest experts offer only conjecture and diverge widely, but they agree on one thing: the devastating effects of this crisis will be more severe than ever before and cannot be predicted. For a long time, the world was dominated by the economic theory of free-market laws, with the belief that the market is the regulator of all economic problems: like river water, the market will flow until it finds the best path and the necessary solution. Hence fewer state barriers to the market, less state intervention, and the market itself as an economic self-regulator. The free-market economy became the model of global economic development and progress; it transcended national barriers and became the law of development of the entire world economy. Globalization and global market freedom were the principles of development and international cooperation. International organizations such as the World Bank, together with economically powerful states, laid down development and cooperation principles based on the free-market economy and the elimination of state intervention. The less state intervention, the more freedom of action for the market – this was the leading international principle. We live in an era of financial tragedy. Financial markets, and banking in particular, are in a dire state: US stock markets fell by about 40%, in other words, this has been one of the darkest moments since 1920. Only the Wall Street crash of 1929, the technological collapse of 2000, the crisis of 1973 after the Yom Kippur War, when the price of oil quadrupled, and the famous collapse of 1937/38, when Europe was on the brink of World War II, rank ahead of it. In 2000, even though it seemed the end of the world was around the corner, the world economy survived almost intact. Of course, there were small recessions in the United States, Europe, and Japan. The situation was much more difficult in the crises of the 1930s and 1970s; nevertheless, the world pulled through. The recent financial crisis, however, shows every sign of being much sharper and having more consequences. The decline in stock prices is more a byproduct of what is really happening: financial markets began a dance of death with the credit crisis, which came as a result of the large increase in real-estate prices and household debt. These last two phenomena can be matched very well with the excesses of the 1920s, a period during which people spent as if there were no tomorrow. The word recession is never far from anyone's lips, and that is no longer sudden or abrupt. But the more the financial markets melt down, the greater the risk of a troubled economy for years to come. Thus, for example, the banking crisis in Japan proved to be much more severe than initially expected, partly because the assets on which many loans were based, especially land, kept falling in value. Land prices in Japan have continued to fall for about 15 years (Adri Nurellari, published in the newspaper "Classifieds"). At this moment, it is still difficult to assess to what extent the crisis has affected the economy and what the consequences of the crisis will be. What we know is that many banks will need more time before they resume granting credit, but granting credit is a bank's primary function, and this means huge losses.

Keywords: globalisation, finance, crisis, recommendation, bank, credits

Procedia PDF Downloads 389