Search results for: loss distribution approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20267


16367 Lies and Pretended Fairness of Police Officers in Sharing

Authors: Eitan Elaad

Abstract:

The current study aimed to examine lying and pretended fairness by police personnel in sharing situations. Forty Israeli police officers and 40 laypeople from the community, all males, self-assessed their lie-telling ability, rated the frequency of their lies, evaluated the acceptability of lying, and indicated using rational and intuitive thinking while lying. Next, according to the ultimatum game procedure, participants were asked to share 100 points with an imagined target, either a male policeman or a male non-policeman. Participants allocated points to the target person bearing in mind that the other person must accept or reject their offer. Participants' goal was to retain as many points as possible, and to this end, they could tell the target person that fewer than 100 points were available for distribution. We defined concealment or lying as the difference between the available 100 points and the sum of points designated for sharing. Results indicated that police officers lied less to their fellow police targets than to non-police targets, whereas laypeople lied less to non-police targets than to imagined police targets. The ratio between the points offered to the imagined target person and the points the participant declared available for sharing defined pretended fairness. Enhanced pretended fairness indicates higher motivation to display fair sharing even if the fair sharing is fictitious. Police officers presented higher pretended fairness to police targets than laypeople did, whereas laypeople displayed more fairness to non-police targets than police officers did. We discuss the results in terms of occupational solidarity and loyalty among police personnel. Specifically, police work involves uncertainty, danger and risk, coercive authority, and the use of force, which isolates the police from the community and dictates strong bonds of solidarity between police personnel. 
No wonder police officers shared more points with (lied less to) fellow police targets than with non-police targets. On the other hand, police legitimacy, or the belief that the police act honestly in the best interest of the citizens, shapes citizens' attitudes toward the police. The relatively low number of points laypeople declared available for distribution to police targets indicates difficulties with the legitimacy of the Israeli police.
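The two measures defined above reduce to simple arithmetic; a minimal sketch follows, with variable names that are illustrative and not taken from the study's materials:

```python
def lying_and_fairness(declared, offered, available=100):
    """Compute the study's two measures for one participant.

    declared: the sum the participant tells the target is available
    for sharing; offered: points allocated to the target.
    """
    lying = available - declared             # concealed points
    pretended_fairness = offered / declared  # offered share of the declared pie
    return lying, pretended_fairness
```

For example, a participant who declares 80 of the 100 points and offers 40 of them conceals 20 points while appearing to split the (declared) pie evenly.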

Keywords: lying, fairness, police solidarity, police legitimacy, sharing, ultimatum game

Procedia PDF Downloads 104
16366 Profit Share in Income: An Analysis of Its Influence on Macroeconomic Performance

Authors: Alain Villemeur

Abstract:

The relationships between the profit share in income on the one hand and the growth rates of output and employment on the other hand have been studied for 17 advanced economies since 1961. The vast majority (98%) of annual values for the profit share fall between 20% and 40%, with an average value of 33.9%. For the 17 advanced economies, Gross Domestic Product and productivity growth rates tend to fall as the profit share in income rises. For the employment growth rates, the relationships are complex; nevertheless, over long periods (1961-2000), it appears that the most job-creating economies are Australia, Canada, and the United States; they have experienced a profit share close to 1/3. This raises a number of questions, not least the value of 1/3 for the profit share and its role in macroeconomic fundamentals. To explain these facts, an endogenous growth model is developed. This growth and distribution model reconciles the great ideas of Kaldor (economic growth as a chain reaction), Keynes (effective demand and marginal efficiency of capital) and Ricardo (importance of the wage-profit distribution) in an economy facing creative destruction. A production function is obtained, depending mainly on the growth of employment, the rate of net investment and the profit share in income. In theory, we show the existence of incentives: an incentive for job creation when the profit share is less than 1/3 and an incentive for job destruction in the opposite case. Thus, increasing the profit share can boost the employment growth rate until the share reaches 1/3; beyond that value, further increases lower the employment growth rate. Three key findings can be drawn from these considerations. The first reveals that the best GDP and productivity growth rates are obtained with a profit share of less than 1/3. 
The second is that maximum job growth is associated with a 1/3 profit share, given the existence of incentives to create more jobs when the profit share is less than 1/3 and to destroy more jobs otherwise. The third is the decline in performance (GDP growth rate and productivity growth rate) when the profit share increases. In conclusion, increasing the profit share in income weakens GDP growth or productivity growth as a long-term trend, contrary to the trickle-down hypothesis. The employment growth rate is maximal for a profit share in income of 1/3. All these lessons suggest that macroeconomic policies should take the profit share in income into account.

Keywords: advanced countries, GDP growth, employment growth, profit share, economic policies

Procedia PDF Downloads 46
16365 Corporate Governance and Corporate Social Responsibility: Research on the Interconnection of Both Concepts and Its Impact on Non-Profit Organizations

Authors: Helene Eller

Abstract:

The aim of non-profit organizations (NPOs) is to provide services and goods for their clientele, with profit being a minor objective. With this definition as the basic purpose of doing business, it is obvious that the goal of an organisation is to serve several bottom lines and not only the financial one. This approach is underpinned by the non-distribution constraint, which means that NPOs are allowed to make profits to a certain extent, but not to distribute them. The advantage is that there are no single shareholders who might have an interest in the prosperity of the organisation: there is no pie to divide. The gained profits remain within the organisation and will be reinvested in purposeful projects. Good governance is mandatory to support the aim of NPOs. When looking for a measure of good governance, the principles of corporate governance (CG) come to mind. The purpose of CG is direction and control, and in the field of NPOs, CG is enlarged to consider the relationship to all important stakeholders who have an impact on the organisation. The recognition of more relevant parties than the shareholder is the link to corporate social responsibility (CSR). It supports a broader view of the bottom line: it is no longer enough to know how profits are used but rather how they are made. Moreover, CSR addresses the responsibility of organisations for their impact on society. When transferring the concept of CSR to the non-profit area, it becomes obvious that CSR, with its distinctive features, matches the aims of NPOs. As a consequence, NPOs that apply CG also apply CSR to a certain extent. The research is designed as a comprehensive theoretical and empirical analysis. First, the investigation focuses on the theoretical basis of both concepts. Second, the similarities and differences are outlined, revealing the interconnection of the two concepts. 
The contribution of this research is manifold: the interconnection of the two concepts as applied to NPOs has not yet received attention in the literature. CSR and governance as an integrated concept provide many advantages for NPOs compared with for-profit organisations, which are under constant pressure to justify the impact they might have on society. NPOs, however, integrate economic and social aspects as their starting point. For NPOs, CG is not a mere concept of compliance but rather an enhanced concept integrating many aspects of CSR. There is no “either-or” between the concepts for NPOs.

Keywords: business ethics, corporate governance, corporate social responsibility, non-profit organisations

Procedia PDF Downloads 228
16364 Exploring a Cross-Sectional Analysis Defining Social Work Leadership Competencies in Social Work Education and Practice

Authors: Trevor Stephen, Joshua D. Aceves, David Guyer, Jona Jacobson

Abstract:

As a profession, social work has much to offer individuals, groups, and organizations. A multidisciplinary approach to understanding and solving complex challenges and a commitment to developing and training ethical practitioners outline the characteristics of a profession embedded with leadership skills. This presentation will give an overview of the historical context of social work leadership; examine social work as a unique leadership model, composed of the qualities and theories that inform effective leadership capability as it relates to our code of ethics; reflect critically on leadership theories and compare their foundations; and, finally, look at recommendations for implementation in social work education and practice. As with leadership in general, there is no universally accepted definition of social work leadership. However, some distinct traits and characteristics are essential. Recent studies help set the stage for this research proposal because they measure views on effective social work leadership among social work and non-social work leaders and followers. However, this research is interested in working backward from that approach and examining social workers' perspectives on leadership preparedness based solely on social work training, competencies, values, and ethics. Social workers understand how to change complex structures and challenge resistance to change to improve the well-being of organizations and those they serve. Furthermore, previous studies align with the idea of practitioners assessing their skill and capacity to engage in leadership but not to lead. In addition, this research is significant because it explores aspiring social work leaders' competence to translate social work practice into direct leadership skills. The research question seeks to answer whether social work training and competencies are sufficient to determine whether social workers believe they possess the capacity and skill to engage in leadership practice. 
Aim 1: Assess whether social workers have the capacity and skills to assume leadership roles. Aim 2: Evaluate how the development of social workers is sufficient in defining leadership. This research intends to reframe the misconception that social workers do not possess the capacity and skills to be effective leaders. On the contrary, social work encompasses a framework dedicated to lifelong development and growth. Social workers must be skilled, competent, ethical, supportive, and empathic. These are all qualities and traits of effective leadership, and leaders work in relation with others, embodying partnership and collaboration with followers and stakeholders. The proposed study is a cross-sectional quasi-experimental survey design that will include the distribution of a multi-level social work leadership model and assessment tool. The assessment tool aims to help define leadership in social work using a Likert scale model. A cross-sectional research design is appropriate for answering the research questions because the measurement survey will help gather data using a structured tool. Apart from the proposed social work leadership measurement tool, there is no mechanism based on social work theory and designed to measure the capacity and skill of social work leadership.

Keywords: leadership competencies, leadership education, multi-level social work leadership model, social work core values, social work leadership, social work leadership education, social work leadership measurement tool

Procedia PDF Downloads 154
16363 Situated Urban Rituals: Rethinking the Meaning and Practice of Micro Culture in Cities in East Asia

Authors: Heide Imai

Abstract:

Contemporary cities, especially in Japan, have reached an indescribable complexity, and excessive global investments blur formal, rooted structures. Modern urban agglomerations blindly trust a macro understanding, whereas everyday activities, which portray the human degree of living space, are being suppressed and erased. The paper draws upon the approach of ‘Micro-Urbanism’, which focuses on the sensitive and indigenous side of contemporary cities, which in fact can hold the authentic qualities of a city. Related to this approach is the term ‘Micro-Culture’, which is used to clarify the inner realities of everyday living space through the example of the Japanese urban backstreet. The paper identifies an example of a ‘micro-zone’ in terms of ‘street space’, originally embedded in the landscape of the Japanese city. Although the approach of ‘Micro-Urbanism’ is more complex, the understanding of the term can be approached through a social analysis of the street, as shown in the backstreet called roji and closely linked examples of ‘situated’ urban rituals such as (1) urban festivities, (2) local markets/street vendors and (3) artistic, intellectual tactics. Likewise, the paper offers insights into a ‘community of streets’ whose boundaries are shaped especially by cultural activity and social networks.

Keywords: urban rituals, community, streets as micro-zone, everyday space

Procedia PDF Downloads 292
16362 Influence of Irregularities in Plan and Elevation

Authors: Houmame Benbouali

Abstract:

Architectural requirements often impose shapes that lead to an irregular distribution of masses, rigidities and resistances. The main object of the present study is to estimate the influence of the irregularity, both in plan and in elevation, that some structures present on their dynamic characteristics and on the behavior of these structures. To do this, it is necessary to apply both dynamic methods proposed by the RPA99 (the spectral modal method and the accelerogram analysis method) to certain similar prototypes, to analyze the parameters measuring the response of these structures, and to proceed to a comparison of the results.

Keywords: irregularity, seismic, response, structure, ductility

Procedia PDF Downloads 358
16361 On q-Non-extensive Statistics with Non-Tsallisian Entropy

Authors: Petr Jizba, Jan Korbel

Abstract:

We combine an axiomatics of Rényi with the q-deformed version of Khinchin axioms to obtain a measure of information (i.e., entropy) which accounts both for systems with embedded self-similarity and non-extensivity. We show that the entropy thus obtained is uniquely solved in terms of a one-parameter family of information measures. The ensuing maximal-entropy distribution is phrased in terms of a special function known as the Lambert W-function. We analyze the corresponding ‘high’ and ‘low-temperature’ asymptotics and reveal a non-trivial structure of the parameter space.
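Since the maximal-entropy distribution is phrased in terms of the Lambert W function, a minimal sketch of evaluating W numerically may be helpful. The paper's specific distribution is not reproduced here; the Newton iteration below is a generic implementation of the principal branch for non-negative arguments:

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch W0 of the Lambert W function for x >= 0,
    i.e. the solution of w * exp(w) = x, found by Newton iteration
    on f(w) = w*exp(w) - x."""
    w = math.log1p(x)  # reasonable starting guess for x >= 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w
```

As a sanity check, W(e) = 1, since 1 * e^1 = e.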

Keywords: multifractals, Rényi information entropy, THC entropy, MaxEnt, heavy-tailed distributions

Procedia PDF Downloads 427
16360 Application of Large Eddy Simulation-Immersed Boundary Volume Penalization Method for Heat and Mass Transfer in Granular Layers

Authors: Artur Tyliszczak, Ewa Szymanek, Maciej Marek

Abstract:

Flow through granular materials is important to a vast array of industries, for instance in the construction industry, where granular layers are used for bulkheads and isolators; in chemical engineering and catalytic reactors, where large surfaces of packed granular beds intensify chemical reactions; or in energy production systems, where granulates are promising materials for heat storage and heat transfer media. Despite the common usage of granulates and extensive research performed in this field, phenomena occurring between granular solid elements or between solids and fluid are still not fully understood. In the present work we analyze the heat exchange process between the flowing medium (gas, liquid) and solid material inside granular layers. We consider them as a composite of isolated solid elements and inter-granular spaces in which a gas or liquid can flow. The structure of the layer is controlled by the shapes of particular granular elements (e.g., spheres, cylinders, cubes, Raschig rings), their spatial distribution, or the effective characteristic dimension (total volume or surface area). We analyze to what extent alteration of these parameters influences the flow characteristics (turbulent intensity, mixing efficiency, heat transfer) inside the layer and behind it. Analysis of flow inside granular layers is very complicated because the use of classical experimental techniques (LDA, PIV, fiber probes) inside the layers is practically impossible, whereas the use of probes (e.g. thermocouples, Pitot tubes) requires drilling holes in the solid material. Hence, measurements of the flow inside granular layers are usually performed using, for instance, advanced X-ray tomography. In this respect, theoretical or numerical analyses of flow inside granulates seem crucial. 
Application of discrete element methods in combination with classical finite volume/finite difference approaches is problematic, as the mesh generation process for complex granular material can be very arduous. A good alternative for simulation of flow in complex domains is immersed boundary-volume penalization (IB-VP), in which the computational meshes have a simple Cartesian structure and the impact of solid objects on the fluid is mimicked by source terms added to the Navier-Stokes and energy equations. The present paper focuses on application of the IB-VP method combined with large eddy simulation (LES). The flow solver used in this work is a high-order code (SAILOR), which was used previously in various studies, including laminar/turbulent transition in free flows and also flows in wavy channels, wavy pipes and over obstacles of various shapes. In these cases a formal order of approximation turned out to be between 1 and 2, depending on the test case. The current research concentrates on analyses of flows in dense granular layers with elements distributed in a deterministic regular manner and on validation of the results obtained using the LES-IB method against a body-fitted approach. The comparisons are very promising and show very good agreement. It is found that the size, number of elements and their distribution have a huge impact on the obtained results. Ordering of the granular elements (or lack of it) affects both the pressure drop and the efficiency of the heat transfer, as it significantly changes the mixing process.
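The volume-penalization idea described above, a stiff source term driving the fluid field toward the solid value inside masked cells, can be sketched in one dimension. The mask, penalization parameter, and time-stepping below are illustrative assumptions, not the high-order 3-D scheme of the SAILOR solver:

```python
import numpy as np

def penalized_diffusion_step(u, mask, dt, dx, nu=0.1, eta=1e-3, u_solid=0.0):
    """One explicit step of 1-D diffusion with Brinkman-type volume
    penalization: in solid cells (mask == 1) the stiff source term
    -(mask/eta)*(u - u_solid) drives the field toward the solid value,
    mimicking the obstacle on a plain Cartesian grid without a
    body-fitted mesh."""
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return u + dt * (nu * lap - (mask / eta) * (u - u_solid))
```

After a few hundred steps, the field inside the masked region relaxes to (approximately) the prescribed solid value while the surrounding fluid cells evolve by ordinary diffusion.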

Keywords: granular layers, heat transfer, immersed boundary method, numerical simulations

Procedia PDF Downloads 116
16359 Sliding Mode Control of an Internet Teleoperated PUMA 600 Robot

Authors: Abdallah Ghoul, Bachir Ouamri, Ismail Khalil Bousserhane

Abstract:

In this paper, we have developed a sliding mode controller for the PUMA 600 manipulator robot; to control the remote robot, a teleoperation system was developed. This system includes two sites, local and remote. The sliding mode controller is installed at the remote site. The client asks for a position through an interface and receives the real positions after the remote robot runs the task. Both sites are interconnected via the Internet. In order to verify the effectiveness of the sliding mode controller, it is compared with a classic PID controller. The developed approach is tested on a virtual robot. The results confirm the high performance of this approach.
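A sliding mode control law of the kind described can be sketched for a single joint. The sliding surface, gains, and boundary-layer saturation below are generic assumptions, not the paper's tuned PUMA 600 controller:

```python
def smc_torque(q, qd, q_ref, qd_ref, lam=5.0, K=10.0, phi=0.05):
    """Sliding mode control law for a single joint. The sliding surface
    is s = de + lam*e, where e is the tracking error; the discontinuous
    switching term sign(s) is replaced by a boundary-layer saturation
    sat(s/phi) to limit chattering. Gains lam, K and layer width phi
    are illustrative, not identified plant values."""
    e = q_ref - q
    de = qd_ref - qd
    s = de + lam * e
    sat = max(-1.0, min(1.0, s / phi))
    return K * sat
```

Far from the surface the controller applies the maximum torque K; inside the boundary layer the output scales linearly with s, which is the standard chattering-reduction compromise.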

Keywords: internet, manipulator robot, PID controller, remote control, sliding mode, teleoperation

Procedia PDF Downloads 309
16358 A New Approach to the Digital Implementation of Analog Controllers for a Power System Control

Authors: G. Shabib, Esam H. Abd-Elhameed, G. Magdy

Abstract:

In this paper, a comparison of discrete-time PID and PSS controllers is presented through small-signal stability analysis of a power system comprising one machine connected to an infinite bus. This comparison is achieved by using a new approach to discretization which converts the s-domain model of analog controllers to a z-domain model to enhance the damping of a single-machine power system. The new method utilizes the Plant Input Mapping (PIM) algorithm. The proposed algorithm is stable for any sampling rate and takes the closed-loop characteristics into consideration. On the other hand, traditional discretization methods such as Tustin’s method produce satisfactory results only when the sampling period is sufficiently short.
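For contrast with the PIM approach, the traditional Tustin discretization mentioned above can be sketched on a PI controller; the transfer-function form and gains are illustrative:

```python
def tustin_pi(kp, ki, T):
    """Discretize a continuous PI controller C(s) = kp + ki/s with the
    bilinear (Tustin) substitution s = (2/T)(z - 1)/(z + 1). Returns
    (b, a): numerator and denominator coefficients of C(z) in powers
    of z^-1. This sketches the 'traditional' method the abstract
    contrasts with PIM; the PIM algorithm itself is not reproduced."""
    # ki/s maps to ki*T/2 * (1 + z^-1) / (1 - z^-1), hence
    # C(z) = [(kp + ki*T/2) + (-kp + ki*T/2) z^-1] / (1 - z^-1)
    b0 = kp + ki * T / 2.0
    b1 = -kp + ki * T / 2.0
    return [b0, b1], [1.0, -1.0]
```

Note the denominator root at z = 1, the discrete integrator; the approximation degrades as the sampling period T grows, which is exactly the limitation the abstract attributes to Tustin's method.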

Keywords: power system stabilizer (PSS), proportional-integral-derivative (PID), plant input mapping (PIM)

Procedia PDF Downloads 493
16357 Wet Sliding Wear and Frictional Behavior of Commercially Available Perspex

Authors: S. Reaz Ahmed, M. S. Kaiser

Abstract:

The tribological behavior of commercially used Perspex was evaluated under dry and wet sliding conditions using a pin-on-disc wear tester with different applied loads ranging from 2.5 to 20 N. Experiments were conducted with the sliding distance varying from 0.2 km to 4.6 km, while the sliding velocity was kept constant at 0.64 m/s. The results reveal that the weight loss increases with applied load and sliding distance. The nature of the wear rate was very similar in both sliding environments: initially the wear rate increased very rapidly with increasing sliding distance and then progressed at a slower rate. Moreover, the wear rate in the wet sliding environment was significantly lower than that under the dry sliding condition. The worn surfaces were characterized by optical microscopy and SEM. It is found that surface modification has a significant effect on the sliding wear performance of Perspex.

Keywords: Perspex, wear, friction, SEM

Procedia PDF Downloads 259
16356 An Efficient Approach for Speeding up Non-Negative Matrix Factorization for High Dimensional Data

Authors: Bharat Singh, Om Prakash Vyas

Abstract:

Nowadays, applications dealing with high-dimensional data are used tremendously in popular areas. To cope with such data, various approaches have been developed by researchers over the last few decades. One of the problems with NMF approaches is that their randomized initial values cannot provide a global optimum within a limited number of iterations, only a local one. For this reason, we have proposed a new approach that chooses the initial values of the decomposition to tackle the issue of computational expense. We have devised an algorithm for initializing the values of the decomposed matrices based on PSO (Particle Swarm Optimization). Through the experimental results, we show that the proposed method converges very fast in comparison to other low-rank approximation techniques such as simple multiplicative NMF and ACLS.
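The baseline against which the PSO initialization is compared, simple multiplicative NMF, can be sketched as follows; the initialization arguments stand in for the PSO-chosen factors, which are not reproduced here:

```python
import numpy as np

def nmf_multiplicative(V, W0, H0, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates for the factorization V ~ W @ H
    under the Frobenius objective, starting from a supplied
    initialization (any non-negative W0, H0; in the abstract's scheme
    these would come from PSO)."""
    W, H = W0.copy(), H0.copy()
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis
    return W, H
```

Because the updates are multiplicative, non-negativity of the factors is preserved automatically; the quality of the local optimum reached depends strongly on W0 and H0, which is precisely the sensitivity the PSO initialization targets.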

Keywords: ALS, NMF, high dimensional data, RMSE

Procedia PDF Downloads 329
16355 An Adaptive Conversational AI Approach for Self-Learning

Authors: Airy Huang, Fuji Foo, Aries Prasetya Wibowo

Abstract:

In recent years, the focus of Natural Language Processing (NLP) development has been gradually shifting from the semantics-based approach to the deep learning one, which performs faster with fewer resources. Although it performs well in many applications, the deep learning approach, due to its lack of semantic understanding, has difficulties in noticing and expressing a novel business case with a pre-defined scope. In order to meet the requirements of specific robotic services, the deep learning approach is very labor-intensive and time-consuming. It is very difficult to improve the capabilities of conversational AI in a short time, and it is even more difficult to self-learn from experience to deliver the same service in a better way. In this paper, we present an adaptive conversational AI algorithm that combines both semantic knowledge and deep learning to address this issue by learning new business cases through conversations. After self-learning from experience, the robot adapts to business cases originally out of scope. The idea is to build new or extended robotic services in a systematic and fast-training manner with self-configured programs and constructed dialog flows. For every cycle in which a chat bot (conversational AI) delivers a given set of business cases, it is set to self-measure its performance and rethink every unknown dialog flow to improve the service by retraining with those new business cases. If the training process reaches a bottleneck and incurs difficulties, a human operator will be informed for further instructions. He or she may retrain the chat bot with newly configured programs, or new dialog flows for new services. One approach employs semantic analysis to learn the dialogues for new business cases and then establish the necessary ontology for the new service. 
With the newly learned programs, it completes the understanding of the reaction behavior and finally uses dialog flows to connect all the understanding results and programs, achieving the goal of the self-learning process. We have developed a chat bot service mounted on a kiosk, with a camera for facial recognition and a directional microphone array for voice capture. The chat bot serves as a concierge with polite conversation for visitors. As a proof of concept, we have demonstrated the completion of 90% of reception services with limited self-learning capability.

Keywords: conversational AI, chatbot, dialog management, semantic analysis

Procedia PDF Downloads 123
16354 Similarity Solutions of Nonlinear Stretched Biomagnetic Flow and Heat Transfer with Signum Function and Temperature Power Law Geometries

Authors: M. G. Murtaza, E. E. Tzirtzilakis, M. Ferdows

Abstract:

Biomagnetic fluid dynamics is an interdisciplinary field comprising engineering, medicine, and biology. Biofluid dynamics is directed towards finding and developing solutions to some human-body-related diseases and disorders. This article describes the flow and heat transfer of a two-dimensional, steady, laminar, viscous and incompressible biomagnetic fluid over a non-linear stretching sheet in the presence of a magnetic dipole. Our model treats blood as a biomagnetic fluid, in line with biomagnetic fluid dynamics (BFD), and is based on the principles of ferrohydrodynamics (FHD). The temperature at the stretching surface is assumed to follow a power-law variation, and the stretching velocity is assumed to have a nonlinear form with the signum (sign) function. The governing boundary layer equations with boundary conditions are simplified to coupled higher-order equations using the usual transformations. Numerical solutions of the governing momentum and energy equations are obtained by efficient numerical techniques based on the common finite difference method with central differencing, on tridiagonal matrix manipulation and on an iterative procedure. Computations are performed for a wide range of the governing parameters, such as the magnetic field parameter, the power-law temperature exponent, and other involved parameters, and the effect of these parameters on the velocity and temperature fields is presented. It is observed that for increasing values of the magnetic parameter, the velocity distribution decreases while the temperature distribution increases. Besides, the finite difference results for the skin-friction coefficient and the rate of heat transfer are discussed. This study has a bearing on applications demanding high targeting efficiency, for which a high magnetic field is required in the targeted body compartment.
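The tridiagonal matrix manipulation underlying such central-difference schemes typically means the Thomas algorithm; a generic sketch follows (this is the standard kernel, not the paper's full iterative procedure):

```python
def thomas_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal system A x = d, where
    a is the sub-diagonal (length n-1), b the diagonal (length n),
    c the super-diagonal (length n-1). Runs in O(n), versus O(n^3)
    for general Gaussian elimination."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward sweep
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each interior grid point of a central-difference discretization couples only to its two neighbours, which is what produces the tridiagonal structure this solver exploits.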

Keywords: biomagnetic fluid, FHD, MHD, nonlinear stretching sheet

Procedia PDF Downloads 146
16353 Applied Actuator Fault Accommodation in Flight Control Systems Using Fault Reconstruction Based FDD and SMC Reconfiguration

Authors: A. Ghodbane, M. Saad, J. F. Boland, C. Thibeault

Abstract:

Historically, actuator redundancy was used to deal with faults occurring suddenly in flight systems. This technique was generally expensive and time-consuming and involved increased weight and space in the system. Therefore, nowadays, the on-line fault diagnosis of actuators and their accommodation plays a major role in the design of avionic systems. These approaches, known as Fault Tolerant Flight Control systems (FTFCs), are able to adapt to such sudden faults while keeping avionics systems lighter and less expensive. In this paper, an FTFC system based on the Geometric Approach and a Reconfigurable Flight Control (RFC) are presented. The Geometric Approach is used for cosmic-ray fault reconstruction, while Sliding Mode Control (SMC) based on Lyapunov stability theory is designed for the reconfiguration of the controller in order to compensate for the fault effect. Matlab®/Simulink® simulations are performed to illustrate the effectiveness and robustness of the proposed flight control system against faulty actuator signals caused by cosmic rays. The results demonstrate the successful real-time implementation of the proposed FTFC system on a non-linear 6-DOF aircraft model.

Keywords: actuators’ faults, fault detection and diagnosis, fault tolerant flight control, sliding mode control, geometric approach for fault reconstruction, Lyapunov stability

Procedia PDF Downloads 395
16352 The Analysis of Secondary Case Studies as a Starting Point for Grounded Theory Studies: An Example from the Enterprise Software Industry

Authors: Abilio Avila, Orestis Terzidis

Abstract:

A fundamental principle of Grounded Theory (GT) is to prevent the formation of preconceived theories. This implies the need to start a research study with an open mind and to avoid being absorbed by the existing literature. However, starting a new study without an understanding of the research domain and its context can be extremely challenging. This paper presents a research approach that simultaneously supports a researcher in identifying and focusing on critical areas of a research project and prevents the formation of concepts prejudiced by the current body of literature. This approach comprises four stages: selection of secondary case studies, analysis of secondary case studies, development of an initial conceptual framework, and development of an initial interview guide. The analysis of secondary case studies as a starting point for a research project allows a researcher to create a first understanding of a research area based on real-world cases without being influenced by the existing body of theory. It enables a researcher to develop, through a structured course of action, a firm guide that establishes a solid starting point for further investigations. Thus, the described approach may have significant implications for GT researchers who aim to start a study within a given research area.

Keywords: grounded theory, interview guide, qualitative research, secondary case studies, secondary data analysis

Procedia PDF Downloads 249
16351 Value Index, a Novel Decision Making Approach for Waste Load Allocation

Authors: E. Feizi Ashtiani, S. Jamshidi, M. H. Niksokhan, A. Feizi Ashtiani

Abstract:

Waste load allocation (WLA) policies may use multi-objective optimization methods to find the most appropriate and sustainable solutions. These usually intend to simultaneously minimize two criteria, total abatement costs (TC) and environmental violations (EV). If other criteria, such as inequity, need to be minimized as well, more binary optimizations through different scenarios are required. In order to reduce the calculation steps, this study presents the value index as an innovative decision-making approach. Since the value index contains both the environmental violations and the treatment costs, it can be maximized simultaneously with the equity index. This implies that the definition of different scenarios for environmental violations is no longer required. Furthermore, the solution is not necessarily the point with minimized total costs or environmental violations. This idea is tested on the Haraz River, in the north of Iran. Here, the dissolved oxygen (DO) level of the river is simulated by the Streeter-Phelps equation in MATLAB software. The WLA is determined for fish farms using multi-objective particle swarm optimization (MOPSO) in two scenarios. In the first, the trade-off curves of TC-EV and TC-inequity are plotted separately, as in the conventional approach. In the second, the value-equity curve is derived. The comparative results show that the solutions are in a similar range of inequity with lower total costs. This is due to the freedom in environmental violation attained in the value index. As a result, the conventional approach can well be replaced by the value index, particularly for problems optimizing these objectives. This shortens the process of achieving the best solutions and may allow a better classification for scenario definition. It is also concluded that decision makers should focus on the value index and weight its contents to find the most sustainable alternatives based on their requirements.
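The Streeter-Phelps equation used for the DO simulation has a standard closed form for the oxygen deficit; a sketch with generic parameter names follows (no Haraz River calibration values are reproduced):

```python
import math

def streeter_phelps(L0, D0, kd, kr, t):
    """Classic Streeter-Phelps oxygen deficit at travel time t:
        D(t) = kd*L0/(kr - kd) * (exp(-kd*t) - exp(-kr*t)) + D0*exp(-kr*t)
    L0: initial BOD, D0: initial deficit, kd: deoxygenation rate,
    kr: reaeration rate (kr != kd). Dissolved oxygen is then the
    saturation concentration minus D(t)."""
    return (kd * L0 / (kr - kd)) * (math.exp(-kd * t) - math.exp(-kr * t)) \
        + D0 * math.exp(-kr * t)
```

The deficit starts at D0, rises toward the sag point where deoxygenation and reaeration balance, and then decays, which is the DO sag curve an allocation scheme must keep above the violation threshold.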

Keywords: waste load allocation (WLA), value index, multi-objective particle swarm optimization (MOPSO), Haraz River, equity

Procedia PDF Downloads 408
16350 Close-Out Netting Clauses from a Comparative Perspective

Authors: Lidija Simunovic

Abstract:

A close-out netting clause is a clause within master agreements which reduces credit risk. It contains the parties' advance agreement that the occurrence of a certain event (such as the commencement of bankruptcy proceedings) will result in the termination of the contract and that their mutual claims will be calculated as a net lump sum to be paid by one party to the other. The enforceability of close-out netting clauses raises many legal questions because it is not treated uniformly across legal systems. Certain legal systems take a liberal approach and allow the enforcement of close-out netting clauses. Others are much stricter and limit or completely prohibit their enforcement through the mandatory provisions of their national bankruptcy laws. The author analyzes the concept of close-out netting clauses in selected comparative legal systems and examines the differences in their legal treatment using the historical, analytical, and comparative method. The analysis shows that the special treatment of close-out netting in national laws with a liberal approach is often driven by financial-industry lobbies and introduced into national law without sufficient justification. By contrast, in legal systems that limit or prohibit close-out netting, the uncertain enforceability of the clause creates potential credit risks. The detected discrepancies in national legal treatment and national financial markets regarding close-out netting lead to the conclusion that, to the author's best knowledge, no national model of close-out netting can serve as a role model that perfectly fits all.

Keywords: close-out netting clauses, derivatives, insolvency, offsetting

Procedia PDF Downloads 133
16349 Low Enrollment in Civil Engineering Departments: Challenges and Opportunities

Authors: Alaa Yehia, Ayatollah Yehia, Sherif Yehia

Abstract:

There is a recurring issue of low enrollment across many civil engineering departments in postsecondary institutions. While there have been moments where enrollment begins to increase, civil engineering departments across the Middle East have faced low enrollment, at around 60%, over the last five years. Many reasons could be attributed to this decline, such as low entry-level salaries, over-saturation of civil engineering graduates in the job market, and a lack of construction projects due to an impending or current recession. However, this recurring problem points to an intrinsic issue with the curriculum. The societal shift toward high technology such as machine learning (ML) and artificial intelligence (AI) demands individuals who are proficient at utilizing it. Therefore, existing curricula must adapt to this change in order to provide an education that is suitable for potential and current students. To provide potential solutions for this issue, the analysis considers two possible implementations of high technology in the civil engineering curriculum. The first approach is to introduce a course on applications of high technology in civil engineering contexts; the other is to intertwine applications of high technology throughout the degree. Both approaches, however, should meet the requirements of accreditation agencies. In addition to the proposed improvement of the civil engineering curriculum, different pedagogical practices must be adopted as well. The passive learning approach may not be appropriate for Gen Z students; current students, now more than ever, need to be introduced to engineering topics and practice through different learning methods to ensure they have the necessary skills for the job market. Different learning methods that incorporate high-technology applications, like AI, must be integrated throughout the curriculum to make the civil engineering degree more attractive to prospective students. Moreover, the paper provides insight into the importance and approach of adapting the civil engineering curriculum to address the current low-enrollment crisis that civil engineering departments globally, but specifically in the Middle East, are facing.

Keywords: artificial intelligence (AI), civil engineering curriculum, high technology, low enrollment, pedagogy

Procedia PDF Downloads 144
16348 Microfacies Analysis, Depositional Environment, and Diagenetic Process of the Antalo Limestone Successions in the Mekelle Outlier (Hagere-Selam, Messobo and Wukro Sections), Northern Ethiopia

Authors: Werede Girmay Tesfasilasiea

Abstract:

Three stratigraphic sections of the Antalo Limestone successions in the Mekelle Outlier, northern Ethiopia (the Hagere-Selam, Messobo, and Wukro sections), have been investigated to distinguish their microfacies features, characterize their reservoirs, and establish their corresponding depositional environments. The Antalo Limestone successions were deposited in the Mekelle Outlier during the Upper Jurassic as a result of flooding of the area by the Tethys Ocean toward the southeast. This study is based on field description and petrographic analysis to determine the depositional environment, age, and reservoir characteristics of the carbonate units. From petrographic study of 100 thin sections and field investigation, 14 microfacies types are recognized. These are grouped into four microfacies associations: tidal flat (MFT1-2), lagoon (MFL1-2), shoal (MFS1-4), and open marine (MFO1-6). Hence, the Antalo Limestone successions were deposited on a shallow carbonate ramp with a wide lateral and vertical distribution of facies. The carbonate units in the studied sections are affected by bioturbation, micritization, cementation, dolomitization, dissolution, silicification, and compaction as types of early diagenetic alteration. Dissolution and dolomitization improved reservoir quality, while cementation and compaction resulted in poor reservoir quality in the Antalo Limestone successions of the Mekelle Outlier. Based on the abundant distribution of Alveosepta jaccardi (Schrodt), Pseudocyclammina lituus (Yokoyama), Kurnubia palestiniensis (Henson), and Somalirhynchia africana in the studied sections, the Antalo Limestone successions are assigned to a Late Oxfordian-Kimmeridgian age.

Keywords: Antalo limestone successions, depositional environment, Mekelle outlier, microfacies analysis, diagenesis, reservoir quality

Procedia PDF Downloads 24
16347 A Comparative Study of Regional Climate Models and Global Coupled Models over Uttarakhand

Authors: Sudip Kumar Kundu, Charu Singh

Abstract:

As a great physiographic divide, the Himalayas affect a large system of water and air circulation that helps determine the climatic conditions of the Indian subcontinent to the south and the mid-Asian highlands to the north. They act as a barrier, shielding India from chill continental air from the north in winter and forcing the rain-bearing southwesterly monsoon to give up maximum precipitation in the region during the monsoon season. Nowadays, extreme weather events such as heavy precipitation, cloudbursts, flash floods, landslides, and extreme avalanches are regular occurrences in the North Western Himalayan (NWH) region. The present study investigates which model(s) are suitable for capturing the rainfall pattern over that region. For this investigation, selected models from the Coordinated Regional Climate Downscaling Experiment (CORDEX) and the Coupled Model Intercomparison Project Phase 5 (CMIP5) have been utilized in a consistent framework for the period 1976 to 2000 (historical). The ability of these driving models from the CORDEX domain and CMIP5 has been examined with respect to their capability to reproduce the spatial distribution as well as the time series of rainfall over NWH in the rainy season, compared against the ground-based India Meteorological Department (IMD) gridded rainfall data set. The analysis shows that models such as MIROC5 and MPI-ESM-LR from both CORDEX and CMIP5 provide the best spatial distribution of rainfall over the NWH region. However, the CORDEX driving models underestimate the daily rainfall amount compared to the CMIP5 driving models, as they are unable to capture the daily rainfall properly when plotted as time series (TS) individually for the states of Uttarakhand (UK) and Himachal Pradesh (HP). Finally, it can be said that the CMIP5 driving models are better than the CORDEX domain models for investigating the rainfall pattern over the NWH region.
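The abstract does not specify its skill metrics. As a minimal sketch of the kind of model-versus-observation comparison described (the rainfall values below are hypothetical, not IMD or CORDEX/CMIP5 data), mean bias and RMSE of a model time series against gridded observations can be computed as:

```python
import math

def bias_and_rmse(model, observed):
    """Mean bias (model minus observed) and RMSE of a rainfall series
    against observations; a negative bias indicates underestimation."""
    n = len(observed)
    bias = sum(m - o for m, o in zip(model, observed)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, observed)) / n)
    return bias, rmse

# Hypothetical daily rainfall (mm) for one monsoon week at a grid point.
obs   = [12.0, 30.0, 8.0, 0.0, 22.0, 15.0, 5.0]
model = [10.0, 25.0, 9.0, 2.0, 18.0, 14.0, 4.0]
print(bias_and_rmse(model, obs))
```

A negative mean bias over the full record would correspond to the underestimation reported for the CORDEX driving models.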

Keywords: global warming, rainfall, CMIP5, CORDEX, NWH

Procedia PDF Downloads 154
16346 A User Identification Technique to Access Big Data Using Cloud Services

Authors: A. R. Manu, V. K. Agrawal, K. N. Balasubramanya Murthy

Abstract:

Authentication is required in stored database systems so that only authorized users can access the data and the related cloud infrastructure. This paper proposes an authentication technique using a multi-factor, multi-dimensional authentication system with multi-level security. The proposed technique is likely to be more robust, as the probability of breaking the password is extremely low. The framework uses a multi-modal biometric approach and SMS to enforce additional security measures beyond the conventional login/password system. The robustness of the technique is demonstrated mathematically using statistical analysis. This work presents the authentication system along with the user authentication architecture diagram, activity diagrams, data flow diagrams, sequence diagrams, and algorithms.
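The abstract does not reproduce its statistical argument. As an illustrative sketch of why multi-factor schemes are hard to break (assuming, hypothetically, that the factors fail independently, which the paper may or may not assume), the probability of defeating every factor is the product of the per-factor compromise probabilities:

```python
def breach_probability(factor_probs):
    """Probability that an attacker defeats every authentication factor,
    under the assumption that factor compromises are independent events."""
    p = 1.0
    for prob in factor_probs:
        p *= prob
    return p

# Purely hypothetical per-factor odds: password guess, SMS interception,
# biometric spoof. The combined probability drops multiplicatively.
print(breach_probability([1e-4, 1e-3, 1e-5]))
```

Adding a factor with compromise probability q multiplies the overall breach probability by q, which is the intuition behind layering biometrics and SMS on top of a password.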

Keywords: design, implementation algorithms, performance, biometric approach

Procedia PDF Downloads 458
16345 Approach-Avoidance Conflict in the T-Maze: Behavioral Validation for Frontal EEG Activity Asymmetries

Authors: Eva Masson, Andrea Kübler

Abstract:

Anxiety disorders (AD) are the most prevalent psychological disorders. However, far from all affected individuals are diagnosed and receive treatment. This gap is probably due to the diagnostic criteria relying on symptoms (according to the DSM-5 definition) with no objective biomarker. Approach-avoidance conflict tasks are one common way to simulate such disorders in a lab setting, with most paradigms focusing on the relationship between behavior and neurophysiology. Approach-avoidance conflict tasks typically place participants in a situation where they have to make a decision that leads to both positive and negative outcomes, thereby sending conflicting signals that trigger the Behavioral Inhibition System (BIS). Furthermore, behavioral validation of such paradigms adds credibility to the tasks: with overt conflict behavior, it is safer to assume that the task actually induced a conflict. Some of these tasks have linked asymmetrical frontal brain activity to induced conflicts and the BIS. However, there is currently no consensus on the direction of the frontal activation. The authors present here a modified version of the T-maze paradigm, a motivational-conflict desktop task in which behavior is recorded simultaneously with high-density EEG (HD-EEG). Methods: In this within-subject design, HD-EEG and behavior of 35 healthy participants were recorded. EEG data were collected with a 128-channel sponge-based system. The motivational-conflict desktop task consisted of three blocks of repeated trials. Each block was designed to record a slightly different behavioral pattern, to increase the chances of eliciting conflict. These behavioral patterns were nevertheless similar enough to allow comparison of the number of trials categorized as 'overt conflict' between the blocks. Results: Overt conflict behavior was exhibited in all blocks, but on average for under 10% of the trials in each block. However, changing the order of the paradigms successfully introduced a 'reset' of the conflict process, thereby providing more trials for analysis. As for the EEG correlates, the authors expect a different pattern for trials categorized as conflict compared to the other trials. More specifically, we expect elevated alpha-frequency power in the left frontal electrodes at around 200 ms post-cueing compared to the right (relatively higher right frontal activity), followed by an inversion around 600 ms later. Conclusion: With this comprehensive approach to a psychological mechanism, new evidence would be brought to the frontal-asymmetry discussion and its relationship with the BIS. Furthermore, with the present task focusing on a very particular type of motivational approach-avoidance conflict, it would open the door to further variations of the paradigm introducing the different kinds of conflict involved in AD. Even though its application as a potential biomarker seems difficult, because of the individual reliability of both the task and the peak frequency in the alpha range, we hope to open the discussion on task robustness for future neuromodulation and neurofeedback applications.
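As background (this is the conventional index in the frontal-asymmetry literature, not necessarily the authors' pipeline), frontal asymmetry in the alpha band is typically quantified as the difference of log alpha power between homologous right and left frontal sites:

```python
import math

def frontal_alpha_asymmetry(left_alpha_power, right_alpha_power):
    """Conventional frontal alpha asymmetry index: ln(right) - ln(left)
    alpha power at homologous frontal sites. Because alpha power is
    inversely related to cortical activity, a positive index is usually
    read as relatively greater left-frontal activity."""
    return math.log(right_alpha_power) - math.log(left_alpha_power)

# Hypothetical alpha-band power values (arbitrary units) from F3/F4.
print(frontal_alpha_asymmetry(left_alpha_power=4.0, right_alpha_power=6.0))
```

In the predicted pattern above, conflict trials would show a negative index around 200 ms post-cueing (higher left alpha, i.e., relatively higher right frontal activity), inverting later.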

Keywords: anxiety, approach-avoidance conflict, behavioral inhibition system, EEG

Procedia PDF Downloads 22
16344 Faculty and Students Perspectives of E-Learning at the University of Bahrain

Authors: Amira Abdulrazzaq

Abstract:

This paper studies the opinions of faculty members and students about the future of education (e-learning) at the University of Bahrain, through quantitative analysis of two distributed surveys: one targeting students of the IT College and the College of Arts, and the other targeting faculty members of both colleges. Through these surveys, the paper measures the following factors: awareness and acceptance, satisfaction, usability, and usefulness. Results indicate positive reactions on all of the above factors.

Keywords: e-learning, education, moodle, WebCT

Procedia PDF Downloads 457
16343 A Quality Improvement Approach for Reducing Stigma and Discrimination against Young Key Populations in the Delivery of Sexual Reproductive Health and Rights Services

Authors: Atucungwiire Rwebiita

Abstract:

Introduction: In Uganda, the provision of adolescent sexual reproductive health and rights (SRHR) services for key populations is still hindered by negative attitudes, stigma and discrimination (S&D) at both the community and facility levels. To address this barrier, Integrated Community Based Initiatives (ICOBI), with support from SIDA, is currently implementing a quality improvement (QI) innovative approach for strengthening the capacity of key population (KP) peer leaders and health workers to deliver friendly SRHR services without S&D. Methods: Our innovative approach involves continuous mentorship and coaching of 8 QI teams at 8 health facilities and their catchment areas. Each of the 8 teams (comprised of 5 health workers and 5 KP peer leaders) is facilitated twice a month by two QI mentors in a 2-hour mentorship session over a period of 4 months. The QI mentors received a 2-week training on QI approaches for reducing S&D against young key populations in the delivery of SRHR services. The mentorship sessions are guided by a manual on which teams base their analysis of the root causes of S&D and their development of key performance indicators (KPIs), in the first and second sessions respectively. The teams then develop action plans in the third session and review implementation progress on the KPIs at the end of subsequent sessions. The KPIs capture information on the attitudes of health workers and peer leaders, the general service delivery setting, and clients' experience. A dashboard was developed to routinely track the KPIs for S&D across all the supported health facilities and catchment areas. After 4 months, QI teams share documented QI best practices and tested change packages on S&D in a learning and exchange session involving all the teams. Findings: The implementation of this approach is showing positive results. So far, QI teams have identified root causes of S&D against key populations including poor information among health workers, fear of a perceived risk of infection, and perceived links between HIV and disreputable behaviour. Others are perceptions that HIV and STIs are divine punishment, and that sex work and homosexuality are against religious and cultural values. They have also noted the perception that MSM are mentally sick and a danger to everyone. Eight QI teams have developed action plans to address these root causes of S&D. Conclusion: This approach is promising and offers a novel, scalable means to implement stigma-reduction interventions in facility and community settings.

Keywords: key populations, sexual reproductive health and rights, stigma and discrimination, quality improvement approach

Procedia PDF Downloads 151
16342 Optimization of an Electro-Submersible Pump for Crude Oil Extraction Processes

Authors: Deisy Becerra, Nicolas Rios, Miguel Asuaje

Abstract:

The Electrical Submersible Pump (ESP) is one of the artificial lift methods most used in recent years; it consists of a serial arrangement of centrifugal pumps. One of the main concerns when handling crude oil is the formation of O/W or W/O (oil/water or water/oil) emulsions inside the pump, due to the shear rate imparted and the presence of high-molecular-weight substances that act as natural surfactants. Therefore, it is important to analyze the flow patterns inside the pump to increase the percentage of oil recovered, using the centrifugal force and the density difference between oil and water to separate the liquid phases. For this study, a Computational Fluid Dynamics (CFD) model was developed in STAR-CCM+ software based on the 3D geometry of a Franklin Electric 4400 4' four-stage ESP. The last stage was modified to improve the centrifugal effect inside the pump, and a perforated double tube was designed with three different hole configurations at the outlet section, through which the separated water flows. The hole arrangements have different geometries, such as circles, rectangles, and irregular shapes forming a grating around the tube. The two-phase flow was modeled using an Eulerian approach with the Volume of Fluid (VOF) method, which predicts the distribution and movement of larger interfaces between immiscible phases. Different water-oil compositions were evaluated: 70-30% v/v, 80-20% v/v, and 90-10% v/v. Greater oil recovery was obtained: for the several compositions evaluated, the volumetric oil fraction was greater than 0.55 at the pump outlet. Similarly, an inversely proportional relationship between the water/oil rate (WOR) and the volumetric flow can be shown. For the volumetric fractions evaluated, the oil flow increased by approximately 10%-41% for circular perforations and 19%-49% for rectangular perforations, relative to the inlet flow. In addition, eliminating the pump diffuser in the last stage reduced the head by approximately 20%.
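As a rough illustration of the VOF idea (not the authors' STAR-CCM+ setup), each cell carries a phase volume fraction, and mixture properties are the volume-fraction-weighted average of the phase properties; the densities below are hypothetical:

```python
def mixture_property(alpha_oil, prop_oil, prop_water):
    """VOF mixture rule for a two-phase cell: the cell property is the
    volume-fraction-weighted average of the phase properties, with
    alpha_oil + alpha_water = 1."""
    return alpha_oil * prop_oil + (1.0 - alpha_oil) * prop_water

# Hypothetical densities (kg/m^3) for a cell that is 70% oil by volume.
rho_cell = mixture_property(0.7, 850.0, 1000.0)
print(rho_cell)
```

The solver advects the volume fraction field, and quantities such as the outlet volumetric oil fraction reported above are integrals of alpha over the outlet section.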

Keywords: computational fluid dynamic, CFD, electrical submersible pump, ESP, two phase flow, volume of fluid, VOF, water/oil rate, WOR

Procedia PDF Downloads 141
16341 Attitudes of the Indigenous People from Providencia, Amazon towards the Bora Language

Authors: Angela Maria Sarmiento

Abstract:

Since the end of the 19th century, the Bora people have struggled to survive two stages of colonial domination, which resulted in situations of forced contact with the Western world. Their inclusion in global designs altered the configuration of their local spaces and social practices; thus, the Bora language was affected and prone to transformation. This descriptive, interpretive study, within the field of research on indigenous and minoritized groups, aimed at analysing the linguistic attitudes toward, as well as the contextual situation of, the Bora language in Providencia, an ancestral territory and speech community in the midst of the Colombian Amazon rainforest. Through an inquiry into their sociolinguistic practices, this study also considered the effects of the events arising from the rubber exploitation of the late 19th century and the arrival of the Capuchin mission in the early 20th century. The methodology followed an ethnographic approach, which allowed the researcher to study the social phenomena from the perspective of the participants. Fieldwork diaries, field notes, and semi-structured interviews were conducted and then triangulated with participant observations. The findings suggest a transition from the current individual bilingualism towards Spanish monolingualism, reinforced by the absence of a functional distribution of the three varieties (Bora, Huitoto, and Spanish). Also, positive attitudes towards the Spanish language are based on its functionality, while positive attitudes towards the Bora language mostly refer to pride and identity. Negative attitudes are directed only towards the Bora language. In the search for the roots of these negative attitudes, the traumatic experiences of the rubber exploitation and the indigenous experiences at the Capuchin boarding school emerged. Finally, the situation of the Bora language can be understood as a social fact strongly connected to previous years of colonial domination and to the current, continuous incursion of new global-colonial designs.

Keywords: Bora language, language contact, linguistic attitudes, speech communities

Procedia PDF Downloads 133
16340 Investigation of Shear Strength, and Dilative Behavior of Coarse-grained Samples Using Laboratory Test and Machine Learning Technique

Authors: Ehsan Mehryaar, Seyed Armin Motahari Tabari

Abstract:

Coarse-grained soils are well known and commonly used in a wide range of geotechnical projects, including high earth dams and embankments, for their high shear strength. The most important engineering property of these soils is the friction angle, which represents the interlocking between soil particles and is applied widely in designing and constructing such earth structures. The friction angle and dilative behavior of coarse-grained soils can be estimated from empirical correlations with in-situ testing and physical properties of the soil, or measured directly in the laboratory by performing direct shear or triaxial tests. Unfortunately, large-scale testing is difficult, challenging, and expensive, and is not possible in most soil mechanics laboratories. It is therefore common to remove the large particles and run the tests, which cannot be counted as an exact estimation of the parameters and behavior of the original soil. This paper describes a new methodology to scale the particle grading distribution of a well-graded gravel sample down to a smaller sample that can be tested in an ordinary direct shear apparatus, and to estimate the stress-strain behavior, friction angle, and dilative behavior of the original coarse-grained soil, considering its confining pressure and relative density, using a machine learning method. A total of 72 direct shear tests were performed at 6 different sizes, 3 different confining pressures, and 4 different relative densities. The Multivariate Adaptive Regression Splines (MARS) technique was used to develop an equation predicting shear strength and dilative behavior from the size distribution of the coarse-grained soil particles. An uncertainty analysis was also performed to examine the reliability of the proposed equation.
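As a minimal sketch of the idea behind MARS (not the fitted equation from this study), the model is an intercept plus a weighted sum of hinge basis functions max(0, x - c) and max(0, c - x) of the predictors (e.g., particle size, confining pressure, relative density). The coefficients and knots below are purely hypothetical:

```python
def hinge(x, knot, direction=+1):
    """MARS hinge basis function: max(0, x - knot) if direction=+1,
    max(0, knot - x) if direction=-1."""
    return max(0.0, direction * (x - knot))

def mars_predict(x, intercept, terms):
    """Evaluate a one-predictor MARS model: intercept plus the sum of
    coef * hinge(x, knot, direction) over all (coef, knot, direction) terms."""
    return intercept + sum(c * hinge(x, k, d) for c, k, d in terms)

# Hypothetical model of friction angle (deg) vs. mean particle size (mm),
# with a single knot at 5 mm: slope changes on either side of the knot.
terms = [(1.2, 5.0, +1), (-0.4, 5.0, -1)]
print(mars_predict(8.0, 38.0, terms))  # = 38 + 1.2 * max(0, 8 - 5)
```

A fitted MARS equation like the paper's would carry several such terms per predictor, with knots and coefficients chosen by the forward/backward selection procedure.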

Keywords: MARS, coarse-grained soil, shear strength, uncertainty analysis

Procedia PDF Downloads 150
16339 The Involvement of Visual and Verbal Representations Within a Quantitative and Qualitative Visual Change Detection Paradigm

Authors: Laura Jenkins, Tim Eschle, Joanne Ciafone, Colin Hamilton

Abstract:

An original working memory model suggested the separation of visual and verbal systems within the working memory architecture, in which only visual components were used during visual working memory tasks. It was later suggested that the visuospatial sketchpad was the only memory component in use during visual working memory tasks, and components such as the phonological loop were not considered. In more recent years, a contrasting approach has been developed that invokes an executive resource to incorporate both visual and verbal representations in visual working memory paradigms. This was supported by research demonstrating the use of verbal representations and an executive resource in a visual matrix patterns task. The aim of the current research is to investigate the working memory architecture during both a quantitative and a qualitative visual working memory task. A dual-task method will be used, with three secondary tasks designed to tap specific components within the working memory architecture: Dynamic Visual Noise (visual components), Visual Attention (spatial components), and Verbal Attention (verbal components). The two visual working memory tasks will be compared directly to discover whether verbal representations are in use, as the previous literature suggested; this direct comparison has not yet been made in the literature. Consideration will be given to whether a domain-specific approach should be employed when discussing visual working memory tasks, or whether a more domain-general approach could be used instead.

Keywords: semantic organisation, visual memory, change detection

Procedia PDF Downloads 577
16338 Designing Directed Network with Optimal Controllability

Authors: Liang Bai, Yandong Xiao, Haorang Wang, Songyang Lao

Abstract:

The directedness of links is crucial in determining the controllability of complex networks: the edge directions alone can determine whether a network is controllable. Obviously, for a given network, we wish to design its edge directions so that the network approaches optimal controllability. In this work, we first introduce two methods to enhance network controllability by assigning edge directions. However, these two methods cannot completely mitigate the negative effects of inaccessibility and dilations. To approach optimal network controllability, the edge directions must mitigate the negative effects of inaccessibility and dilations as much as possible. Finally, we propose an edge-direction assignment for optimal controllability. The proposed method has been shown to be effective on real-world and synthetic networks.
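The abstract does not state how controllability is quantified. As background, a standard measure in the structural controllability framework (Liu et al.) is the minimum number of driver nodes, N_D = max(1, N - |M*|), where |M*| is the size of a maximum matching of the directed edges; designs that increase the matching reduce the driver nodes. A minimal pure-Python sketch, using an augmenting-path bipartite matching:

```python
def min_driver_nodes(n, edges):
    """Minimum number of driver nodes for structural controllability of a
    directed network: N_D = max(1, n - |maximum matching|), where the
    matching pairs out-copies with in-copies of the nodes."""
    adj = [[] for _ in range(n)]          # out-copy u -> reachable in-copies
    for u, v in edges:
        adj[u].append(v)
    match_in = [-1] * n                   # in-copy v -> matched out-copy

    def augment(u, seen):
        # Try to match out-copy u, rematching conflicting nodes recursively.
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_in[v] == -1 or augment(match_in[v], seen):
                    match_in[v] = u
                    return True
        return False

    matching = sum(augment(u, [False] * n) for u in range(n))
    return max(1, n - matching)

# A directed path 0 -> 1 -> 2 is controllable from a single driver node.
print(min_driver_nodes(3, [(0, 1), (1, 2)]))  # 1
```

Reversing an edge changes the matching, which is why edge-direction design can move a network toward (or away from) the single-driver optimum.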

Keywords: complex network, dynamics, network control, optimization

Procedia PDF Downloads 159