Search results for: real gas model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20012

9872 Quantifying Wave Attenuation over an Eroding Marsh through Numerical Modeling

Authors: Donald G. Danmeier, Gian Marco Pizzo, Matthew Brennan

Abstract:

Although wetlands have been proposed as a green alternative for managing coastal flood hazards, given their capacity to adapt to sea level rise and their provision of multiple ecological and social co-benefits, they are often overlooked because of the challenges in quantifying the uncertainty and natural variability of these systems. The objective of this study was to quantify the wave attenuation provided by a natural marsh surrounding a large oil refinery along the US Gulf Coast that has experienced steady erosion along its shoreward edge. The vegetation module of the SWAN wave model was activated and coupled with a hydrodynamic model (DELFT3D) to capture the two-way interactions between the changing water level and the wave field over the course of a storm event. Since the marsh response to relative sea level rise is difficult to predict, a range of future marsh morphologies was explored. Numerical results were examined to determine the amount of wave attenuation as a function of marsh extent and the relative contributions from white-capping, depth-limited wave breaking, bottom friction, and flexing of vegetation. In addition to the coupled DELFT3D-SWAN modeling of a storm event, an uncoupled SWAN-VEG model was applied to a simplified bathymetry to explore a larger experimental design space. The wave modeling revealed that the rate of wave attenuation decreases for higher surge but remains significant over a wide range of water levels and outboard wave heights. The results also provide insights into the minimum marsh extent required to fully realize the potential wave attenuation, so that the changing coastal hazards can be managed.
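The depth dependence of vegetation-induced attenuation reported above can be sketched with the classic rigid-vegetation damping form (Dalrymple-type, also underlying SWAN's vegetation dissipation). This is a minimal illustration, not the study's coupled model; every parameter value below (drag coefficient, stem density, stem width, plant height) is an invented placeholder.

```python
import math

def beta_vegetation(Cd, N, bv, H0, k, h, l):
    """Damping coefficient (1/m) for monochromatic waves over rigid vegetation:
    Cd drag coefficient, N stems/m^2, bv stem width (m), H0 incident wave
    height (m), k wavenumber (1/m), h water depth (m), l plant height (m)."""
    num = math.sinh(k * l) ** 3 + 3 * math.sinh(k * l)
    den = (math.sinh(2 * k * h) + 2 * k * h) * math.sinh(k * h)
    return (4 * Cd * bv * N * H0 * k / (9 * math.pi)) * num / den

def attenuated_height(H0, x, beta):
    """Wave height after propagating a distance x (m) into the marsh."""
    return H0 / (1.0 + beta * x)

# Higher surge (deeper water over the same plants) weakens the damping,
# but attenuation still grows with marsh extent x.
b_low = beta_vegetation(Cd=1.0, N=400, bv=0.01, H0=1.0, k=0.05, h=2.0, l=0.5)
b_high = beta_vegetation(Cd=1.0, N=400, bv=0.01, H0=1.0, k=0.05, h=4.0, l=0.5)
```

With these placeholder values the damping coefficient drops as the depth doubles, mirroring the finding that attenuation weakens, but does not vanish, at higher surge.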

Keywords: green infrastructure, wave attenuation, wave modeling, wetland

Procedia PDF Downloads 117
9871 Modeling of Thermally Induced Acoustic Emission Memory Effects in Heterogeneous Rocks with Consideration for Fracture Development

Authors: Vladimir A. Vinnikov

Abstract:

The paper proposes a model of an inhomogeneous rock mass with an initially random distribution of microcracks on mineral grain boundaries. It describes the behavior of cracks in a medium under the effect of a thermal field, with the medium heated instantaneously to a predetermined temperature. Crack growth occurs according to the concepts of fracture mechanics, provided that the stress intensity factor K exceeds the critical value Kc. The modeling of thermally induced acoustic emission memory effects is based on the assumption that every event of crack nucleation or crack growth caused by heating is accompanied by a single acoustic emission event. Parameters of the thermally induced acoustic emission memory effect produced by cyclic heating and cooling (with the temperature amplitude increasing from cycle to cycle) were calculated for several rock texture types (massive, banded, and disseminated). The study substantiates the adaptation of the proposed model to humidity interference with the thermally induced acoustic emission memory effect. The influence of humidity on the thermally induced acoustic emission memory effect in quasi-homogeneous and banded rocks is estimated. It is shown that such modeling allows the structure and texture of rocks to be taken into account and the influence of interference factors on the distinctness of the thermally induced acoustic emission memory effect to be estimated. The numerical modeling can be used to obtain information about past thermal impacts on rocks and to determine the degree of rock disturbance by means of non-destructive testing.
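The one-AE-event-per-growth-act assumption and the resulting thermal memory effect can be caricatured in a few lines. In this toy sketch a per-crack temperature threshold stands in for the full K >= Kc stress-intensity computation, and all numbers are illustrative.

```python
import random

def heating_cycle(thresholds, dT, hardening=1.3):
    """One instantaneous heating cycle to peak temperature rise dT.
    thresholds[i] is the temperature rise needed to advance crack i
    (a stand-in for the criterion K >= Kc). Each advance emits exactly
    one AE event and raises the crack's threshold, which is what
    produces the memory effect."""
    events = 0
    for i, t in enumerate(thresholds):
        if dT >= t:
            events += 1
            thresholds[i] = dT * hardening  # crack arrests; needs a hotter cycle
    return events

random.seed(1)
thresholds = [random.uniform(50.0, 200.0) for _ in range(500)]  # random microcracks
counts = [heating_cycle(thresholds, dT) for dT in (80, 80, 120, 120, 160)]
```

Repeating a peak temperature yields no new events, while every increase in amplitude does: the signature of a thermally induced acoustic emission memory effect.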

Keywords: degree of rock disturbance, non-destructive testing, thermally induced acoustic emission memory effects, structure and texture of rocks

Procedia PDF Downloads 252
9870 Decentralization and Participatory Approach in the Cultural Heritage Management in Local Thailand

Authors: Amorn Kritsanaphan

Abstract:

This paper illustrates the decentralization of cultural heritage management in local Thailand, a setting similar to that of other middle-income developing countries, characterized by rapid tourism-industrialization, weak formal state institutions and procedures, and intensive use of cultural heritage resources. The author conducted field research in local Thailand, principally using qualitative primary data gathering, combined with records reviews and content analysis of documents. The author also attended local public meetings and social activities, and interacted casually with local residents and governments. Cultural heritage management is supposed to improve through multi-stakeholder participation and decentralization. However, processes and outcomes are far from straightforward and depend on the variety of contingencies and contexts involved. The multi-stakeholder and participatory approach to decentralized cultural heritage management in Thailand has pushed to the forefront and sharpened a number of existing problems. However, the most significant contribution of decentralization has been the creation of real political space in which various local stakeholders have become active and respond to and address their concerns in various ways vis-à-vis cultural heritage problems. Improving cultural heritage sustainability and the viability of local livelihoods through decentralization and a participatory approach is by no means certain. However, the shift creates spaces potent with possibilities for meaningful and constructive engagement between and among local state and non-state actors, which can lead to synergies and positive outcomes.

Keywords: decentralization, participatory approach, cultural heritage management, multi-stakeholder approach

Procedia PDF Downloads 131
9869 ‘Nature Will Slow You Down for a Reason’: Virtual Elder-Led Support Services during COVID-19

Authors: Grandmother Roberta Oshkawbewisens, Elder Isabelle Meawasige, Lynne Groulx, Chloë Hamilton, Lee Allison Clark, Dana Hickey, Wansu Qiu, Jared Leedham, Nishanthini Mahendran, Cameron Maclaine

Abstract:

In March of 2020, the world suddenly shifted with the onset of the COVID-19 pandemic; in-person programs and services became unavailable, and a scramble to shift to virtual service delivery began. The Native Women’s Association of Canada (NWAC) established virtual programming through the Resiliency Lodge model and connected with Indigenous women, girls, Two-Spirit, transgender, and gender-diverse people across Turtle Island and Inuit Nunangat through programs that provide a safe space to slow down and reflect on their lives, environment, and well-being. To continue to grow the virtual Resiliency Lodge model, NWAC needed to answer three questions: how COVID-19 affects Elder-led support services, how Elder-led support services have adapted during the pandemic, and what Wise Practices need to be implemented to continue to develop, refine, and evaluate virtual Elder-led support services specifically for Indigenous women, girls, Two-Spirit, transgender, and gender-diverse people. Through funding from the Canadian Institutes of Health Research (CIHR), NWAC gained deeper insight into these questions and developed a series of key findings and recommendations that are outlined throughout this report. The goals of this project are to contribute to a more robust participatory analysis that reflects the complexities of Elder-led virtual cultural responses and the impacts of COVID-19 on Elder-led support services; to develop culturally and contextually meaningful virtual protocols and wise practices for virtual Indigenous-led support; and to develop an evaluation strategy to improve the capacity of the Resiliency Lodge model.
Significant findings from the project include the following: Resiliency Lodge programs, especially crafting and business sessions, have provided participants with a sense of community and contributed to healing and wellness; Elder-led support services need greater and more stable funding to offer more workshops to more Indigenous women, girls, Two-Spirit, transgender, and gender-diverse people; and Elder- and Indigenous-led programs play a significant role in healing and in building a sense of purpose and belonging among Indigenous people. Ultimately, the findings and recommendations outlined in this research project help to guide future Elder-led virtual support services and emphasize the critical need to increase access to Elder-led programming for Indigenous women, girls, Two-Spirit, transgender, and gender-diverse people.

Keywords: indigenous women, traditional healing, virtual programs, covid-19

Procedia PDF Downloads 114
9868 ESRA: An End-to-End System for Re-identification and Anonymization of Swiss Court Decisions

Authors: Joel Niklaus, Matthias Sturmer

Abstract:

The publication of judicial proceedings is a cornerstone of many democracies. It enables the court system to be held accountable by ensuring that justice is done in accordance with the laws. Equally important is privacy, as a fundamental human right (Article 12 of the Universal Declaration of Human Rights). Therefore, it is important that the parties (especially minors, victims, or witnesses) involved in these court decisions be anonymized securely. Today, the anonymization of court decisions in Switzerland is performed either manually or semi-automatically using primitive software. While much research has been conducted on anonymization for tabular data, the literature on anonymization for unstructured text documents is thin and virtually non-existent for court decisions. In 2019, it was shown that manual anonymization is not secure enough: in 21 of 25 attempted Swiss federal court decisions related to pharmaceutical companies, the pharmaceuticals and legal parties involved could be manually re-identified. This was achieved by linking the decisions with external databases using regular expressions. An automated re-identification system serves as an automated test of the safety of existing anonymizations and thus promotes the right to privacy. Manual anonymization is also very expensive (recurring annual costs of over CHF 20M in Switzerland alone, according to an estimation). Consequently, many Swiss courts publish only a fraction of their decisions. An automated anonymization system reduces these costs substantially, freeing capacity to publish court decisions much more comprehensively. For the re-identification system, topic modeling with Latent Dirichlet Allocation is used to cluster over 500K Swiss court decisions into meaningful related categories.
A comprehensive knowledge base of publicly available data (such as social media, newspapers, government documents, geographical information systems, business registers, online address books, obituary portals, web archives, etc.) is constructed to serve as an information hub for re-identification. For the actual re-identification, a general-purpose language model is fine-tuned on the respective part of the knowledge base for each category of court decisions separately. The input to the model is the court decision to be re-identified, and the output is a probability distribution over named entities constituting possible re-identifications. For the anonymization system, named entity recognition (NER) is used to recognize the tokens that need to be anonymized. Since the focus lies on Swiss court decisions in German, a corpus of Swiss legal texts will be built for training the NER model. The recognized named entities are replaced by the category determined by the NER model plus an identifier, to preserve context. This work is part of an ongoing research project conducted by an interdisciplinary research consortium. Both a legal analysis and the implementation of the proposed system design, ESRA, will be performed within the next three years. This study introduces the system design of ESRA, an end-to-end system for re-identification and anonymization of Swiss court decisions. Firstly, the re-identification system tests the safety of existing anonymizations and thus promotes privacy. Secondly, the anonymization system substantially reduces the costs of manually anonymizing court decisions and thus enables a more comprehensive publication practice.
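The replacement step described above (each recognized entity swapped for its category plus an identifier, so references within a decision stay consistent) can be sketched as follows. The entity spans are assumed to have already been produced by the NER model, and the names are invented.

```python
import re

def anonymize(text, entities):
    """entities: {surface form: category}. Each distinct entity gets a stable
    '<CATEGORY-n>' placeholder, preserving within-document references."""
    ids, counters = {}, {}
    # Longest surface forms first, so a long name is not clobbered via a substring.
    for name, cat in sorted(entities.items(), key=lambda kv: -len(kv[0])):
        if name not in ids:
            counters[cat] = counters.get(cat, 0) + 1
            ids[name] = "<{}-{}>".format(cat, counters[cat])
        text = re.sub(re.escape(name), ids[name], text)
    return text

decision = "A. Muster sued Novapharm AG in Zurich. A. Muster appealed."
redacted = anonymize(decision, {"A. Muster": "PERSON",
                                "Novapharm AG": "COMPANY",
                                "Zurich": "LOCATION"})
# -> "<PERSON-1> sued <COMPANY-1> in <LOCATION-1>. <PERSON-1> appealed."
```

Keeping one identifier per entity (rather than a bare category tag) is what lets a reader still follow who did what across the anonymized decision.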

Keywords: artificial intelligence, courts, legal tech, named entity recognition, natural language processing, privacy, topic modeling

Procedia PDF Downloads 134
9867 Mental Illness on YouTube: Exploring Identity Performance in the Virtual Space

Authors: P. Saee, Baiju Gopal

Abstract:

YouTube has seen a surge in recent years in the number of creators opening up about their mental illness on the video-sharing platform. In documenting their mental health, YouTubers perform an identity of their mental illness in the online world. Identity performance is a theory within identity research that has been readily applied to illness narratives and internet studies. Furthermore, in India, suffering from mental illness is regarded with stigma, which makes the act of taking mental health from a personal to a public space on YouTube a phenomenon worth exploring. Thus, the aim of this paper is to analyse the mental illness narratives of Indian YouTubers in order to understand their performance in the virtual world. For this purpose, a thematic narrative analysis of interviews with four Indian YouTubers was conducted. These data were synthesized with an analysis of the videos the YouTubers had uploaded to their channels about their mental illness. The narratives of the participants shed light on two significant presentations they engage in: (a) the identity of a survivor/fighter and (b) the identity of a silent sufferer. Further, the participants used metaphors to describe their illness, thereby co-constructing a corresponding identity based on their particular metaphors. Lastly, the process of bringing mental illness from back stage to front stage on YouTube involves a shift in the audience, from rejecting and invalidating in real life to supportive and encouraging in the virtual space. Limitations and implications for future research are outlined.

Keywords: cyber-psychology, internet, media, mental health, mental illness, technology

Procedia PDF Downloads 158
9866 Numerical Tools for Designing Multilayer Viscoelastic Damping Devices

Authors: Mohammed Saleh Rezk, Reza Kashani

Abstract:

Auxiliary damping has gained popularity in recent years, especially in structures such as mid- and high-rise buildings. Distributed damping systems (typically viscous and viscoelastic) and reactive damping systems (such as tuned mass dampers) are the two damping choices for such structures. Distributed VE dampers are normally configured as braces or damping panels, which are engaged through relatively small movements between the structural members when the structure sways under wind or earthquake loading. In addition to being used as stand-alone dampers in distributed damping applications, VE dampers can also be incorporated into the suspension element of tuned mass dampers (TMDs). In this study, analytical and numerical tools are developed for the modeling and design of multilayer viscoelastic damping devices used to damp the vibration of large structures. Considering the limitations of analytical models for the synthesis and analysis of realistic, large, multilayer VE dampers, the emphasis of the study has been on numerical modeling using the finite element method. To verify the finite element models, a two-layer VE damper using ½-inch synthetic viscoelastic urethane polymer was built and tested, and the measured parameters were compared with the numerically predicted ones. The numerically predicted and experimentally evaluated damping and stiffness of the test VE damper were in very good agreement. The effectiveness of VE dampers in adding auxiliary damping to larger structures is demonstrated numerically by incorporating one such damper, as a chevron brace, into the model of a massive frame subjected to an abrupt lateral load. A comparison of the responses of the frame to the aforementioned load, without and with the VE damper, clearly shows the efficacy of the damper in reducing the extent of frame vibration.
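For context, the stiffness and damping such a model predicts for a multilayer shear damper follow from the complex shear modulus of the VE material. The sketch below uses the common lumped form K' = nG'A/h; the two ½-inch layers echo the test damper above, but the storage modulus and loss factor are illustrative assumptions, not measured values.

```python
import math

def damper_stiffness(G_storage, loss_factor, area, thickness, n_layers=2):
    """Storage and loss stiffness (N/m) of a multilayer VE shear damper with
    layers sheared in parallel: K' = n*G'*A/h, K'' = eta*K'."""
    k_storage = n_layers * G_storage * area / thickness
    return k_storage, loss_factor * k_storage

def equivalent_viscous_damping(k_loss, omega):
    """Equivalent viscous coefficient c = K''/omega at frequency omega (rad/s)."""
    return k_loss / omega

# Two 0.5-inch (12.7 mm) urethane layers; G' and eta are placeholder values.
kp, kl = damper_stiffness(G_storage=1.0e6, loss_factor=0.8,
                          area=0.05, thickness=0.0127, n_layers=2)
c_eq = equivalent_viscous_damping(kl, omega=2 * math.pi * 1.0)  # at 1 Hz
```

The frequency dependence of c_eq is one reason equivalent-viscous summaries are only a first pass at what the finite element model resolves directly.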

Keywords: viscoelastic, damper, distributed damping, tuned mass damper

Procedia PDF Downloads 91
9865 Low-Cost Image Processing System for Evaluating Pavement Surface Distress

Authors: Keerti Kembhavi, M. R. Archana, V. Anjaneyappa

Abstract:

Most asphalt pavement condition evaluations use rating frameworks in which pavement distress is estimated by type, extent, and severity. Rating is carried out through the pavement condition rating (PCR), which is tedious and expensive. This paper presents the development of a low-cost technique for pavement distress image analysis that permits the identification of potholes and cracks. The paper explores the application of image processing tools for the detection of potholes and cracks. Longitudinal cracking and potholes are detected using Fuzzy C-Means (FCM) clustering combined with a spectral clustering algorithm. The framework comprises three phases: image acquisition, processing, and feature extraction. A digital camera (GoPro) with a holder is used to capture pavement distress images from a moving vehicle. The FCM classifier and spectral clustering algorithms are used to compute features and classify the longitudinal cracking and potholes. The MATLAB R2016a image processing toolkit is used for performance analysis to assess the viability of detecting pavement distress on selected urban stretches of Bengaluru city, India. The outcomes of image evaluation with the semi-automated image processing framework captured the features of longitudinal cracks and potholes with an accuracy of about 80%. Further, the detected images were validated against the actual dimensions, and the dimensional variability is about 0.46. The linear regression model y = 1.171x - 0.155 was obtained by relating the existing and the experimental/image-processing areas. The R² value of 0.807 obtained from the best-fit line is considered a 'large positive linear association'.
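The FCM classification step can be illustrated on raw pixel intensities. This is a 1-D toy version of the clustering applied to pavement images, here separating dark distress pixels from bright background; the intensity values are made up.

```python
def fuzzy_c_means(points, m=2.0, iters=50):
    """Fuzzy C-Means (two clusters) on scalar values. Returns
    (centers, memberships), where memberships[i][j] is the degree to
    which point i belongs to cluster j."""
    centers = [min(points), max(points)]  # deterministic initialization
    c = 2
    u = [[0.0] * c for _ in points]
    for _ in range(iters):
        for i, x in enumerate(points):                      # membership update
            d = [abs(x - v) + 1e-12 for v in centers]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / dk) ** (2.0 / (m - 1.0)) for dk in d)
        for j in range(c):                                  # center update
            w = [u[i][j] ** m for i in range(len(points))]
            centers[j] = sum(wi * x for wi, x in zip(w, points)) / sum(w)
    return centers, u

# Dark crack/pothole pixels vs. bright pavement background (8-bit intensities)
pixels = [10, 12, 11, 9, 200, 205, 198, 202]
centers, memberships = fuzzy_c_means(pixels)
```

Unlike hard k-means, the soft memberships let borderline pixels carry partial weight in both clusters, which helps with the gradual intensity transitions at crack edges.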

Keywords: crack detection, pothole detection, spectral clustering, fuzzy-c-means

Procedia PDF Downloads 165
9864 Investigation of Delamination Process in Adhesively Bonded Hardwood Elements under Changing Environmental Conditions

Authors: M. M. Hassani, S. Ammann, F. K. Wittel, P. Niemz, H. J. Herrmann

Abstract:

The application of engineered wood, especially in the form of glued-laminated timber, has increased significantly. Recent progress in plywood made of high-strength and high-stiffness hardwoods, like European beech, generally gives designers more freedom through increased dimensional stability and load-bearing capacity. However, the strong hygric dependence of essentially all mechanical properties renders many innovative ideas futile. The tendency of hardwoods toward higher moisture sorption and swelling coefficients leads to significant residual stresses in glued-laminated configurations, cross-laminated patterns in particular. These stress fields cause the initiation and evolution of cracks in the bond-lines, resulting in interfacial de-bonding, loss of structural integrity, and reduction of load-carrying capacity. Consequently, delamination of glued-laminated timber made of hardwood elements can be considered the dominant failure mechanism in such composite elements. In addition, long-term creep and mechano-sorption under changing environmental conditions lead to loss of stiffness and can amplify delamination growth over the lifetime of a structure, even after decades. In this study, we investigate the delamination process of adhesively bonded hardwood (European beech) elements subjected to changing climatic conditions. To gain further insight into the long-term performance of adhesively bonded elements during the design phase of new products, the development and verification of an authentic moisture-dependent constitutive model for various species is of great significance. Since a comprehensive moisture-dependent rheological model comprising all possibly emerging deformation mechanisms has so far been missing, a 3D orthotropic elasto-plastic, visco-elastic, mechano-sorptive material model for wood, with all material constants defined as functions of moisture content, was developed.
Apart from the solid wood adherends, the adhesive layer also plays a crucial role in the generation and distribution of the interfacial stresses. The adhesive can be treated as a continuum layer constructed from finite elements and represented as a homogeneous and isotropic material. To obtain a realistic assessment of the mechanical performance of the adhesive layer and a detailed look at the interfacial stress distributions, a generic constitutive model including all potentially activated deformation modes, namely elastic, plastic, and visco-elastic creep, was developed. We focused our studies on the three most common adhesive systems for structural timber engineering: one-component polyurethane (PUR), melamine-urea-formaldehyde (MUF), and phenol-resorcinol-formaldehyde (PRF). The corresponding numerical integration approaches, with additive decomposition of the total strain, are implemented within the ABAQUS FEM environment by means of the user subroutine UMAT. To predict the true stress state, we perform a history-dependent sequential moisture-stress analysis using the developed material models for both the wood substrate and the adhesive layer. Prediction of the delamination process is founded on the fracture-mechanical properties of the adhesive bond-line, measured under different levels of moisture content, and on the application of cohesive interface elements. Finally, we compare the numerical predictions with experimental observations of de-bonding in glued-laminated samples under changing environmental conditions.
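The additive strain decomposition that such a UMAT integrates can be reduced to one dimension for intuition: under constant stress, total strain is an instantaneous elastic part plus a creep part that saturates. The sketch below uses a single Kelvin-Voigt element with constant, moisture-independent moduli of merely illustrative magnitude, far simpler than the full orthotropic moisture-dependent model.

```python
def total_strain(sigma, E, E_kelvin, tau, t_end, dt=0.01):
    """1-D creep under constant stress sigma: eps = eps_elastic + eps_visco,
    with d(eps_v)/dt = (sigma/E_kelvin - eps_v)/tau, integrated by
    explicit Euler over [0, t_end]."""
    eps_v, t = 0.0, 0.0
    while t < t_end:
        eps_v += dt * (sigma / E_kelvin - eps_v) / tau
        t += dt
    return sigma / E + eps_v

# Illustrative, roughly wood-like orders of magnitude (not measured values)
short_term = total_strain(sigma=1.0e6, E=1.0e10, E_kelvin=5.0e9, tau=10.0, t_end=1.0)
long_term = total_strain(sigma=1.0e6, E=1.0e10, E_kelvin=5.0e9, tau=10.0, t_end=200.0)
```

The creep part adds up to sigma/E_kelvin on top of the elastic sigma/E; in the full model each such element, and the elastic modulus itself, would depend on moisture content.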

Keywords: engineered wood, adhesive, material model, FEM analysis, fracture mechanics, delamination

Procedia PDF Downloads 414
9863 The Search for the Self in Psychotherapy: Findings from Relational Theory and Neuroanatomy

Authors: Harry G. Segal

Abstract:

The idea of the “self” has been essential ever since the early modern period in western culture, especially since the development of psychotherapy, but advances in neuroscience and cognitive theory challenge traditional notions of the self. More specifically, neuroanatomists have found no location of “the self” in the brain; instead, consciousness has been posited to be a rapid combination of perception, memory, anticipation of future events, and judgment. In this paper, a theoretical model is presented to address these neuroanatomical findings and to revise the historical understanding of “selfhood” in the practice of psychotherapy.

Keywords: the self, psychotherapy, the self and the brain

Procedia PDF Downloads 88
9862 Modeling and Simulating Productivity Loss Due to Project Changes

Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier

Abstract:

The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications. These elements are potential causes of claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impacts of changes in terms of productivity losses due to the need to accelerate some project activities. When project changes are initiated, delays can arise. Indeed, project activities are often executed in fast-tracking mode in an attempt to respect the completion date. But the acceleration of project execution and the resulting rework can entail important costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct costs and indirect costs. Direct costs are easily quantifiable, as opposed to indirect costs, which are rarely taken into account when calculating the cost of an engineering change or contract modification, even though several research projects have addressed this subject. However, the proposed models have not yet been accepted by companies, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the loss of productivity that occurs when a project change arises.
Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, a large number of activities leads to a much lower productivity loss than a small number of activities. The speed of productivity reduction for 30-job projects is about 25 percent faster than the reduction speed for 120-job projects. The moment of occurrence of a change also has a significant impact on productivity. Indeed, the sooner the change occurs, the lower the productivity of the labor force. The availability of resources also affects the productivity of a project when a change is implemented: there is a higher loss of productivity when the amount of resources is restricted.
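Two of the qualitative findings above (more activities means a smaller relative loss; an earlier change means lower productivity) can be reproduced by a deliberately crude closed-form stand-in for the simulation: a change triggers rework, executed in overtime at reduced efficiency. This is not the authors' simulation model, and every number is a placeholder.

```python
def effective_productivity(n_jobs, change_frac, hours_per_job=40.0,
                           base_rework=400.0, overtime_eff=0.8):
    """Planned work divided by hours actually spent after one project change.
    change_frac in [0, 1): fraction of the project completed when the change
    hits. An earlier change invalidates more work, so rework shrinks with
    change_frac; overtime_eff < 1 models the productivity penalty of
    accelerated (overtime) execution."""
    planned = n_jobs * hours_per_job
    rework = base_rework * (1.0 - change_frac)
    return planned / (planned + rework / overtime_eff)

p_large = effective_productivity(120, change_frac=0.25)  # big project
p_small = effective_productivity(30, change_frac=0.25)   # small project
```

Because the rework is absorbed over a larger base of planned work, the 120-job project shows a smaller relative productivity loss than the 30-job project, echoing the simulation results.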

Keywords: engineering changes, indirect costs, overtime, productivity, scheduling, simulation

Procedia PDF Downloads 227
9861 The Attentional Focus Impact on the Decision Making in Three-Game Situations in Tennis

Authors: Marina Tsetseli, Eleni Zetou, Maria Michalopoulou, Nikos Vernadakis

Abstract:

Game performance, besides the accuracy and quality of skill execution, depends heavily on where athletes focus their attention while performing a skill. The purpose of the present study was to examine and compare the effect of internal and external focus-of-attention instructions on decision making in tennis among players 8-9 years old (M=8.4, SD=0.49). The participants (N=40) were divided into two groups and followed a four-week intervention training program: the first group (N=20) under internal focus-of-attention instructions and the second group (N=20) under external focus-of-attention instructions. Three measurements took place (pre-test, post-test, and retention test), in which the participants were video-recorded while playing matches under real scoring conditions. The GPAI (Game Performance Assessment Instrument) was used to evaluate decision making in three game situations: service, return of service, and baseline game. Repeated-measures ANOVA (2 groups x 3 measurements) revealed a significant interaction between groups and measurements. Specifically, the data analysis showed the superiority of the group that was instructed to focus externally. The high scores of the external attention group were maintained at the same level at the third measurement as well, which indicates that the impact concerned not only performance but also learning. Thus, cues that lead to an external focus of attention enhance the decision-making skill and therefore the game performance of young tennis players.

Keywords: decision making, evaluation, focus of attention, game performance, tennis

Procedia PDF Downloads 339
9860 The Impact of Management Competency, Project Team, and Process Design to Corporate Performance through Implementing the Self-Development ERP

Authors: Zeplin Jiwa Husada Tarigan, Sautma Ronni Basana, Widjojo Suprapto

Abstract:

Manufacturing companies in East Java develop their own ERP systems or alter ERP systems developed by other companies to suit their needs. To build their own system, a company typically assigns several employees from various departments to create a project team, drawing the employees from the departments that are going to use the ERP system's integrated data. The project team oversees the making of the ERP system from the preparation stage until the go-live implementation. In designing the business process, top management works together with the project team until the project is accomplished. The completion of an ERP project depends on the project itself, the strategy chosen to complete it, the selected work method, the measurement system used to monitor the project, the project evaluation system, and, in the end, the declaration of 'go-live' for the ERP project. Companies that have implemented information technology or ERP show an increase in business performance, as they manage to integrate all management functions within the company. To investigate, questionnaires were distributed to 100 manufacturing companies, and 90 were returned; however, only 46 of these companies develop their own ERP system, so the effective response rate is 46%. The results of data analysis using PLS show that management competency affects the project team and the process design. The process design is adjusted to the real process in order to implement the ERP, but it does not directly affect business performance. The implementation of ERP has a positive impact on company business performance.

Keywords: management competency, project team, process design, ERP implementation, business performance

Procedia PDF Downloads 199
9859 The Social Structuring of Mate Selection: Assortative Marriage Patterns in the Israeli Jewish Population

Authors: Naava Dihi, Jon Anson

Abstract:

Love, so it appears, is not socially blind. We show that partner selection is socially constrained, and that the freedom to choose is limited by at least two major factors, or capitals: on the one hand, material resources and education, locating the partners on a scale of personal achievement and economic independence; on the other, the partners' ascriptive belonging to particular ethnic, or origin, groups, differentiated by the groups' social prestige, as well as by their culture, history, and even physical characteristics. However, the relative importance of achievement and ascriptive factors, as well as the overlap between them, varies from society to society, depending on the society's structure and the factors shaping it. Israeli social structure has been shaped by the waves of new immigrants who arrived over the years. The timing of their arrival, their patterns of physical settlement, and their occupational inclusion or exclusion have together created a mosaic of social groups whose principal common feature has been the country of origin from which they arrived. The analysis of marriage patterns helps illuminate the social meanings of these groups and their borders. To the extent that ethnic group membership has meaning for individuals and influences their life choices, the ascriptive factor will gain in importance relative to the achievement factor in the choice of a marriage partner. In this research, we examine Jewish Israeli marriage patterns by looking at the marriage choices of 5,041 women aged 15 to 49 who were single at the 1983 census and married at the time of the 1995 census, 12 years later. The database for this study was a file linking respondents from the 1983 and 1995 censuses. In both cases, 5 percent of households were randomly chosen, so that our sample includes about 4 percent of the women in Israel in 1983.
We present three basic analyses: (1) who was still single in 1983, using personal and household data from the 1983 census (binomial model); (2) who married between 1983 and 1995, using personal and household data from the 1983 census (binomial model); (3) what were the personal characteristics of the women's partners in 1995, using data from the 1995 census (loglinear model). We show (i) that material and cultural capital both operate to delay marriage and to increase the probability of remaining single; and (ii) that while there is a clear association between ethnic group membership and education, endogamy and homogamy both operate as separate forces which constrain (but do not determine) the choice of marriage partner, and thus both serve to reproduce the current pattern of relationships, as well as to identify patterns of proximity and distance between the different groups.
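The binomial analyses in (1) and (2) amount to standard logistic regressions. A minimal sketch on simulated data is shown below; the covariates (years of education, an endogamy indicator) are hypothetical stand-ins for the study's census variables, not its actual data.

```python
import numpy as np

# Minimal sketch of a binomial (logistic) model of marriage between two
# census waves; covariates are hypothetical stand-ins for census variables.
rng = np.random.default_rng(0)
n = 5041
education = rng.normal(12, 3, n)     # years of schooling (illustrative)
endogamous = rng.integers(0, 2, n)   # within-group match indicator (illustrative)

# Simulated outcome: more education delays marriage (negative coefficient)
logit_true = 0.5 - 0.08 * (education - 12) + 0.3 * endogamous
married = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

# Fit by iteratively reweighted least squares (Newton's method for the logit)
X = np.column_stack([np.ones(n), education, endogamous])
beta = np.zeros(3)
for _ in range(25):
    mu = 1 / (1 + np.exp(-(X @ beta)))
    W = mu * (1 - mu)
    z = X @ beta + (married - mu) / W
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print(np.round(beta, 3))  # intercept, education effect, endogamy effect
```

The recovered signs (education delaying marriage, endogamy increasing it) mirror the abstract's finding (i) on capital delaying marriage and (ii) on endogamy as a constraining force.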

Keywords: Israel, nuptiality, ascription, achievement

Procedia PDF Downloads 102
9858 Proposal of Non-Destructive Inspection Function Based on Internet of Things Technology Using Drone

Authors: Byoungjoon Yu, Jihwan Park, Sujung Sin, Junghyun Im, Minsoo Park, Sehwan Park, Seunghee Park

Abstract:

In this paper, we propose a technology for monitoring the soundness of bridges over the Internet using a non-destructive inspection function. There have been collapse accidents due to aging bridge structures, and it is necessary to prepare for bridge deterioration. The NDT/SHM systems used to maintain existing bridge structures require large numbers of inspection personnel and incur expensive inspection costs, and access of expensive, large equipment to measurement points is required. Because current drone inspection equipment can only inspect through cameras, it is difficult to assess internal damage accurately; the results of an internal damage evaluation are subjective, and it is difficult for non-specialists to interpret them. Therefore, it is necessary to develop new-concept NDT/SHM techniques for bridge maintenance that allow free movement and real-time evaluation of measurement results. This work is financially supported by the Korea Ministry of Land, Infrastructure, and Transport (MOLIT) under the 'Smart City Master and Doctor Course Grant Program' and by a grant (14SCIP-B088624-01) from the Construction Technology Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.

Keywords: Structural Health Monitoring, SHM, non-contact sensing, nondestructive testing, NDT, Internet of Things, autonomous self-driving drone

Procedia PDF Downloads 253
9857 The Hidden Role of Interest Rate Risks in Carry Trades

Authors: Jingwen Shi, Qi Wu

Abstract:

We study the role played by interest rate risk in carry trade returns in order to understand the forward premium puzzle. Our goal is to investigate to what extent the carry trade return is indeed compensation for risk taking and, more importantly, to reveal the nature of these risks. Using option data not only on exchange rates but also on interest rate swaps (swaptions), our first finding is that, besides the consensus currency risks, interest rate risks also contribute a non-negligible portion of the carry trade return. What strikes us is our second finding: large downside risks of future exchange rate movements are, in fact, priced significantly in the option market on interest rates. The role played by interest rate risk differs structurally from that of currency risk. There is a unique premium associated with interest rate risk, though seemingly small in size, which compensates for tail risks, the left tail to be precise. On the technical front, our study relies on accurately retrieving implied distributions from currency options and interest rate swaptions simultaneously, especially their tail components. For this purpose, our major modeling work is to build a new international asset pricing model in which we use an orthogonal setup for pricing kernels and specify non-Gaussian dynamics in order to capture three sets of option skew accurately and consistently across currency options and interest rate swaptions, domestic and foreign, within one model. Our results open a door for studying the forward premium anomaly through implied information from the interest rate derivative market.
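Retrieving an implied distribution from option prices, as described above, is commonly done via the Breeden-Litzenberger relation: the risk-neutral density is the discounted second derivative of the call price with respect to strike. The sketch below applies it numerically to flat-volatility Black-Scholes prices as a generic illustration; it is not the authors' non-Gaussian model, and all parameter values are assumptions.

```python
import math
import numpy as np

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + np.vectorize(math.erf)(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes call price; a flat volatility is used purely for illustration
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm_cdf(d1) - K * np.exp(-r * T) * norm_cdf(d2)

# Breeden-Litzenberger: risk-neutral density q(K) = exp(rT) * d^2 C / dK^2
S, T, r, sigma = 100.0, 0.5, 0.01, 0.20
K = np.linspace(60.0, 160.0, 401)
C = bs_call(S, K, T, r, sigma)
dK = K[1] - K[0]
q = np.exp(r * T) * np.gradient(np.gradient(C, dK), dK)

# The recovered density should be (numerically) non-negative and integrate to ~1
mass = float(np.sum(q) * dK)
print(round(mass, 3))
```

In practice the tail components, which the abstract emphasizes, are the hardest part: they require careful extrapolation of the volatility smile beyond quoted strikes.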

Keywords: carry trade, forward premium anomaly, FX option, interest rate swaption, implied volatility skew, uncovered interest rate parity

Procedia PDF Downloads 429
9856 Dissimilarity Measure for General Histogram Data and Its Application to Hierarchical Clustering

Authors: K. Umbleja, M. Ichino

Abstract:

Symbolic data mining has been developed to analyze data in very large datasets. It is also useful in cases when entry-specific details should remain hidden. Symbolic data mining is quickly gaining popularity as datasets in need of analysis become ever larger. One type of symbolic data is the histogram, which makes it possible to store huge amounts of information in a single variable with a high level of granularity. Other types of symbolic data can also be described as histograms, making the histogram a very important and general symbolic data type: a method developed for histograms can also be applied to other types of symbolic data. Due to its complex structure, analyzing histograms is complicated. This paper proposes a method that allows two histogram-valued variables to be compared, and therefore a dissimilarity between two histograms to be found. The proposed method uses the Ichino-Yaguchi dissimilarity measure for mixed feature-type data analysis as a base and develops a dissimilarity measure specifically for histogram data, which allows histograms with different numbers of bins and bin widths (so-called general histograms) to be compared. The proposed dissimilarity measure is then used as a measure for clustering. Furthermore, a linkage method based on weighted averages is proposed, with the concept of cluster compactness used to measure the quality of the clustering. The method is then validated by application to real datasets. As a result, the proposed dissimilarity measure is found to produce adequate and comparable results with general histograms, without loss of detail or the need to transform the data.
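The core difficulty with general histograms is that the two bin grids do not align. One standard way to handle this, sketched below, is to rebin both histograms onto their merged bin edges (assuming mass is uniform within each original bin) and then compare the cumulative distributions. This illustrative distance is earth-mover-like; it is not the paper's exact Ichino-Yaguchi-based measure.

```python
import numpy as np

def rebin(edges, weights, common_edges):
    """Redistribute histogram mass onto common bin edges, assuming
    mass is uniform within each original bin (a standard assumption)."""
    out = np.zeros(len(common_edges) - 1)
    for lo, hi, w in zip(edges[:-1], edges[1:], weights):
        for j, (clo, chi) in enumerate(zip(common_edges[:-1], common_edges[1:])):
            overlap = max(0.0, min(hi, chi) - max(lo, clo))
            if overlap > 0:
                out[j] += w * overlap / (hi - lo)
    return out

def histogram_dissimilarity(edges_a, w_a, edges_b, w_b):
    """Illustrative dissimilarity for general histograms: rebin both onto
    merged edges, then compare cumulative distributions (earth-mover style)."""
    common = np.unique(np.concatenate([edges_a, edges_b]))
    pa = rebin(edges_a, np.asarray(w_a, float) / np.sum(w_a), common)
    pb = rebin(edges_b, np.asarray(w_b, float) / np.sum(w_b), common)
    widths = np.diff(common)
    return float(np.sum(np.abs(np.cumsum(pa) - np.cumsum(pb)) * widths))

# Two histograms with different numbers of bins and different bin widths
d = histogram_dissimilarity([0, 1, 2, 4], [1, 2, 1], [0, 2, 4], [3, 1])
print(d)
```

A dissimilarity defined this way is zero for identical histograms and increases as mass shifts, which is the property a hierarchical clustering linkage needs.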

Keywords: dissimilarity measure, hierarchical clustering, histograms, symbolic data analysis

Procedia PDF Downloads 149
9855 Poly (Diphenylamine-4-Sulfonic Acid) Modified Glassy Carbon Electrode for Voltammetric Determination of Gallic Acid in Honey and Peanut Samples

Authors: Zelalem Bitew, Adane Kassa, Beyene Misgan

Abstract:

In this study, a sensitive and selective voltammetric method based on a poly(diphenylamine-4-sulfonic acid) modified glassy carbon electrode (poly(DPASA)/GCE) was developed for the determination of gallic acid. The appearance of an irreversible oxidative peak for gallic acid at both the bare GCE and the poly(DPASA)/GCE, with about a three-fold current enhancement and a much reduced potential at the poly(DPASA)/GCE, showed the catalytic property of the modifier towards the oxidation of gallic acid. Under optimized conditions, the adsorptive stripping square wave voltammetric peak current response of the poly(DPASA)/GCE showed linear dependence on gallic acid concentration in the range 5.00 × 10⁻⁷ to 3.00 × 10⁻⁴ mol L⁻¹, with a limit of detection of 4.35 × 10⁻⁹ mol L⁻¹. Spike recovery results of 94.62–99.63%, 95.00–99.80%, and 97.25–103.20% for gallic acid in honey, raw peanut, and commercial peanut butter samples, respectively, interference recovery results with less than 4.11% error in the presence of uric acid and ascorbic acid, a lower LOD, and a relatively wider dynamic range than most previously reported methods validated the potential applicability of the method based on the poly(DPASA)/GCE for the determination of gallic acid in real samples, including honey and peanut samples.
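The linear range and limit of detection quoted above come from a calibration line of peak current against concentration, with the LOD typically taken as 3σ of the blank divided by the slope. A minimal sketch follows; the calibration points and blank standard deviation are invented illustrative values, not the study's data.

```python
import numpy as np

# Hypothetical calibration data: gallic acid concentration (mol/L) vs peak current (uA)
conc = np.array([5e-7, 1e-6, 5e-6, 1e-5, 5e-5, 1e-4, 3e-4])
current = np.array([0.021, 0.040, 0.19, 0.41, 2.0, 4.1, 12.0])

# Least-squares calibration line: i = slope * c + intercept
slope, intercept = np.polyfit(conc, current, 1)

# Limit of detection from the common 3-sigma criterion,
# using an assumed blank standard deviation (illustrative value)
sd_blank = 5.8e-5  # uA
lod = 3 * sd_blank / slope
print(f"slope = {slope:.3e} uA L/mol, LOD = {lod:.2e} mol/L")
```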

Keywords: gallic acid, diphenylamine sulfonic acid, adsorptive anodic stripping square wave voltammetry, honey, peanut

Procedia PDF Downloads 56
9854 Polish Authorities Towards Refugee Crises

Authors: Klaudia Gołębiowska

Abstract:

This article analyzes the actions of Poland's ruling party in the face of two refugee crises, which emerged almost one after the other within a few months. The first concerned irregular migrants from various countries, including the Middle East, seeking to cross the Polish border from the territory of Belarus. The second was caused by Russia's full-scale invasion of Ukraine. I aim to show the evolution of the discourse and law towards immigrants and refugees under the party Prawo i Sprawiedliwość (PiS, Law and Justice), which has been in power in Poland since 2015. The authorities radically changed their anti-immigrant discourse in response to the exodus of civilians from Ukraine. The research questions are the following: What were the roots of the refugee crises in Poland in 2021 and 2022? What legal or illegal measures were taken in Poland to deal with the refugee crises? The methods used are qualitative source analysis and process tracing. From the first days of the war in Ukraine, not only was aid organised for Ukrainians, but they were also given access to public services and education. All refugees were granted temporary international protection. At the same time, the basic physiological needs of those on the Polish-Belarusian border were ignored. Moreover, illegal pushbacks were used against those coming mainly from the Middle East, pushing them into the territory of Belarus, where they were often subjected to torture and inhumane treatment. The Polish government justified such treatment on the grounds that these people were part of a 'hybrid war' waged by Russia and Belarus using migrants. Only Ukrainians were treated as 'real' refugees in the analyzed crises at the Polish borders.

Keywords: refugee, irregular migrants, hybrid war, migrants

Procedia PDF Downloads 49
9853 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics

Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima

Abstract:

This study outlines how to develop a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering for defining the membership functions, and (3) the Adaptive-Neuro Fuzzy Inference System (ANFIS), a combination of fuzzy inference and an artificial neural network. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using Solar Irradiation, Module Efficiency, and Performance Ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were consequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors slightly lower than those of the Mamdani type. While ANFIS is superior in terms of error minimization, it could generate questionable solutions, e.g., negative GWP values of the solar PV system when the inputs were all at the upper end of their range. This shows that the applicability of ANFIS models depends strongly on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point beyond which increasing the input values no longer improves the GWP and LCOE.
In the absence of data that could be used for calibration, conventional FIS presents a knowledge-based model that could be used for prediction. In the PV case study, conventional FIS generated errors that are just slightly higher than those of DAFIS. The inherent complexity of a Life Cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee a more accurate result compared to those generated by the Life Cycle Methodology, it does provide a relatively simpler way of generating knowledge- and data-based estimates that could be used during the initial design of a system.
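To make the fuzzy-inference idea concrete, here is a minimal zero-order Sugeno-type FIS with two inputs and four rules. The membership ranges and rule consequents are illustrative guesses chosen only to show the mechanics (firing strengths, weighted average of rule outputs); they are not calibrated values from the study.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def sugeno_gwp(irradiation, efficiency):
    """Minimal zero-order Sugeno FIS: two inputs, four rules.
    All ranges and consequents below are illustrative assumptions."""
    # Input memberships ("low"/"high"); assumed input ranges
    irr_low = tri(irradiation, 800, 1200, 1600)
    irr_high = tri(irradiation, 1200, 1600, 2000)
    eff_low = tri(efficiency, 0.10, 0.15, 0.20)
    eff_high = tri(efficiency, 0.15, 0.20, 0.25)

    # Rule firing strengths (product t-norm) and constant consequents (g CO2-eq/kWh)
    rules = [
        (irr_low * eff_low, 90.0),    # low irradiation, low efficiency -> high GWP
        (irr_low * eff_high, 70.0),
        (irr_high * eff_low, 60.0),
        (irr_high * eff_high, 40.0),  # high irradiation, high efficiency -> low GWP
    ]
    w = np.array([r[0] for r in rules])
    z = np.array([r[1] for r in rules])
    return float(np.sum(w * z) / np.sum(w))

print(sugeno_gwp(1400.0, 0.18))  # GWP estimate for a mid-range PV system
```

In DAFIS the membership functions would come from data clustering, and in ANFIS both the memberships and the consequents would be tuned by gradient descent; the inference step itself stays the same.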

Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks

Procedia PDF Downloads 150
9852 Education for Sustainability: Implementing a Place-Based Watershed Science Course for High School Students

Authors: Dina L. DiSantis

Abstract:

Development and implementation of a place-based watershed science course for high school students will prove to be a valuable experience for both student and teacher. By having students study and assess the watershed dynamics of a local stream, they will better understand how human activities affect this valuable resource. It is important that students gain tangible skills that will help them understand water quality analysis and the importance of preserving our Earth's water systems. Having students participate in real-world practices is the optimal learning environment and can offer students a genuine learning experience, cultivating a knowledge of place while promoting education for sustainability. Additionally, developing a watershed science course for high school students will give them a hands-on approach to studying science, which is both beneficial and more satisfying to students. When students conduct their own research and collect and analyze data, they will be intimately involved in addressing water quality issues and solving critical water quality problems. By providing students with activities that take place outside the confines of the indoor classroom, teachers give them the opportunity to gain an appreciation of the natural world. Place-based learning provides students with problem-solving skills for everyday situations while enhancing skills of inquiry. An overview of a place-based watershed science course and its impact on student learning will be presented.

Keywords: education for sustainability, place-based learning, watershed science, water quality

Procedia PDF Downloads 138
9851 Mathematical Modeling to Reach Stability Condition within Rosetta River Mouth, Egypt

Authors: Ali Masria, Abdelazim Negm, Moheb Iskander, Oliver C. Saavedra

Abstract:

Estuaries play an important role in exchanging water and providing a navigational pathway for ships. These zones are very sensitive and vulnerable to any interventions in coastal dynamics, and most of these inlets experience coastal problems such as severe erosion and accretion. The Rosetta promontory, Egypt, is an example of this environment. It suffers from many coastal problems, such as erosion along the coastline and siltation inside the inlet, due to the lack of water and sediment resources as a side effect of constructing the Aswan High Dam. The shoaling of the inlet hinders the navigation of fishing boats, negatively impacts estuarine and salt marsh habitat, and decreases the efficiency of the cross section in transferring flow to the sea during emergencies. This paper aims to reach a new condition of stability for the Rosetta promontory by using coastal measures to control the sediment entering, and causing shoaling inside, the inlet. These coastal measures include modifying the inlet cross section using centered jetties and eliminating the coastal dynamics at the entrance using boundary jetties. This target is achieved by using the hydrodynamic Coastal Modeling System (CMS). Extensive field data collection (hydrographic surveys, wave data, tide data, and bed morphology) was used to build and calibrate the model. About 20 scenarios were tested to reach a suitable solution that mitigates the coastal problems at the inlet. The results show that a 360 m jetty on the eastern bank, with a system of sand bypassing from the lee side of the jetty, can stabilize the estuary.

Keywords: Rosetta promontory, erosion, sedimentation, inlet stability

Procedia PDF Downloads 571
9850 Space Debris Mitigation: Solutions from the Dark Skies of the Remote Australian Outback Using a Proposed Network of Mobile Astronomical Observatories

Authors: Muhammad Akbar Hussain, Muhammad Mehdi Hussain, Waqar Haider

Abstract:

There are tens of thousands of undetected and uncatalogued pieces of space debris in Low Earth Orbit (LEO). Not only are they difficult to detect and track, their sheer number puts active satellites and humans in orbit around Earth in danger. With the entry of more governments and private companies into harnessing the Earth's orbit for communication, research, and military purposes, there is an ever-increasing need not only to detect and catalogue these pieces of space debris; it is time to take measures to remove them and clean up the space around Earth. Current optical and radar-based Space Situational Awareness initiatives are useful mostly for detecting and cataloguing larger pieces of debris, mainly for avoidance measures. Pieces smaller than 10 cm are in a relatively dark zone, yet they are deadly and capable of destroying satellites and human missions. A network of mobile observatories, connected to each other in real time and working in unison as a single instrument, may be able to detect small pieces of debris and achieve effective triangulation, helping to create a comprehensive database of their trajectories and parameters to the highest level of precision. This data may enable ground-based laser systems to help deorbit individual debris. Such a network of observatories can join current efforts in the detection and removal of space debris in Earth's orbit.
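The triangulation step the network would perform can be illustrated in its simplest form: two observatories at known positions each measure a bearing to the same object, and the object lies at the intersection of the two sight lines. The 2-D sketch below conveys the geometry; real debris tracking needs 3-D geometry, time-synchronized observations, and orbit determination on top of this.

```python
import numpy as np

def triangulate(p1, bearing1, p2, bearing2):
    """Locate a target from two observatories at known 2-D positions, each
    measuring a bearing (radians, from the +x axis). Simplified sketch of
    the triangulation a networked pair of observatories would perform."""
    d1 = np.array([np.cos(bearing1), np.sin(bearing1)])
    d2 = np.array([np.cos(bearing2), np.sin(bearing2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the line parameters t1, t2
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

# Two observatories 100 km apart (units: km), both sighting the same debris piece
target = triangulate([0.0, 0.0], np.arctan2(400.0, 300.0),
                     [100.0, 0.0], np.arctan2(400.0, 200.0))
print(target)  # recovered position of the debris
```

The precision of the fix grows with the baseline between observatories, which is one argument for spreading a mobile network across the remote outback.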

Keywords: space debris, low earth orbit, mobile observatories, triangulation, seamless operability

Procedia PDF Downloads 145
9849 Modeling and Characterization of Organic LED

Authors: Bouanati Sidi Mohammed, N. E. Chabane Sari, Mostefa Kara Selma

Abstract:

It is well known that organic light-emitting diodes (OLEDs) are attracting great interest in the display technology industry due to their many advantages, such as low manufacturing cost, large-area electroluminescent displays, and various emission colors, including white light. Recently, there has been much progress in understanding the device physics of OLEDs and their basic operating principles. In OLEDs, light emission results from the recombination of electrons and holes, injected from the cathode and anode, in the light-emitting layer. To improve luminescence efficiency, holes and electrons must be supplied abundantly and in balance, and must recombine swiftly in the emitting layer. The aim of this paper is to model a polymer LED and a small-molecule OLED in order to study their electrical and optical characteristics. The first simulated structure is a monolayer device, typically consisting of the poly(2-methoxy-5-(2'-ethyl)hexoxy-phenylenevinylene) (MEH-PPV) polymer sandwiched between an anode, usually an indium tin oxide (ITO) substrate, and a cathode such as Al. In the second structure, we replace MEH-PPV with tris(8-hydroxyquinolinato) aluminum (Alq3). We choose MEH-PPV because of its solubility in common organic solvents, in conjunction with a low operating voltage for light emission and relatively high conversion efficiency, and Alq3 because it is one of the most important host materials used in OLEDs. In this simulation, the Poole-Frenkel-like mobility model and the Langevin bimolecular recombination model have been used as the transport and recombination mechanisms. These models are enabled in the ATLAS-SILVACO software. The influence of doping and thickness on the I-V characteristics and luminescence is reported.
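Both named models have simple closed forms: a Poole-Frenkel-like mobility grows as the exponential of the square root of the electric field, and Langevin recombination scales the electron-hole product by a coefficient built from the mobilities and permittivity. The sketch below evaluates both; all parameter values (mu0, E0, eps_r, carrier densities) are illustrative assumptions, not fitted OLED values.

```python
import numpy as np

Q = 1.602e-19      # elementary charge (C)
EPS0 = 8.854e-12   # vacuum permittivity (F/m)

def poole_frenkel_mobility(E, mu0=1e-10, E0=2.0e7):
    """Poole-Frenkel-like field-dependent mobility:
    mu(E) = mu0 * exp(sqrt(E / E0)).
    mu0 (m^2/Vs) and characteristic field E0 (V/m) are illustrative values."""
    return mu0 * np.exp(np.sqrt(E / E0))

def langevin_recombination(n, p, mu_n, mu_p, eps_r=3.0):
    """Langevin bimolecular recombination rate R = gamma * n * p,
    with gamma = q * (mu_n + mu_p) / (eps_r * eps0)."""
    gamma = Q * (mu_n + mu_p) / (eps_r * EPS0)
    return gamma * n * p

E = 1e8  # electric field in the emitting layer (V/m), illustrative
mu = poole_frenkel_mobility(E)
R = langevin_recombination(n=1e23, p=1e23, mu_n=mu, mu_p=mu)
print(f"mu = {mu:.2e} m^2/Vs, R = {R:.2e} m^-3 s^-1")
```

The field dependence is what makes the I-V characteristics of such devices strongly superlinear: as the bias (and hence the internal field) rises, the mobility itself rises.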

Keywords: organic light emitting diode, polymer light emitting diode, organic materials, hexoxy-phenylenevinylene

Procedia PDF Downloads 540
9848 Benchmarking Machine Learning Approaches for Forecasting Hotel Revenue

Authors: Rachel Y. Zhang, Christopher K. Anderson

Abstract:

A critical aspect of revenue management is a firm's ability to predict demand as a function of price. Historically, hotels have used simple time series models (regression and/or pick-up based models) owing to the complexities of trying to build causal models of demand. Machine learning approaches are slowly attracting attention owing to their flexibility in modeling relationships. This study provides an overview of approaches to forecasting hospitality demand, focusing on the opportunities created by machine learning approaches, including K-nearest-neighbors, support vector machine, regression tree, and artificial neural network algorithms. The out-of-sample performances of the above approaches to forecasting hotel demand are illustrated using a proprietary sample of market-level (24 properties) transactional data for Las Vegas, NV. Causal predictive models can be built and evaluated owing to the availability of market-level (versus firm-level) data. This research also compares and contrasts the accuracy of firm-level models (i.e., predictive models for hotel A using only hotel A's data) with models using market-level data (prices, review scores, location, chain scale, etc., for all hotels within the market). The proposed models will be valuable for predicting hotel revenue given the basic characteristics of a hotel property, and can be applied to performance evaluation for an existing hotel. The findings will unveil the features that play key roles in a hotel's revenue performance, which would have considerable potential usefulness in both revenue prediction and evaluation.
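The out-of-sample benchmarking idea can be sketched with the simplest of the listed learners, K-nearest-neighbors, on synthetic price/demand data; the data below are invented for illustration (the actual study uses a proprietary Las Vegas sample), and a real benchmark would also standardize the features and tune k.

```python
import numpy as np

def knn_forecast(X_train, y_train, X_test, k=5):
    """K-nearest-neighbors regression: predict demand as the mean of the
    k most similar historical observations (Euclidean distance)."""
    preds = []
    for x in X_test:
        dist = np.linalg.norm(X_train - x, axis=1)
        idx = np.argsort(dist)[:k]
        preds.append(y_train[idx].mean())
    return np.array(preds)

# Synthetic data: demand falls with price and rises with review score
rng = np.random.default_rng(42)
price = rng.uniform(80, 300, 500)
review = rng.uniform(3.0, 5.0, 500)
demand = 200 - 0.5 * price + 30 * review + rng.normal(0, 5, 500)

X = np.column_stack([price, review])
X_train, X_test = X[:400], X[400:]
y_train, y_test = demand[:400], demand[400:]

pred = knn_forecast(X_train, y_train, X_test, k=5)
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
naive_rmse = np.sqrt(np.mean((y_train.mean() - y_test) ** 2))
print(f"kNN RMSE = {rmse:.1f}, naive-mean RMSE = {naive_rmse:.1f}")
```

Holding out a test set, as done here, is the out-of-sample evaluation the abstract refers to; the same train/test split would be reused across all four learners to make the comparison fair.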

Keywords: hotel revenue, k-nearest-neighbors, machine learning, neural network, prediction model, regression tree, support vector machine

Procedia PDF Downloads 117
9847 Spatiotemporal Modeling of Under-Five Mortality and Associated Risk Factors in Ethiopia

Authors: Melkamu A. Zeru, Aweke A. Mitiku, Endashaw Amuka

Abstract:

Background: Under-five mortality is the likelihood that a child will die before reaching exactly 5 years of age, expressed per 1,000 live births. Exploring the spatial distribution and identifying the temporal pattern are important for reducing under-five child mortality globally, including in Ethiopia. Thus, this study aimed to identify the risk factors of under-five mortality and its spatiotemporal variation across Ethiopian administrative zones. Method: This study used the 2000-2016 Ethiopian Demographic and Health Survey (EDHS) data, which were collected using a two-stage sampling method. A total weighted sample of 43,029 under-five child mortality records (10,873 in 2000, 9,861 in 2005, 11,654 in 2011, and 10,641 in 2016) was used. A space-time dynamic model was employed to account for spatial and time effects across 65 administrative zones in Ethiopia. Results: The general nesting spatial-temporal dynamic model showed a significant space-time interaction effect [γ = -0.1444, 95% CI (-0.6680, -0.1355)] for under-five mortality. Increases in the percentages of mothers' illiteracy [β = 0.4501, 95% CI (0.2442, 0.6559)], non-vaccination [β = 0.7681, 95% CI (0.5683, 0.9678)], and unimproved water [β = 0.5801, 95% CI (0.3793, 0.7808)] increased death rates for under-five children, while increased percentages of contraceptive use [β = -0.6609, 95% CI (-0.8636, -0.4582)] and more than 4 ANC visits [β = -0.1585, 95% CI (-0.1812, -0.1357)] contributed to decreased under-five mortality rates at the zone level in Ethiopia. Conclusions: Even though the mortality rate for children under five has decreased over time, it remains high in different zones of Ethiopia, and there is spatial and temporal variation in under-five mortality among zones. Therefore, it is very important to consider spatial neighbourhoods and temporal context when aiming to reduce under-five mortality.
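A space-time dynamic model of this kind combines a spatial lag (neighboring zones, current period) with a space-time lag (neighboring zones, previous period), both built from a row-normalized spatial weight matrix W. The sketch below constructs W and the two lag terms for a toy 4-zone adjacency with invented rates; the actual model covers the 65 Ethiopian zones and is estimated rather than merely computed.

```python
import numpy as np

# Toy adjacency for 4 zones (1 = shares a border); illustrative only,
# standing in for the 65-zone Ethiopian administrative map
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Row-normalized spatial weight matrix W (each row sums to 1)
W = A / A.sum(axis=1, keepdims=True)

# Mortality rates per zone in two consecutive periods (illustrative numbers)
y_prev = np.array([80.0, 95.0, 70.0, 110.0])
y_curr = np.array([75.0, 90.0, 68.0, 100.0])

spatial_lag = W @ y_curr     # neighbors' rates, same period (enters as rho * W y_t)
space_time_lag = W @ y_prev  # neighbors' rates, previous period (gamma * W y_{t-1})
print(spatial_lag, space_time_lag)
```

The significant negative γ reported above attaches to the second of these terms: high neighboring-zone mortality in the previous period is associated with lower mortality in the current period, once the other effects are controlled for.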

Keywords: under-five children mortality, space-time dynamic, spatiotemporal, Ethiopia

Procedia PDF Downloads 16
9846 Tradition and Modernity in Translation Studies: The Case of Undergraduate and Graduate Programs at Unicamp, Brazil

Authors: Erica Lima

Abstract:

In Brazil, considering the young age of translation studies, it can be argued that the University of Campinas is traditionally an important place for graduate studies in translation. Its story begins with the accreditation of the Master's program in 1987 and of the doctoral program in 1993, within the Graduate Program in Applied Linguistics. Since the beginning, the program has boasted cutting-edge research, with theoretical reflections on various aspects and with different methodological trends. However, while graduate studies have grown continuously, the same cannot be observed in the undergraduate degree program. Currently, there are only a few courses in Translation Theory and Practice, which does not seem to respond to student aspirations. The objective of this paper is to present the characteristics of the university's graduate program as something productive, considering the concern with relating research to the historical moment in which we are living, with research conducted in a socially engaged environment and committed to its ethical and social impact, as well as to question the paths of the undergraduate program. The objective is also to discuss and propose changes, considering the limited scope currently achieved. In light of the information age, in which we face an avalanche of information, we believe that the training of translators at the undergraduate level should be reviewed, with the goal of retracing current paths and following others that are consistent with our historical period, marked by the virtual and the real, by the shuffling of borders and languages, and by the need for new language policies, greater inclusion, and more acceptance of others. We conclude that we need new proposals for the development of the translator in the undergraduate program, and we also present suggestions to be implemented in the graduate program.

Keywords: graduate Brazilian program, undergraduate Brazilian program, translator’s education, Unicamp

Procedia PDF Downloads 317
9845 Impact of Social Distancing on the Correlation Between Adults’ Participation in Learning and Acceptance of Technology

Authors: Liu Yi Hui

Abstract:

The COVID-19 pandemic in 2020 has affected all aspects of life globally, with social distancing and quarantine orders causing turmoil and learning in community colleges being temporarily paused. In fact, this is the first time that adult education has faced such a severe challenge, and it forces researchers to reflect on the impact of pandemics on adult education and on ways to respond. Distance learning appears to be one of the pedagogical tools capable of dealing with the interpersonal isolation and social distancing caused by the pandemic. This research aims to examine whether the impact of social distancing during COVID-19 will lead to increased acceptance of technology and, subsequently, an increase in adults' willingness to participate in distance learning. The hypothesis that social distancing and the desire to participate in distance learning affect learners' acceptance of technology is investigated. Teachers' participation in distance education and acceptance of technology are used as moderating variables in the relationships among learners' “social distancing,” “participation in distance learning,” and “acceptance of technology.” A questionnaire survey was conducted over a period of twelve months among teachers and learners at all community colleges in Taiwan who enrolled in a basic unit course. Community colleges were separated using multi-stage cluster sampling, with their locations (metropolitan, non-urban, south, and east) as criteria. Using the G*power software, 660 samples were selected and analyzed. The results show that through appropriate pedagogical strategies or teachers' own acceptance of technology, adult learners' willingness to participate in distance learning can be influenced. A diverse model of participation can be developed, improving adult education institutions' ability to plan curricula flexibly so as to avoid the risks associated with epidemic diseases.

Keywords: social distancing, adult learning, community colleges, technology acceptance model

Procedia PDF Downloads 124
9844 The Scenario Analysis of Shale Gas Development in China by Applying Natural Gas Pipeline Optimization Model

Authors: Meng Xu, Alexis K. H. Lau, Ming Xu, Bill Barron, Narges Shahraki

Abstract:

As an emerging unconventional energy source, shale gas has been an economically viable step towards a cleaner energy future in the U.S. China also has shale resources, estimated to be potentially the largest in the world. In addition, China has enormous unmet demand for a clean alternative to coal. Nonetheless, the geological complexity of China's shale basins and issues of water scarcity potentially impose serious constraints on shale gas development in China. Further, even if China could replicate the U.S. shale gas boom to a significant degree, China faces the problem of transporting the gas efficiently overland with its limited pipeline network throughput capacity and coverage. The aim of this study is to identify the potential bottlenecks in China's gas transmission network, as well as to examine how shale gas development affects particular supply locations and demand centers. We examine this through three scenarios projecting domestic shale gas supply by 2020: optimistic, medium, and conservative, taking references from the International Energy Agency's (IEA's) projections and China's shale gas development plans. Separately, we project gas demand at the provincial level, since shale gas will have a more significant impact regionally than nationally. To quantitatively assess each shale gas development scenario, we formulated a gas pipeline optimization model. We used ArcGIS to generate the connectivity parameters and pipeline segment lengths. Other parameters were collected from provincial twelfth five-year plans and the “China Oil and Gas Pipeline Atlas”. The multi-objective optimization model is implemented in GAMS and MATLAB. It aims to minimize the demand that cannot be met, while simultaneously minimizing total gas supply and transmission costs.
The results indicate that, even if the primary objective is to meet the projected gas demand rather than cost minimization, there is a shortfall of 9% in meeting total demand under the medium scenario. Comparing the results between the optimistic and medium shale gas supply scenarios, almost half of the shale gas produced in Sichuan province and Chongqing cannot be transmitted out by pipeline. On the demand side, the gas demand gaps of Henan province and Shanghai could be filled by as much as 82% and 39%, respectively, with increased shale gas supply. To conclude, the pipeline network in China is currently not sufficient to meet the projected natural gas demand in 2020 under the medium and optimistic scenarios, indicating the need for substantial capacity expansion of some of the existing network and the importance of constructing new pipelines from particular supply sites to demand sites. If the pipeline constraint is overcome, the gas demand gaps of Beijing, Shanghai, Jiangsu, and Henan could potentially be filled, and China could thereby reduce its dependency on LNG imports by almost 25% under the optimistic scenario.
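The two objectives, minimizing unmet demand first and transmission cost second, can be combined in a single linear program by attaching a large penalty to unmet-demand slack variables. The toy instance below sketches this on a 2-source, 2-demand network; node names, capacities, and costs are illustrative inventions, not the study's GAMS/MATLAB model or data.

```python
import numpy as np
from scipy.optimize import linprog

# Flow variables: x = [S1->D1, S1->D2, S2->D1, S2->D2, unmet_D1, unmet_D2]
PENALTY = 1000.0  # cost per unit of unmet demand, dominating transmission costs
cost = np.array([2.0, 3.0, 4.0, 1.0, PENALTY, PENALTY])

# Supply limits (inequalities): flow out of each source <= its capacity
A_ub = np.array([[1, 1, 0, 0, 0, 0],   # source S1 capacity
                 [0, 0, 1, 1, 0, 0]])  # source S2 capacity
b_ub = np.array([100.0, 60.0])

# Demand balance (equalities): delivered flow + unmet slack = demand
A_eq = np.array([[1, 0, 1, 0, 1, 0],   # demand center D1
                 [0, 1, 0, 1, 0, 1]])  # demand center D2
b_eq = np.array([70.0, 80.0])

# Pipeline throughput capacities as variable bounds; S1->D1 is a bottleneck
bounds = [(0, 50), (0, 100), (0, 60), (0, 100), (0, None), (0, None)]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x = res.x
unmet = x[4] + x[5]
print(f"flows = {np.round(x[:4], 1)}, unmet demand = {unmet:.1f}")
```

With total supply exceeding total demand here, the optimum leaves no demand unmet and routes around the bottleneck; shrinking the pipeline bounds is how a shortfall like the 9% found under the medium scenario shows up in the slack variables.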

Keywords: energy policy, energy systematic analysis, scenario analysis, shale gas in China

Procedia PDF Downloads 266
9843 Oracle JDE Enterprise One ERP Implementation: A Case Study

Authors: Abhimanyu Pati, Krishna Kumar Veluri

Abstract:

The paper intends to bring out the real-life experience encountered during the actual implementation of a large-scale Tier-1 Enterprise Resource Planning (ERP) system in a multi-location, discrete manufacturing organization in India involved in the manufacture of auto components and aggregates. The business complexities prior to the implementation of ERP included multiple products with hierarchical product structures, geographically distributed plant locations with disparate business practices, lack of inter-plant broadband connectivity, disparate legacy applications for different business functions, and non-standardized codification of products, machines, employees, and accounts, among others. The manufacturing environment, meanwhile, consisted of processes such as Assemble-to-Order (ATO), Make-to-Stock (MTS), and Engineer-to-Order (ETO), with a mix of discrete and process operations. The paper highlights various business plan areas and concerns prior to the implementation, with specific focus on strategic issues and objectives. Subsequently, it deals with the complete process of ERP implementation, from strategic planning through project planning and resource mobilization to program execution. The step-by-step process provides a very good learning opportunity regarding the implementation methodology. At the end, various organizational challenges and lessons learned are presented, which will act as guidelines and a checklist for organizations seeking to successfully align and implement ERP and achieve their business objectives.

Keywords: ERP, ATO, MTS, ETO, discrete manufacturing, strategic planning

Procedia PDF Downloads 228