Search results for: order picking process
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25615

18505 Analyzing the Effectiveness of Communication Practices and Processes within Project-Based Firms

Authors: Paul Saah, Charles Mbohwa, Nelson Sizwe Madonsela

Abstract:

Effective communication is the lifeblood of project-based businesses: the capacity to deliver projects on schedule, within budget, and to the client's satisfaction depends on it. The aim of this study is to evaluate the efficacy of communication practices and processes inside project-based organisations, in order to pinpoint areas for development and shed light on the crucial role that communication plays in project success. The methodology combines a careful review of the relevant literature with a conceptual analysis of the subject in order to analyse concepts and gain a firmer grasp of their theoretical basis. Data from a varied sample of project-based businesses spanning different industries and sizes were collected via document analysis. The relationship between communication practices and processes was investigated in connection with key performance measures such as project outcomes, client satisfaction, and team dynamics. According to the study's findings, project-based businesses that adopt effective communication practices and procedures experience fewer unfavourable incidents, stronger integration and coordination, clarity of purpose, and faster problem resolution. Failing to adopt effective communication practices and procedures, by contrast, results in problems including project derailment from the schedule, failure to meet goals, inefficient use of existing resources, and failure to meet organisational goals. Therefore, as project-based enterprises continue to play a crucial part in today's dynamic business scene, optimising their communication practices and procedures is crucial for sustainable growth and competitive advantage.

Keywords: effective communication, project-based firms, communication practices, project success, communication strategies

Procedia PDF Downloads 58
18504 Comparative Study to Evaluate the Efficacy of Control Criterion in Determining Consolidation Scope in the Public Sector

Authors: Batool Zarei

Abstract:

This study aims to answer the question of whether the control criterion, with its two elements of power and benefit, introduced as the 'control criterion of consolidation scope' in national and international accounting standards for the public sector (and also the private sector), is efficient enough. The methodology of this study is comparative, and its results are significantly generalizable, owing to the importance of the sample of countries studied. The findings state that, in spite of the pervasive use of the control criterion (comprising the two elements of power and benefit), the criteria for determining the existence of control in public sector accounting standards are not efficient enough to determine the consolidation scope of whole-of-government financial statements in a way that meets the decision-making and accountability needs of managers, policy makers, and supervisors, especially parliament. Therefore, the researcher believes that for determining consolidation scope in the public sector, it is better to pay attention not only to the economic view but also to budgetary, legal, and statistical concepts, as well as to practical and financial risk, and to define indicators for proving the existence of control (power and benefit) that include accountability relationships (budgetary relation, legal form, and nature of activity). These findings also reveal the necessity of passing comprehensive public financial management (PFM) legislation in order to redefine the characteristics of public sector entities and the scope of whole-of-government financial statements, and of reviewing the duties of statistics organizations and central banks in preparing government financial statistics and national accounts, in order to achieve sustainable development and resilient economy goals.

Keywords: control, consolidation scope, public sector accounting, government financial statistics, resilient economy

Procedia PDF Downloads 254
18503 Evotrader: Bitcoin Trading Using Evolutionary Algorithms on Technical Analysis and Social Sentiment Data

Authors: Martin Pellon Consunji

Abstract:

Due to the rise in popularity of Bitcoin and other crypto assets as a store of wealth and speculative investment, there is an ever-growing demand for automated trading tools, such as bots, to gain an advantage over the market. Traditionally, trading in the stock market was done by professionals with years of training who understood patterns and exploited market opportunities in order to gain a profit. Nowadays, however, a larger portion of market participants are at minimum aided by market-data processing bots, which can generally generate more stable signals than the average human trader. The rise in trading bot usage can be attributed to the inherent advantages that bots have over humans in processing large amounts of data, their lack of emotions such as fear or greed, and their ability to predict market prices using past data and artificial intelligence; hence, a growing number of approaches have been brought forward to tackle this task. The general limitation of these approaches, however, is that limited historical data does not always determine the future, and that many market participants are still human, emotion-driven traders. Moreover, developing markets such as the cryptocurrency space have even less historical data to interpret than most other well-established markets. For this reason, some human traders have gone back to tried-and-tested traditional technical analysis tools for exploiting market patterns and simplifying the broader spectrum of data involved in making market predictions. This paper proposes a method that uses neuro-evolution techniques on both sentiment data and the more traditionally human-consumed technical analysis data in order to obtain a more accurate forecast of future market behavior and account for the way both automated bots and human traders affect the market prices of Bitcoin and other cryptocurrencies. This study's approach uses evolutionary algorithms to automatically develop increasingly improved populations of bots which, by using the latest inflows of market analysis and sentiment data, evolve to efficiently predict future market price movements. The effectiveness of the approach is validated by testing the system in a simulated historical trading scenario and a real Bitcoin market live trading scenario, and by testing its robustness in other cryptocurrency and stock market scenarios. Experimental results over a 30-day period show that this method outperformed the buy-and-hold strategy by over 260% in terms of net profits, even when taking standard trading fees into consideration.
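A minimal sketch in Python of the evolutionary loop described above, under stated assumptions: the toy momentum signal, synthetic sentiment series, and population/mutation settings are illustrative placeholders, not the authors' actual implementation. Each bot is a weight vector combining a technical signal with sentiment, fitness is backtested net profit after fees, and the best half of each generation survives and mutates.

```python
import numpy as np

rng = np.random.default_rng(0)

def backtest(weights, prices, sentiment, fee=0.001):
    """Fitness: net profit of a bot whose buy/sell signal is a weighted
    sum of a momentum indicator and a sentiment score."""
    momentum = np.diff(prices) / prices[:-1]          # simple technical signal
    signal = weights[0] * momentum + weights[1] * sentiment[1:]
    cash, coins = 1.0, 0.0
    for p, s in zip(prices[1:], signal):
        if s > weights[2] and cash > 0:               # buy threshold
            coins, cash = (cash / p) * (1 - fee), 0.0
        elif s < -weights[2] and coins > 0:           # sell threshold
            cash, coins = coins * p * (1 - fee), 0.0
    return cash + coins * prices[-1] - 1.0            # net profit

def evolve(prices, sentiment, pop_size=50, generations=30, mut=0.1):
    pop = rng.normal(size=(pop_size, 3))
    for _ in range(generations):
        fitness = np.array([backtest(w, prices, sentiment) for w in pop])
        elite = pop[np.argsort(fitness)[-pop_size // 2:]]      # keep best half
        children = elite + rng.normal(scale=mut, size=elite.shape)
        pop = np.vstack([elite, children])                     # next generation
    return pop[np.argmax([backtest(w, prices, sentiment) for w in pop])]

# toy data: a random-walk price series and a noisy sentiment series
prices = np.cumprod(1 + rng.normal(0, 0.01, 500)) * 30000
sentiment = rng.normal(0, 1, 500)
best = evolve(prices, sentiment)
print("best bot weights:", best, "profit:", backtest(best, prices, sentiment))
```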

Keywords: neuro-evolution, Bitcoin, trading bots, artificial neural networks, technical analysis, evolutionary algorithms

Procedia PDF Downloads 115
18502 Developing a Decision-Making Tool for Prioritizing Green Building Initiatives

Authors: Tayyab Ahmad, Gerard Healey

Abstract:

Sustainability in the built environment sector is subject to many development constraints. Building projects are developed under different requirements of deliverables, which makes each project unique. For an owner organization involved in a significant building stock, e.g., a higher-education institution, it is important to prioritize some sustainability initiatives over others in order to align sustainable building development with organizational goals. Point-based green building rating tools, i.e., Green Star, LEED, and BREEAM, are becoming increasingly popular and are well-acknowledged worldwide for verifying sustainable development. It is imperative to synthesize a multi-criteria decision-making tool that can capitalize on the point-based methodology of rating systems while customizing the sustainable development of building projects according to the individual requirements and constraints of the client organization. A multi-criteria decision-making tool for the University of Melbourne is developed that builds on the action learning and experience of implementing green buildings at the University of Melbourne. The tool evaluates different sustainable building initiatives based on the framework of the Green Star rating tool of the Green Building Council of Australia. For each sustainability initiative, the decision-making tool makes an assessment based on at least five performance criteria, including the ease with which the initiative can be achieved and its potential to enhance project objectives, reduce life-cycle costs, enhance the University's reputation, and increase confidence in quality construction. The use of a weighted aggregation mathematical model in the proposed tool can play a considerable role in the decision-making process of a green building project by indexing the green building initiatives in terms of organizational priorities. The index value of each initiative is based on its alignment with the key performance criteria. The usefulness of the decision-making tool is validated by conducting structured interviews with some of the key stakeholders involved in the development of sustainable building projects at the University of Melbourne. The proposed tool helps a client organization decide which sustainability initiatives and practices are more important to pursue than others within limited resources.
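A minimal sketch of the weighted-aggregation indexing step. The five criteria mirror those named in the abstract; the weights, the initiatives, and their scores are hypothetical placeholders, not the University of Melbourne's actual values.

```python
CRITERIA = ["ease", "project_objectives", "lifecycle_cost",
            "reputation", "construction_quality"]
WEIGHTS = {"ease": 0.15, "project_objectives": 0.30, "lifecycle_cost": 0.25,
           "reputation": 0.15, "construction_quality": 0.15}

initiatives = {                      # criterion scores on a 1-5 scale (assumed)
    "rainwater_harvesting": [4, 3, 4, 3, 2],
    "high_efficiency_hvac": [2, 5, 5, 3, 4],
    "green_roof":           [3, 2, 2, 5, 2],
}

def index_value(scores):
    """Priority index = weighted sum of criterion scores."""
    return sum(WEIGHTS[c] * s for c, s in zip(CRITERIA, scores))

ranked = sorted(initiatives, key=lambda k: index_value(initiatives[k]),
                reverse=True)
for name in ranked:
    print(f"{name}: {index_value(initiatives[name]):.2f}")
```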

Keywords: higher education institution, multi-criteria decision-making tool, organizational values, prioritizing sustainability initiatives, weighted aggregation model

Procedia PDF Downloads 223
18501 Usage of Biosorbent Material for the Removal of Nitrate from Wastewater

Authors: M. Abouleish, R. Umer, Z. Sara

Abstract:

Nitrate can cause serious environmental and human health problems. Effluent from different industries and excessive use of fertilizers have increased the level of nitrate in ground and surface water. Nitrate can convert to nitrite in the body and, as a result, can lead to methemoglobinemia and cancer. Therefore, different organizations have set standard limits for nitrate and nitrite. The United States Environmental Protection Agency (USEPA) has set a Maximum Contaminant Level Goal (MCLG) of 10 mg N/L for nitrate and 1 mg N/L for nitrite. The removal of nitrate from water and wastewater is very important to ensure the availability of clean water. Different plant materials, such as banana peel, rice hull, and coconut and bamboo shells, have been studied as biosorbents for the removal of nitrates from water. The use of abundantly available plant material as an adsorbent and the lack of energy requirements for the adsorption process make biosorption a sustainable approach. Therefore, in this research, the fruit of the plant was investigated for its ability to act as a biosorbent to remove nitrate from wastewater. The effect of pH on nitrate removal was studied using both the raw and the chemically activated fruit (adsorbent). Results demonstrated that the adsorbent needs to be chemically activated before use to remove nitrate from wastewater. pH did not have a significant effect on the adsorption process, with maximum adsorption of nitrate occurring at pH 4. SEM/EDX results demonstrated that there is no change in the surface of the adsorbent as a result of the chemical activation. Chemical activation of the adsorbent using NaOH increased the removal of nitrate by 6%; therefore, various methods of activating the adsorbent will be investigated to increase the removal of nitrate.

Keywords: biosorption, nitrates, plant material, water and wastewater treatment

Procedia PDF Downloads 144
18500 Comparative Study of the Effects of Process Parameters on the Yield of Oil from Melon Seed (Cococynthis citrullus) and Coconut Fruit (Cocos nucifera)

Authors: Ndidi F. Amulu, Patrick E. Amulu, Gordian O. Mbah, Callistus N. Ude

Abstract:

Comparative analysis of the properties of melon seed, coconut fruit, and their oil yields was carried out in this work using standard AOAC analytical techniques. The results of the analysis revealed that the moisture contents of the samples studied are 11.15% (melon) and 7.59% (coconut), and the crude lipid contents are 46.10% (melon) and 55.15% (coconut). The treatment combinations used (leaching time, leaching temperature, and solute:solvent ratio) showed a significant difference (p < 0.05) in yield between the samples, with melon seed flour having a higher percentage range of oil yield (41.30 - 52.90%) than coconut (36.25 - 49.83%). The physical characterization of the extracted oils was also carried out. The values obtained for refractive index are 1.487 (melon seed oil) and 1.361 (coconut oil), and the viscosities are 0.008 (melon seed oil) and 0.002 (coconut oil). The chemical analysis of the extracted oils shows acid values of 1.00 mg NaOH/g oil (melon oil) and 10.050 mg NaOH/g oil (coconut oil), and saponification values of 187.00 mg KOH/g (melon oil) and 183.26 mg KOH/g (coconut oil). The iodine value of the melon oil is 75.00 mg I2/g, and that of the coconut oil is 81.00 mg I2/g. The standard statistical package Minitab version 16.0 was used for the regression analysis and analysis of variance (ANOVA), and the same software was used to optimize the leaching process. Both samples gave their highest oil yields at the same optimal conditions. The optimal conditions to obtain the highest oil yields, ≥ 52% (melon seed) and ≥ 48% (coconut), are a solute-solvent ratio of 40 g/ml, a leaching time of 2 hours, and a leaching temperature of 50 °C. Both samples studied have the potential of yielding oil, with melon seed giving the higher yield.
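A minimal sketch, in Python, of the kind of regression step the abstract performs in Minitab: fit a first-order model of oil yield against the three leaching factors over a small factorial design and pick the best predicted run. The design points and yields below are illustrative placeholders, not the authors' measured data.

```python
import numpy as np
from itertools import product

# 2-level factorial-style design: leaching time (h), temperature (C), ratio (g/ml)
design = np.array(list(product([1, 2], [30, 50], [20, 40])), dtype=float)
yields = np.array([41.3, 44.0, 45.1, 48.9, 46.2, 49.5, 50.3, 52.9])  # % oil

# fit a first-order model  y = b0 + b1*t + b2*T + b3*r  by least squares
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, yields, rcond=None)
print("intercept and effects (time, temp, ratio):", coef.round(3))

# pick the candidate run with the highest predicted yield
pred = X @ coef
print("best run:", design[np.argmax(pred)], "predicted yield:", pred.max().round(1))
```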

Keywords: coconut, melon, optimization, processing

Procedia PDF Downloads 434
18499 Investigation of Mechanical Properties and Positron Annihilation Lifetime Spectroscopy of Acrylonitrile Butadiene Styrene/Polycarbonate Blends

Authors: Ayman M. M. Abdelhaleem, Mustafa Gamal Sadek, Kamal Reyad, Montasser M. Dewidar

Abstract:

The main objective of this research is to study the effect of adding polycarbonate (PC) to pure acrylonitrile butadiene styrene (ABS) using the injection moulding process. The PC was mixed mechanically with ABS at 10%, 20%, 30%, 40%, and 50% by weight. The mechanical properties of pure ABS reinforced with PC were investigated using tensile, impact, hardness, and wear tests. The results showed that, by adding 10%, 20%, 30%, 40%, and 50% wt. of PC to the pure ABS, the ultimate tensile strength increased from 55 N/mm2 for neat ABS to 57 N/mm2 (i.e., 3.63%), 60 N/mm2 (i.e., 9.09%), 63 N/mm2 (i.e., 14.54%), 66 N/mm2 (i.e., 20%), and 69 N/mm2 (i.e., 25.45%), respectively. Test results also revealed improvements in Young's modulus of nearly 5.72% by adding 10% of PC to ABS, 16.74% by adding 20%, 23.34% by adding 30%, and 27.75% by adding 40%, with no further increase at 50%. The impact test results showed that, with increasing PC content, the impact strength first decreased and then increased gradually: it decreased rapidly in the 0% to 10% PC range and increased again at 20%, 30%, 40%, and 50% PC. The hardness test results, using a Shore D tester, showed that as the PC content increased, the hardness increased from 76 for pure ABS to 80 for 10% PC, decreased to 79 for 20% PC, and then returned to 80 for 30%, 40%, and 50% PC. Wear test results showed that PC improves the wear resistance of ABS/PC blends. Positron annihilation lifetime spectroscopy showed that, with an increase of PC in the ABS/PC blends, there is a slight decrease in free volume size and an increase in tensile strength due to good adhesion between the PC and the ABS matrix, which acts as an advantage in the polymer matrix.
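A quick arithmetic check, in Python, of the tensile-strength gains quoted above; the strength values are taken directly from the abstract, and small rounding differences aside, the printed percentages reproduce the reported ones.

```python
neat = 55  # N/mm^2, ultimate tensile strength of pure ABS
blends = {10: 57, 20: 60, 30: 63, 40: 66, 50: 69}  # wt.% PC -> UTS (N/mm^2)

for pc, uts in blends.items():
    gain = (uts - neat) / neat * 100      # percentage increase over neat ABS
    print(f"{pc}% PC: {uts} N/mm^2 (+{gain:.2f}%)")
```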

Keywords: ABS, PC, injection molding process, mechanical properties, lifetime spectroscopy

Procedia PDF Downloads 67
18498 Translation Skills and Language Acquisition

Authors: Frieda Amitai

Abstract:

The field of Translation Studies includes both descriptive and applied aspects, one of which is developing curricula. Within this topic there are theories dealing with curricula aimed at translator training, and theories meant to explore teaching translation as a means through which awareness of language is developed in order to enhance language knowledge. An example of the latter is a unique study program in Israeli high schools, the Teaching Translation Skills Program (TTSP). This study program has been taught in Israel for more than two decades and is aimed at raising students' metalinguistic awareness as well as their language proficiency in both the source language and the target language, in order to enable them to become better language learners. The objective of the current research was to examine whether the goals of this program are achieved, namely an increase in students' metalinguistic awareness and language proficiency. A follow-up case study was aimed at examining which level of proficiency develops most through this way of teaching English. The study was conducted in two stages, before and after participation in the program; 400 subjects took part in the first stage, and 100 took part in the second. In both parts of the study, participants were given the same five tasks in both Hebrew and English, in addition to a questionnaire in which they were asked about their own knowledge of Hebrew in comparison to that of their peers. Their teachers were asked about the success of the program and about the methodology they use in class. Findings show a significant change in the level of metalinguistic awareness of the students as well as in their language proficiency. A comparison between their answers before and after the program shows that their metalinguistic awareness increased, as did their ability to recognize linguistic mistakes. These findings serve as strong evidence for the positive effect such a study program has on the development of metalinguistic awareness and linguistic knowledge. The follow-up case study tests this change among weaker language learners.

Keywords: comparison, metalinguistic awareness, language learning, translation skills

Procedia PDF Downloads 349
18497 Anatomy of the Challenges, Problems and Prospects of Polytechnic Administration in North-Central Nigeria

Authors: A. O. Osabo

Abstract:

Polytechnic education is often described as the only sustainable form of academic institution that can propel massive industrial and technological growth and development in all sectors of the Nigerian economy. Because of its emphasis on science and technology, practical demonstration of skills, and pivotal role in the training of low- and high-cadre technologists and technocrats to man critical sectors of the economy, the administration of polytechnics needs to be run according to global best standards and practices in order to achieve their goals and objectives. Besides, polytechnics need to be headed by seasoned and academically sound professionals who pursue the goals and objectives of the schools as centres of technology, learning, and academic excellence. Over the years, however, polytechnics in Nigeria have suffered a myriad of administrative problems and challenges which have prevented them from achieving their basic goals and objectives. Apart from regulatory problems and challenges, some heads of polytechnics do not demonstrate the leadership and management skills needed to bring the desired innovations to the management of the polytechnics under them. This has resulted, in most cases, in the polytechnics not performing optimally in their mandate. This paper examines the administrative problems, challenges, and prospects of polytechnic education in north-central Nigeria. Using 97 questionnaires of semi-structured yes-or-no questions administered to staff and students of the selected polytechnics, together with a descriptive statistical method of analysis, the study found that the inability of the polytechnics to meet their goals and objectives is caused by administrative and organizational problems and challenges bordering on funding, accreditation, manpower, corruption, and maladministration, among others. The paper thus suggests that the leadership of the polytechnics must rise to the demands of the time in order to deal with the administrative problems and challenges affecting them and to fulfil the goals and objectives for which the schools were established.

Keywords: education, administration, polytechnic, accreditation, Nigerian

Procedia PDF Downloads 258
18496 Development of Electronic Services in Georgia: Analysis of Current Situation

Authors: Dato Surmanidze, Dato Antadze, Tornike Partenadze

Abstract:

Public online services in Georgia are concentrated on the main target segments: public administration, business, the population, and non-governmental and other interested organizations. Accordingly, the digital Georgia strategy is focused on providing G2C, G2B/B2G, G2NGO, and G2G services. Within the G2C framework, sophisticated and high-technology online services have been developed for issuing passports, identity cards, and documentation concerning residence and civil acts (birth, marriage, divorce, child adoption, change of name and surname, death, etc.), as well as other services. Websites like my.gov.ge and sda.gov.ge offer remote services such as electronic application, processing, and decision making. In line with international standards, automated services like electronic tenders, product catalogues, invoices, and payment have been developed. This creates a better investment climate for foreign companies in Georgia within the framework of G2B policy, while the website mybusiness.gov.ge creates better conditions for local business. Among the electronic services is e-NRMS (the electronic system for national resource management), which was introduced by the Ministry of Finance of Georgia. The system was created to ensure the management of national resources by state and business organizations; it is integrated with bank services and provides G2C, G2B, and B2G representatives with electronic services. A portal, meteo.gov.ge, was also created, which provides electronic services concerning air, geological, environmental, and pollution issues. Also worth mentioning is worknet.gov.ge, an electronic hub of information management for employers and employees. This labor market information portal is meant to facilitate the receipt of information, its analysis, and its delivery to interested people such as employers and employees. However, for two years now only the employees' portal has been active; as a result, awareness about the portal, as well as its competitiveness and success, is undermined.

Keywords: electronic services, public administration, information technology, information society

Procedia PDF Downloads 264
18495 Flood Mapping Using Height above the Nearest Drainage Model: A Case Study in Fredericton, NB, Canada

Authors: Morteza Esfandiari, Shabnam Jabari, Heather MacGrath, David Coleman

Abstract:

Flooding is a severe issue in many places in the world, including the city of Fredericton, New Brunswick, Canada. The downtown area of Fredericton is close to the Saint John River, which is susceptible to flooding around May every year. Recently, the frequency of flooding seems to have increased, especially given that the downtown area and the surrounding urban/agricultural lands were flooded in two consecutive years, 2018 and 2019. In order to have a clear picture of the flood extent and the damage to affected areas, it is necessary to use either flood inundation modelling or satellite data. Because of the contingent availability and weather dependency of optical satellites, and the limited existing data and high cost of hydrodynamic models, it is not always feasible to rely on these sources of data to generate quality flood maps after or during a catastrophe. Height Above the Nearest Drainage (HAND), a state-of-the-art topo-hydrological index, normalizes the height of a basin based on the relative elevation along the stream network and specifies the gravitational or relative drainage potential of an area. HAND is the relative height difference between the stream network and each cell of a Digital Terrain Model (DTM). The stream layer is produced through a multi-step, time-consuming process which does not always result in an optimal representation of the river centerline, depending on the topographic complexity of the region. HAND has been used in numerous case studies with quite acceptable, and sometimes unexpected, results because of natural and human-made features on the surface of the earth. Some of these features may disturb the generated model, and consequently the model may not be able to predict the flow simulation accurately. We propose to include a previously existing stream layer generated by the province of New Brunswick and to benefit from culvert maps to improve the water flow simulation and, accordingly, the accuracy of the HAND model. By considering these parameters in our processing, we were able to increase the accuracy of the model from nearly 74% to almost 92%. The improved model can be used to generate highly accurate flood maps, which are necessary for future urban planning and flood damage estimation, without any need for satellite imagery or hydrodynamic computations.
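The HAND idea can be illustrated in a few lines: every DTM cell is normalised by the elevation of its nearest drainage cell, and cells whose HAND falls below a given flood stage are mapped as inundated. For brevity this sketch picks the nearest stream cell by Euclidean distance, whereas operational HAND traces flow directions; the toy grid, the single stream cell, and the 2 m stage are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

dtm = np.array([[12., 11., 10., 11.],
                [11.,  9.,  8., 10.],
                [10.,  8.,  7.,  9.],
                [11.,  9.,  8., 10.]])
stream = np.zeros_like(dtm, dtype=bool)
stream[2, 2] = True                      # one drainage cell for the toy grid

# indices of the nearest stream cell for every cell
_, (iy, ix) = distance_transform_edt(~stream, return_indices=True)
hand = dtm - dtm[iy, ix]                 # height above nearest drainage
print(hand)

# cells with HAND at or below the flood stage (say 2 m) map as inundated
print(hand <= 2.0)
```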

Keywords: HAND, DTM, rapid floodplain, simplified conceptual models

Procedia PDF Downloads 142
18494 A Corpus-Based Contrastive Analysis of Directive Speech Act Verbs in English and Chinese Legal Texts

Authors: Wujian Han

Abstract:

In the process of human interaction and communication, speech act verbs are considered to be the most active component and the main means of information transmission, and they are also taken as an indication of the structure of linguistic behavior. The theoretical value and practical significance of such everyday built-in metalanguage have long been recognized. This paper, which is part of a bigger study, aims to provide useful insights for a more precise and systematic approach to the translation of speech act verbs between English and Chinese, especially with regard to the degree to which generic integrity is maintained in the practice of translating legal documents. In this study the corpus, i.e., Chinese legal texts and their English translations, English legal texts, ordinary Chinese texts, and ordinary English texts, serves as a testing ground for contrastively examining the usage of English and Chinese directive speech act verbs in the legal genre. The scope of this paper is relatively wide and essentially covers all directive speech act verbs that are used in ordinary English and Chinese, such as order, command, request, prohibit, threaten, advise, warn, and permit. The researcher, combining corpus methodology with a contrastive perspective, explored a range of characteristics of English and Chinese directive speech act verbs, including their semantic, syntactic, and pragmatic features, and then contrasted them in a structured way. It has been found that there are similarities between English and Chinese directive speech act verbs in the legal genre, such as similar semantic components between English speech act verbs and their translation equivalents in Chinese, and formal and accurate usage of English and Chinese directive speech act verbs in legal contexts. But notable differences have been identified in their usage in the original Chinese and English legal texts, such as valency patterns and frequency of occurrence. For example, the subjects of some directive speech act verbs are very frequently omitted in Chinese legal texts, but this is not the case in English legal texts. One practicable method to achieve adequacy and conciseness in speech act verb translation from Chinese into English in the legal genre is to repeat the subjects, or the message where there is discrepancy, and vice versa. In addition, translation effects such as overuse and underuse of certain directive speech act verbs are also found in the translated English texts compared to the original English texts. Legal texts constitute particularly valuable material for speech act verb study. Building up such a contrastive picture of Chinese and English speech act verbs in legal language yields results of value and interest to legal translators and students of language for legal purposes, and has practical application to legal translation between English and Chinese.
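A minimal sketch of the corpus-frequency side of the method: relative frequencies (per 1,000 tokens) of the directive verbs in two texts. The two sample strings are placeholders standing in for the legal and ordinary corpora described above.

```python
import re
from collections import Counter

DIRECTIVES = {"order", "command", "request", "prohibit",
              "threaten", "advise", "warn", "permit"}

def directive_freq(text):
    """Occurrences of each directive verb per 1,000 tokens."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t in DIRECTIVES)
    return {v: counts[v] / len(tokens) * 1000 for v in sorted(DIRECTIVES)}

legal_en = "The court may order the parties to comply; the law does not permit delay."
ordinary_en = "I would advise you to ask first, since they might permit it."

print(directive_freq(legal_en))
print(directive_freq(ordinary_en))
```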

Keywords: contrastive analysis, corpus-based, directive speech act verbs, legal texts, translation between English and Chinese

Procedia PDF Downloads 486
18493 Downtime Estimation of Building Structures Using Fuzzy Logic

Authors: M. De Iuliis, O. Kammouh, G. P. Cimellaro, S. Tesfamariam

Abstract:

Community resilience has gained significant attention due to recent unexpected natural and man-made disasters. Resilience is the process of maintaining livable conditions in the event of interruptions in normally available services. Estimating the resilience of systems, ranging from individuals to communities, is a formidable task due to the complexity involved in the process. The most challenging parameter involved in resilience assessment is the 'downtime'. Downtime is the time needed for a system to recover its services following a disaster event. Estimating the exact downtime of a system requires many inputs and resources that are not always obtainable. The uncertainties in downtime estimation are usually handled using probabilistic methods, which necessitate acquiring large amounts of historical data. The estimation process also involves ignorance, imprecision, vagueness, and subjective judgment. In this paper, a fuzzy-based approach to estimate the downtime of building structures following earthquake events is proposed. Fuzzy logic can integrate descriptive (linguistic) knowledge and numerical data into the fuzzy system. This ability allows the use of walk-down surveys, which collect data in linguistic or numerical form. The use of fuzzy logic permits a fast and economical estimation of parameters that involve uncertainties. The first step of the method is to determine the building's vulnerability. A rapid visual screening is designed to acquire information about the analyzed building (e.g., year of construction, structural system, site seismicity, etc.). Then, fuzzy logic is implemented using a hierarchical scheme to determine the building damageability, which is the main ingredient in estimating the downtime. Generally, the downtime can be divided into three main components: downtime due to the actual damage (DT1); downtime caused by rational and irrational delays (DT2); and downtime due to utilities disruption (DT3). In this work, DT1 is computed by relating the building damageability results obtained from the visual screening to already-defined component repair times available in the literature. DT2 and DT3 are estimated using the REDi™ guidelines. The downtime of the building is finally obtained by combining the three components. The proposed method also allows identifying the downtime corresponding to each of the three recovery states: re-occupancy, functional recovery, and full recovery. Future work is aimed at improving the current methodology to pass from the downtime to the resilience of buildings. This will provide a simple tool that can be used by the authorities for decision making.
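A minimal sketch of the three-component combination DT = DT1 + DT2 + DT3. The toy membership function, repair time, and delay values are illustrative assumptions only; the paper derives DT1 from the hierarchical fuzzy damageability scheme and DT2/DT3 from the REDi™ guidelines.

```python
def damageability(year, site_seismicity):
    """Toy fuzzy rule: older buildings on more seismic sites are more damageable."""
    age = min(max((2020 - year) / 100, 0.0), 1.0)        # membership in "old"
    return min(1.0, 0.5 * age + 0.5 * site_seismicity)   # simple fuzzy blend

REPAIR_DAYS = 180          # repair time for a fully damaged archetype (assumed)

def downtime(year, site_seismicity, delays_days=90, utilities_days=30):
    dt1 = damageability(year, site_seismicity) * REPAIR_DAYS  # actual damage
    dt2 = delays_days                                         # rational/irrational delays
    dt3 = utilities_days                                      # utilities disruption
    return dt1 + dt2 + dt3

print(downtime(year=1965, site_seismicity=0.8), "days to full recovery (toy)")
```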

Keywords: resilience, restoration, downtime, community resilience, fuzzy logic, recovery, damage, built environment

Procedia PDF Downloads 156
18492 Robust Design of a Ball Joint Considering Uncertainties

Authors: Bong-Su Sin, Jong-Kyu Kim, Se-Il Song, Kwon-Hee Lee

Abstract:

An automobile ball joint is a pivoting element used to allow rotational motion between the parts of the steering and suspension system. It plays a role in the smooth transmission of steering movement and in the reduction of impact from the road surface. A ball joint is subject to various repeated loadings that may cause cracks and abrasion. This damage leads to safety problems for the car as well as reduced ride comfort for the driver, and it raises questions about the ball joint manufacturing procedure and the durability of the whole suspension system. Accordingly, it is necessary to ensure the high durability and reliability of a ball joint. The structural responses of stiffness and pull-out strength were calculated to check whether the design satisfies the related requirements. The analysis was performed sequentially, following the caulking process, and the deformation and stress results obtained from each step were saved. Sequential analysis has a strong advantage in that the deformed shape and residual stress can be taken into account. The pull-out strength is the force required to pull the ball stud out of the ball joint assembly; low pull-out strength can deteriorate structural stability and safety performance. In this study, two design variables and two noise factors were set up. The two design variables were the diameter of the stud and the angle of the socket, and the two noise factors were defined as the uncertainties in the Young's modulus and yield stress of the seat. The DOE comprises 81 cases built from these conditions. Robust design of the ball joint was performed using this DOE, and the variation of the pull-out strength arising from the uncertainties in the design variables and the design parameters was evaluated. The purpose of robust design is to find the design with the target response and the smallest variation.
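A minimal sketch of how the 81-case DOE arises: two design variables and two noise factors at three levels each give 3^4 = 81 combinations. The level values below are assumed placeholders, not the paper's actual dimensions or material data.

```python
from itertools import product

stud_diameter = [14.0, 15.0, 16.0]        # mm, design variable (assumed levels)
socket_angle = [28.0, 30.0, 32.0]         # degrees, design variable (assumed)
youngs_modulus = [1.9e5, 2.1e5, 2.3e5]    # MPa, noise factor (assumed)
yield_stress = [250.0, 300.0, 350.0]      # MPa, noise factor (assumed)

doe = list(product(stud_diameter, socket_angle, youngs_modulus, yield_stress))
print(len(doe), "cases")                  # -> 81

# robust-design objective per design point: pull-out strength near the target
# on average, with small variance over the 9 noise combinations that share
# the same (diameter, angle) pair
```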

Keywords: ball joint, pull-out strength, robust design, design of experiments

Procedia PDF Downloads 415
18491 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling

Authors: M. Almutairi, S. Hadjiloucas

Abstract:

The harmonic distortion of voltage is important in relation to power quality, due to the interaction between the large diffusion of non-linear and time-varying single-phase and three-phase loads and power supply systems. Harmonic distortion levels can be reduced by improving the design of polluting loads or by applying mitigation arrangements and adding filters. The application of passive filters is an effective solution that can be used to achieve harmonic mitigation, mainly because filters offer high efficiency and simplicity and are economical. Additionally, their different possible frequency response characteristics can be exploited to achieve required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single-tuned passive filter that works best in distribution networks in order to economically limit violations at a given point of common coupling (PCC). This article suggests that a single-tuned passive filter can be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level and, thus, improve the load power factor. The optimization technique minimizes the voltage total harmonic distortion (VTHD) and the current total harmonic distortion (ITHD) while maintaining a given power factor within a specified range. In accordance with IEEE Standard 519, both indices are treated as constraints in the optimal passive filter design problem. The performance of this technique is discussed using numerical examples taken from previous publications.
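A minimal sketch of sizing a single-tuned filter for one harmonic order, using the standard textbook relations between reactive power, tuning frequency, and quality factor; the voltage, reactive power, and quality factor below are assumed example values, not the paper's case study.

```python
import math

def single_tuned_filter(V_ll, Q_var, h, f=50.0, quality=40.0):
    """Return C, L, R of a single-tuned shunt filter.

    V_ll    line-to-line voltage (V)
    Q_var   reactive power the filter must supply at fundamental (var)
    h       harmonic order the filter is tuned to
    """
    w = 2 * math.pi * f
    Xc = V_ll**2 * h**2 / (Q_var * (h**2 - 1))   # capacitive reactance at f
    C = 1 / (w * Xc)
    L = Xc / (h**2 * w)                          # tuned so h*w*L equals Xc/h
    Xn = math.sqrt(L / C)                        # reactance at tuned frequency
    R = Xn / quality                             # damping from quality factor
    return C, L, R

C, L, R = single_tuned_filter(V_ll=11e3, Q_var=2e6, h=5)
print(f"C = {C*1e6:.1f} uF, L = {L*1e3:.2f} mH, R = {R:.3f} ohm")
```

With these relations, the series LC branch resonates at exactly h times the fundamental (1/(2π√(LC)) = h·f), so the filter presents a low impedance path for the targeted harmonic current at the PCC.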

Keywords: harmonics, passive filter, power factor, power quality

Procedia PDF Downloads 302
18490 An Exploratory Approach of the Latin American Migrants’ Urban Space Transformation of Antofagasta City, Chile

Authors: Carolina Arriagada, Yasna Contreras

Abstract:

Since the mid-2000s, the migratory flows of Latin American migrants to Chile have been increasing constantly. Two reasons would explain why Chile is presented as an attractive country for migrants. On the one hand, traditional centres of migrant attraction, such as the United States and Europe, have begun to close their borders. On the other hand, Chile exhibits relative economic and political stability, which offers greater job opportunities and a better standard of living compared to the migrants' countries of origin. At the same time, the neoliberal economic model of Chile, developed around extractive production of natural resources, has privatized the urban space: the market regulates the growth of fragmented and segregated cities. The vulnerable population is then, most of the time, located in the periphery and in the marginal areas of the urban space, and migrants have begun to occupy those degraded and depressed areas of the city. The problem raised is that the increase in socio-spatial segregation could also be attributed to the migrants' occupation of the marginal urban places of the city. The aim of this investigation is to carry out an analysis of the migrants' housing strategies, which are transforming the marginal areas of the city. The methodology focuses on the urban experience of the migrants, through the observation of spatial practices, ways of living, and network configurations that transform the marginal territory. The techniques applied in this study are semi-structured and in-depth interviews. The study reveals that the migrants' housing strategies for living in the marginal areas of the city are built in a paradoxical way. On the one hand, the migrants choose proximity to their place of origin, maintaining their identity and customs; on the other hand, they choose proximity to their social and familiar places, generating a sense of belonging. In conclusion, the evidence shows that international displacements under a globalized economic model increase socio-spatial segregation in cities, but the transformation of the marginal areas is a fundamental resource in the migrants' integration process. The importance of this research is that not only the right to live in a city without any discrimination but also the integration of citizens within the social urban space of a city is everybody's responsibility.

Keywords: migrations, marginal space, resignification, visibility

Procedia PDF Downloads 137
18489 Identification of Indices to Quantify Gentrification

Authors: Sophy Ann Xavier, Lakshmi A

Abstract:

Gentrification is the process of altering a neighborhood's character through the influx of wealthier people and establishments. This idea has subsequently been expanded to encompass brand-new, high-status construction projects that involve regenerating brownfield sites or demolishing and rebuilding residential neighborhoods. Gentrification makes inequality worse in ways that go beyond socioeconomic position: the elderly, members of racial and ethnic minorities, individuals with disabilities, and people with mental health conditions all suffer disproportionately when they are displaced. Cities must cultivate openness, diversity, and inclusion in their collaborations, as well as cooperation on objectives and results. The papers compiled in this issue concentrate on the new gentrification discussions, the rising residential allure of central cities, and the indices to measure this process according to its various varieties. The study makes an effort to fill a research gap in the area of gentrification studies, namely the absence of a set of indices for measuring gentrification in a specific area. Gentrification studies that contain maps of historical change highlight trends that aid in the production of displacement risk maps, which guide future interventions by allowing residents and policymakers to extrapolate into the future. Additionally, these maps give locals a glimpse into the future of their communities and serve as a political call to action in areas where residents are expected to be displaced. This study intends to pinpoint metrics and approaches for measuring gentrification that can then be applied to create a spatiotemporal map of a region, together with tactics for its inclusive planning. An understanding of the various approaches will enable planners and policymakers to select the best approach and create the appropriate plans.

Keywords: gentrification, indices, methods, quantification

Procedia PDF Downloads 72
18488 CO₂ Conversion by Low-Temperature Fischer-Tropsch

Authors: Pauline Bredy, Yves Schuurman, David Farrusseng

Abstract:

To fulfil climate objectives, the production of synthetic e-fuels using CO₂ as a raw material appears to be part of the solution. In particular, the Power-to-Liquid (PtL) concept combines CO₂ with hydrogen supplied from water electrolysis powered by renewable sources; it is currently gaining interest as it allows the production of sustainable, fossil-free liquid fuels. The process discussed here is an upgrading of the well-known Fischer-Tropsch synthesis. The concept involves two cascade reactions in one pot: first, the conversion of CO₂ into CO via the reverse water gas shift (RWGS) reaction, followed by the Fischer-Tropsch synthesis (FTS). Instead of using a Fe-based catalyst, which can carry out both reactions, we have chosen the strategy of decoupling the two functions (RWGS and FT) onto two different catalysts within the same reactor. The FTS is intended to shift the equilibrium of the RWGS reaction (which alone would be limited to 15-20% conversion at 250 °C) by converting the CO into hydrocarbons. This strategy enables optimization of the catalyst pair and thus lowers the reaction temperature, thanks to the equilibrium shift, in order to gain selectivity towards the liquid fraction. The challenge lies in maximizing the activity of the RWGS catalyst, but also in making the FT catalyst highly selective. Methane production is the main concern, since the energy barrier of CH₄ formation is generally lower than that of the RWGS reaction, so the goal is to minimize methane selectivity. Here we report a study of different combinations of copper-based RWGS catalysts with different cobalt-based FTS catalysts. We investigated their behaviour under mild process conditions by means of high-throughput experimentation. Our results show that at 250 °C and 20 bar, cobalt catalysts act mainly as methanation catalysts: CH₄ selectivity never drops below 80%, despite the addition of various promoters (Nb, K, Pt, Cu) to the catalyst and its coupling with active RWGS catalysts. However, we show that the activity of the RWGS catalyst has an impact and can lead to selectivities towards longer hydrocarbon chains (C₂⁺) of about 10%. We studied the influence of the reduction temperature on the activity and selectivity of the tandem catalyst system; similar selectivity and conversion were obtained at reduction temperatures between 250-400 °C. This raises the question of the active phase of the cobalt catalysts, which is currently being investigated by magnetic measurements and DRIFTS. Better results are thus expected when coupling the RWGS catalyst with a more selective FT catalyst. This was achieved using a cobalt/iron FTS catalyst: the CH₄ selectivity dropped to 62% at 265 °C, 20 bar, and a GHSV of 2500 ml/h/g_cat. We propose that the conditions used for the cobalt catalysts could have favoured this methanation, because these catalysts are known to perform best around 210 °C in classical FTS, whereas iron catalysts are more flexible but are also known to have RWGS activity.
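For reference, the two cascade reactions coupled in the one-pot concept are, with standard stoichiometries and heats of reaction (textbook values, not measurements from this study):

```latex
% RWGS followed by FTS (paraffin form); standard stoichiometries
\begin{align}
  \text{RWGS:} &\quad \mathrm{CO_2 + H_2 \rightleftharpoons CO + H_2O},
      \qquad \Delta H^{\circ}_{298} = +41\ \mathrm{kJ\,mol^{-1}} \\
  \text{FTS:}  &\quad \mathrm{n\,CO + (2n{+}1)\,H_2 \rightarrow C_nH_{2n+2} + n\,H_2O}
\end{align}
```

Because the FTS continuously consumes the CO produced by the endothermic, equilibrium-limited RWGS step, the coupled system can exceed the 15-20% CO₂ conversion that RWGS alone would reach at 250 °C.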

Keywords: cobalt-copper catalytic systems, CO₂-hydrogenation, Fischer-Tropsch synthesis, hydrocarbons, low-temperature process

Procedia PDF Downloads 51
18487 MRI Quality Control Using Texture Analysis and Spatial Metrics

Authors: Kumar Kanudkuri, A. Sandhya

Abstract:

Typically, in a clinical MRI setting, several protocols are run, each indicated for a specific anatomy and disease condition. However, these protocols, or parameters within them, can change over time due to changes in the recommendations of physician groups, updates to the software, or the availability of new technologies. Most of the time, the changes are made by the MRI technologist to account for time, coverage, physiological, or Specific Absorption Rate (SAR) reasons. It is therefore important to give proper guidelines to MRI technologists so that they do not change parameters in ways that negatively impact image quality. Typically, a standard American College of Radiology (ACR) MRI phantom is used for quality control (QC) in order to guarantee that the primary objectives of MRI are met. The visual evaluation of quality depends on the operator/reviewer and may vary amongst operators, as well as for the same operator at different times. Overcoming these constraints is essential for a more impartial evaluation of quality, which makes quantitative estimation of image quality (IQ) metrics for MRI quality control very important. To address this problem, we propose a robust, open-source, and automated MRI image quality control tool. We designed and developed an automatic analysis tool that measures MRI image quality (IQ) metrics, including signal-to-noise ratio (SNR), signal-to-noise ratio uniformity (SNRU), visual information fidelity (VIF), feature similarity (FSIM), gray level co-occurrence matrix (GLCM) texture measures, slice thickness accuracy, slice position accuracy, and high-contrast spatial resolution, and it provided good accuracy assessment. A standardized quality report is generated that incorporates the metrics that impact diagnostic quality.
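A minimal sketch of two of the listed metrics, SNR and a GLCM texture measure, computed on a synthetic phantom slice with scikit-image; the ROI positions and noise model are illustrative assumptions, since the ACR phantom protocol defines the real ROIs.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)
img = np.full((128, 128), 120, dtype=np.uint8)              # uniform "phantom"
img = (img + rng.normal(0, 4, img.shape)).clip(0, 255).astype(np.uint8)

signal_roi = img[48:80, 48:80]            # centre of the phantom (assumed ROI)
noise_roi = img[4:20, 4:20]               # background corner (assumed ROI)
snr = signal_roi.mean() / noise_roi.std()
print(f"SNR = {snr:.1f}")

glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
print("GLCM contrast =", graycoprops(glcm, "contrast")[0, 0].round(2))
```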

Keywords: ACR MRI phantom, MRI image quality metrics, SNRU, VIF, FSIM, GLCM, slice thickness accuracy, slice position accuracy

Procedia PDF Downloads 154
18486 The Mask of Motherhood: A Changing Identity During the Transition to Motherhood

Authors: Geraldine Mc Loughlin, Mary Horgan, Rosaleen Murphy

Abstract:

Childbirth is a life-changing event, a psychological transition for the mother that must be viewed in a social context. Much has been written and documented regarding the preparation for birth and the immediate postnatal period, but the full psychological impact on the mother is not clear. One aspect of the transition process is identity: depending on a person's worldview, the concept of identity is viewed differently, and the nature of reality and how people construct knowledge influence these perspectives. Becoming a mother is not just an event but a process that time and experience help the woman to make sense of. The aims of this study were to explore the emotional and psychological aspects of first-time mothers' experiences during the transition to new motherhood, and to identify factors affecting women's identities in the period from 36 weeks gestation to 12 weeks postpartum. Interpretative Phenomenological Analysis (IPA) was used; it explores how these women make sense of and give meaning to their experiences. IPA is underpinned by three key principles: phenomenology, hermeneutics, and idiography. A purposeful sample of 10 women was recruited for this longitudinal study, enabling data to be collected throughout the transition to motherhood. Individual identity was interpreted and viewed as developing in response to changing contexts, such as the birth event and becoming a parent, enabling one to construct one's own sense of a meaningful life. Women effectively differentiated their personal and social identities and took responsibility for their actions. Identity is culturally and socially shaped and experienced, though not experienced in the same way by all women. The individualized perspective on identity recognizes both (a) the view that social influences are external to the individual and (b) the view that social influences are, in fact, internalized by the individual.

Keywords: motherhood, transition, identity, IPA

Procedia PDF Downloads 48
18485 Acceptability Process of a Congestion Charge

Authors: Amira Mabrouk

Abstract:

This paper deals with the acceptability of urban tolls in Tunisia. Price-based regulation, i.e., an urban toll, is the outcome of a political process shaped by threefold objectives: effectiveness, equity, and social acceptability. This produces economic interest groups and functions with incongruent preferences. The plausibility of this observation goes hand in hand with the fact that these economic interest groups are also taxpayers who undeniably perceive an urban toll as an additional charge. This wariness is coupled with inquiries about the conditions of usage and the redistribution of the collected tax revenue, and the idea of the leviathan state completes the picture. In a nutshell, although research related to road congestion proliferates, no de facto legitimacy can be claimed. Nonetheless, the theory of urban tolls leads economists to question ways of reducing the negative external effects linked to congestion, and it is here that the urban toll appears to offer an answer. Undeniably, the urban toll raises inherent conflicts, owing to the apparent no-payment principle for a public asset as well as to the social perception of the new measure as a mere additional charge. However, when the main concern is effectiveness in its broad sense and social well-being, the factors that determine the acceptability of such a tariff measure, along with the type of incentives, should be the object of a thorough, in-depth analysis. Before adopting this economic tool, one has to recognize the factors that intervene in the acceptability of a congestion toll, a topic that has brought about a copious number of articles and reports, most of which lack solid theoretical content. It is noticeable that, nowadays, uncertainties surround the exact nature of the acceptability process: the acceptance of a congestion tariff can differ from one era to another, from one region to another, from one population to another, and so on. Notably, this article, within a convenient time frame, attempts to bring into focus a link between the social acceptability of the urban congestion toll and the value of time, through a survey method barely employed in Tunisia, namely the stated preference method. How can the urban toll, as a tax, be defined, justified, and made acceptable? How can an equitable and effective congestion toll tariff be reached? How can the costs of this urban toll be covered? In what way can we make the redistribution of the urban toll revenue visible and economically equitable? How can the redistribution of the revenue of the urban toll compensate the disadvantaged when introducing such a tariff measure? This paper offers answers to these research questions, following the line of contribution of Jules Dupuit in 1844.
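As a sketch of how stated-preference data of this kind are typically analysed, a binary logit over toll-versus-no-toll alternatives yields the value of time as the ratio of the time and cost coefficients; the coefficients and attributes below are made-up placeholders, not estimates from the Tunisian survey.

```python
import numpy as np

def logit_prob(beta, X):
    """P(choose tolled route) for attribute differences X = [d_time, d_cost]."""
    u = X @ beta
    return 1.0 / (1.0 + np.exp(-u))

beta = np.array([-0.08, -0.25])   # utility per minute, per dinar (assumed)
X = np.array([[-10.0, 2.0]])      # tolled route: 10 min faster, 2 dinars dearer
print("choice probability:", logit_prob(beta, X))
print("value of time:", beta[0] / beta[1] * 60, "dinars/hour")
```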

Keywords: congestion charge, social perception, acceptability, stated preferences

Procedia PDF Downloads 279
18484 Quorum-Sensing Driven Inhibitors for Mitigating Microbial Influenced Corrosion

Authors: Asma Lamin, Anna H. Kaksonen, Ivan Cole, Paul White, Xiao-Bo Chen

Abstract:

Microbiologically influenced corrosion (MIC) is a process in which microorganisms initiate, facilitate, or accelerate the electrochemical corrosion reactions of metallic components. Several reports have documented that MIC accounts for about 20 to 40% of the total cost of corrosion. Biofilm formation due to the presence of microorganisms on the surface of metal components is known to play a vital role in MIC, which can lead to severe consequences in various environmental and industrial settings. The quorum sensing (QS) system plays a major role in regulating biofilm formation and controls the expression of certain microbial enzymes. QS is a communication mechanism between microorganisms that involves the regulation of gene expression in response to the microbial cell density within an environment. This process is employed by both Gram-positive and Gram-negative bacteria to regulate different physiological functions. QS involves the production of, detection of, and response to signalling chemicals known as autoinducers. QS controls specific processes important for the microbial community, such as biofilm formation, virulence factor expression, production of secondary metabolites, and stress adaptation mechanisms. The use of QS inhibitors (QSIs) has been proposed as a possible solution to biofilm-related challenges in many different applications. Although QSIs have demonstrated some strength in tackling biofouling, QSI-based strategies to control microbially influenced corrosion have not been thoroughly investigated. As such, our research aims to target QS mechanisms as a strategy for mitigating MIC on metal surfaces in engineered systems.

Keywords: quorum sensing, quorum quenching, biofilm, biocorrosion

Procedia PDF Downloads 80
18483 Vibration Analysis and Optimization Design of Ultrasonic Horn

Authors: Kuen Ming Shu, Ren Kai Ho

Abstract:

The ultrasonic horn has the functions of amplifying amplitude and reducing resonant impedance in an ultrasonic system. Its primary function is to amplify deformation or velocity during vibration and to focus ultrasonic energy on a small area, and it is a crucial component in the design of an ultrasonic vibration system. There are five common design methods for ultrasonic horns: the analytical method, the equivalent circuit method, equal mechanical impedance, the transfer matrix method, and the finite element method. The usual optimization design process changes the geometric parameters to improve a single performance measure, so the relationship between parameters and objectives cannot be identified. A good optimization design, however, must be able to establish the relationship between input parameters and output parameters so that the designer can trade off parameters against different performance objectives and obtain the results of the optimization design. In this study, an ultrasonic horn provided by Maxwide Ultrasonic Co., Ltd. was used as the baseline for the optimized ultrasonic horn. The ANSYS finite element analysis (FEA) software was used to simulate the distribution of the horn amplitudes and the natural frequency value. The results showed that the simulated and actually measured frequencies were similar, verifying the accuracy of the simulation. ANSYS DesignXplorer was then used to perform response surface optimization, which reveals the relationship between parameters and objectives. This method can therefore substitute for the traditional experience-based or trial-and-error design methods, reducing material costs and design cycles.
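A minimal sketch of the response-surface step: fit a quadratic surface to sampled (design variables, natural frequency) results and locate the design closest to a target frequency. The sample points below are made-up stand-ins for the ANSYS DesignXplorer outputs, and the two geometric variables are hypothetical.

```python
import numpy as np

# (horn neck diameter mm, fillet radius mm) -> natural frequency Hz (assumed)
X = np.array([[10, 2], [10, 4], [12, 2], [12, 4], [14, 2], [14, 4]], float)
f = np.array([19400., 19650., 19900., 20150., 20400., 20600.])

def features(X):
    """Quadratic response surface basis: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

coef, *_ = np.linalg.lstsq(features(X), f, rcond=None)

# evaluate the surface on a grid and pick the design nearest 20 kHz
grid = np.array([[d, r] for d in np.linspace(10, 14, 41)
                        for r in np.linspace(2, 4, 21)])
pred = features(grid) @ coef
best = grid[np.argmin(np.abs(pred - 20000.0))]
print("design closest to 20 kHz:", best)
```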

Keywords: horn, natural frequency, response surface optimization, ultrasonic vibration

Procedia PDF Downloads 108
18482 An Exploratory Case Study of the Interference of Erotic Transference in the Longevity of Psychoanalytic Treatment

Authors: Mehravar Javid, Rohma Hassan, J. DeSilva

Abstract:

In this exploratory case study, a 37-year-old male patient who previously terminated treatment after four months of therapy with a different therapist begins anew with a 38-year-old female therapist and undergoes a similar cycle of premature termination, with added complications caused by erotic transference. Process notes and records of the treatment indicate that, during the short course of therapy, the patient explored his difficulties in navigating personal relationships, both current and past, and his difficulties in coping with hypochondriasis. The therapist is tasked not only with navigating the patient's inner conflict but also with how she relates to the patient in the countertransference process while maintaining professional boundaries. This includes empathizing with the patient while also experiencing discomfort in the erotic transference from a professional standpoint. When the patient terminates once more, the therapist reflects on the possible reasons for termination. These include the patient's difficulty tolerating interpretations, which caused him to blame himself for past events; the interpretations were also very frequent, contributing to the emotional burden the patient experienced. The therapist reflected on the use of interpretation versus exploration of the patient's feelings, and on how exploring his feelings, including his feelings towards her, would have allowed an opportunity to explore more deeply the emotions that troubled him. This includes exploring the patient's anger and fear, which stem from unresolved conflicts in his childhood. Moreover, the erotic transference served as an enactment of previous experiences in which the patient feared losing what he loved, leading him to opt for premature termination rather than lose his ability to control the relationship and experience loss.

Keywords: countertransference, erotic transference, premature termination, therapist-client boundaries, transference

Procedia PDF Downloads 62
18481 Optimizing Hydrogen Production from Biomass Pyro-Gasification in a Multi-Staged Fluidized Bed Reactor

Authors: Chetna Mohabeer, Luis Reyes, Lokmane Abdelouahed, Bechara Taouk

Abstract:

In the transition to sustainability and the increasing use of renewable energy, hydrogen will play a key role as an energy carrier, and biomass has the potential to accelerate its realization as a major fuel of the future. Pyro-gasification converts organic matter mainly into synthesis gas, or “syngas”, composed chiefly of CO, H2, CH4, and CO2. A second, condensable fraction of the pyro-gasification products is “tars”. Under certain conditions, tars may decompose into hydrogen and other light hydrocarbons. These conditions include two types of cracking: homogeneous cracking, where tars decompose under the effect of temperature (> 1000 °C), and heterogeneous cracking, where catalysts such as olivine, dolomite or biochar are used. The latter favors cracking of tars at temperatures close to pyro-gasification temperatures (~850 °C). Pyro-gasification of biomass coupled with the water-gas shift is today the most widely practiced route from biomass to hydrogen. In this work, an innovative solution is proposed for this conversion route: all pyro-gasification products, not only methane, undergo processing aimed at optimizing hydrogen production. First, a heterogeneous cracking step was included in the reaction scheme, using biochar (the solid remaining from pyro-gasification) as catalyst and CO2 and H2O as gasifying agents. This was followed by a catalytic steam methane reforming (SMR) step, for which a Ni-based catalyst was tested under different reaction conditions to optimize the H2 yield. Finally, a water-gas shift (WGS) step with a Fe-based catalyst was added to optimize the H2 yield from CO. Cracking was carried out in a fluidized bed reactor, and SMR and WGS in a fixed bed reactor. The gaseous products were analyzed continuously with a µ-GC (Fusion PN 074-594-P1F). With biochar as bed material, more H2 was obtained with steam as the gasifying agent (32 mol % vs. 15 mol % with CO2 at 900 °C); CO and CH4 production was also higher with steam than with CO2. Steam as gasifying agent and biochar as bed material were hence retained as efficient parameters for the first step. Among all parameters tested, CH4 conversions approaching 100% were obtained from SMR using a Ni/γ-Al2O3 catalyst at 800 °C and a steam/methane ratio of 5, giving about 45 mol % H2. Experiments on the WGS reaction are currently being conducted. At the end of this phase, the four reactions will be performed consecutively and the results analyzed. The final aim is a global kinetic model of the whole system in a multi-staged fluidized bed reactor that can be transferred to ASPEN Plus™.
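
As a back-of-the-envelope check on the reported figures, the sketch below works out the ideal mole balance when SMR (CH4 + H2O → CO + 3H2) is followed by a complete water-gas shift (CO + H2O → CO2 + H2). The assumption of full conversion in both steps is an idealization of the near-100% CH4 conversion reported above, not a result from the study.

```python
# Idealized mole balance for the SMR + WGS steps, assuming complete
# conversion in both reactions. Illustrative arithmetic only.

def smr_wgs_yield(n_ch4, steam_to_methane=5.0):
    """Product moles from SMR (CH4 + H2O -> CO + 3H2)
    followed by full WGS (CO + H2O -> CO2 + H2)."""
    h2 = 4.0 * n_ch4                      # 3 mol from SMR + 1 mol from WGS
    co2 = 1.0 * n_ch4
    steam_fed = steam_to_methane * n_ch4
    steam_left = steam_fed - 2.0 * n_ch4  # 1 mol H2O consumed per reaction
    total = h2 + co2 + steam_left
    return {"H2": h2, "CO2": co2, "H2O": steam_left,
            "H2 mol% (wet)": 100.0 * h2 / total}

print(smr_wgs_yield(1.0))  # ~50 mol% H2 on a wet basis at S/C = 5
```

The idealized 50 mol % wet-basis figure brackets the ~45 mol % H2 reported after the SMR step alone, before the WGS contribution.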

Keywords: multi-staged fluidized bed reactor, pyro-gasification, steam methane reforming, water-gas shift

Procedia PDF Downloads 131
18480 Unlocking Health Insights: Studying Data for Better Care

Authors: Valentina Marutyan

Abstract:

Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of, and approach to, providing healthcare. It is the process of examining huge amounts of data to extract information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest: it enables doctors to intervene early to prevent problems or improve outcomes, and it assists in early disease detection and customized treatment planning. Doctors can tailor a patient’s care by examining their medical history, genetic profile, and current and previous therapies, making treatments more effective with fewer negative consequences. Beyond helping patients, data mining improves hospital efficiency, for example by estimating the number of beds or doctors required for the number of patients expected. This project uses models such as logistic regression, random forests, and neural networks for disease prediction and medical image analysis. Patients were grouped with clustering algorithms such as k-means, connections between treatments and patient responses were identified by association rule mining, and time series techniques supported resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment. Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and operate more efficiently: it comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
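
To make one of the techniques named above concrete, the sketch below trains a logistic regression disease-risk classifier. The features, labels, and data are synthetic placeholders generated on the spot, not a real EHR dataset or the project's code.

```python
# Sketch: logistic regression for disease-risk prediction on
# synthetic patient data (age and BMI as the only features).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(20, 80, n)
bmi = rng.normal(27, 5, n)

# Synthetic label: risk rises with age and BMI (placeholder ground truth).
p = 1 / (1 + np.exp(-(0.05 * (age - 50) + 0.1 * (bmi - 27))))
y = rng.random(n) < p

X = np.column_stack([age, bmi])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```

The same scaffold extends to the other models mentioned (random forests, neural networks) by swapping the estimator, which is why such pipelines are a common starting point for EHR-based prediction.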

Keywords: data mining, healthcare, big data, large amounts of data

Procedia PDF Downloads 67
18479 Evaluation of Heavy Metal Contamination and Assessment of the Suitability of Water for Irrigation: A Case Study of the Sand River, Limpopo Province, South Africa

Authors: Ngonidzashe Moyo, Mmaditshaba Rapatsa

Abstract:

The primary objective of this study was to determine heavy metal contamination in the water, sediment, grass and fish of the Sand River, South Africa. The river passes through an urban area into which sewage effluent is discharged, and its water is subsequently used for irrigation downstream of the sewage treatment works. The suitability of this water, and of the surrounding boreholes, for irrigation was determined. The study was undertaken between January 2014 and January 2015. Monthly samples were taken from four sites: Site 1 was upstream of the Polokwane Wastewater Treatment Plant, and Sites 2, 3 and 4 were downstream. Ten boreholes in the vicinity of the Sand River were randomly selected and their water tested for heavy metal contamination. The concentration of heavy metals in Sand River water followed the order Mn>Fe>Pb>Cu≥Zn≥Cd, with manganese averaging 0.34 mg/L. Heavy metal concentrations in the sediment, grass and fish followed the order Fe>Mn>Zn>Cu>Pb>Cd. The bioaccumulation factor from grass to fish was highest for manganese (19.25), followed by zinc (16.39) and iron (14.14). The permeability index (PI) and sodium adsorption ratio (SAR) were used to determine the suitability of Sand River and borehole water for irrigation. The PI for Sand River water was 75.1%, indicating that the water is suitable for crop irrigation; the PI for the borehole water ranged from 65.8 to 72.8%, again indicating suitability. The sodium adsorption ratio likewise indicated that both Sand River and borehole water were suitable for irrigation. A risk assessment study is recommended to determine the suitability of the fish for human consumption.
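
For readers unfamiliar with the two irrigation indices, the sketch below computes the SAR and Doneen's permeability index from ion concentrations in meq/L. The formulas are the standard ones; the ion values are hypothetical placeholders, not the study's measurements.

```python
# Irrigation-suitability indices from an ion balance (all in meq/L).
from math import sqrt

def sar(na, ca, mg):
    """Sodium adsorption ratio: Na / sqrt((Ca + Mg) / 2)."""
    return na / sqrt((ca + mg) / 2)

def permeability_index(na, ca, mg, hco3):
    """Doneen's permeability index, in percent."""
    return 100 * (na + sqrt(hco3)) / (ca + mg + na)

# Hypothetical river-water ion balance (meq/L), for illustration only.
na, ca, mg, hco3 = 2.1, 1.5, 1.0, 2.4
print(f"SAR = {sar(na, ca, mg):.2f}")
print(f"PI  = {permeability_index(na, ca, mg, hco3):.1f} %")
```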

Keywords: bioaccumulation, bioavailability, heavy metals, sodium adsorption ratio

Procedia PDF Downloads 214
18478 Effects of Different Mechanical Treatments on the Physical and Chemical Properties of Turmeric

Authors: Serpa A. M., Gómez Hoyos C., Velásquez-Cock J. A., Ruiz L. F., Vélez Acosta L. M., Gañan P., Zuluaga R.

Abstract:

Turmeric (Curcuma longa L.) is an Indian rhizome known for its biological properties, derived from active compounds such as curcuminoids. Curcumin, the main polyphenol in turmeric, represents only around 3.5% of the dehydrated rhizome, and extraction yields between 41 and 90% have been reported. Therefore, for every 1000 tons of turmeric powder used for the extraction of curcumin, around 970 tons of residues are generated. The present study evaluates the effect of different mechanical treatments (Waring blender, grinder and high-pressure homogenization) on the physical and chemical properties of turmeric, as an alternative for transforming the entire rhizome. Suspensions of turmeric (10, 20 and 30%) were processed in a Waring blender for 3 min at 12,000 rpm, while the samples treated in the grinder were processed at two different gaps (-1 and -1.5). Finally, high-pressure homogenization was carried out at 500 bar. According to the results, the luminosity of the samples increases with the severity of the mechanical treatment, owing to color stabilization associated with the inactivation of oxidative enzymes. The microstructure of the samples shows that the grinder (gap -1.5) and high-pressure homogenization produced the largest size reduction, reaching sizes of up to 3 μm (measured by optical microscopy); these processes disrupt the cells and break their fragments into small suspended particles. Infrared spectra obtained with an attenuated total reflectance accessory indicate changes in the 800-1200 cm⁻¹ region, related mainly to changes in the starch structure. Finally, thermogravimetric analysis shows the presence of starch, curcumin and some minerals in the suspensions.
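
The residue figure quoted above follows from simple arithmetic, sketched below under the stated assumptions (curcumin at ~3.5% of the dehydrated rhizome, extraction recovery between 41 and 90%).

```python
# Rough mass balance behind the ~970 t residue figure. Illustrative only.
powder_t = 1000.0       # tons of turmeric powder processed
curcumin_frac = 0.035   # curcumin content of the dehydrated rhizome

for recovery in (0.41, 0.90):
    extracted = powder_t * curcumin_frac * recovery
    residue = powder_t - extracted
    print(f"recovery {recovery:.0%}: ~{extracted:.1f} t curcumin, "
          f"~{residue:.0f} t residue")
```

Even at 90% recovery, only about 31.5 t of curcumin leaves the process, which is why whole-rhizome valorization routes such as the mechanical treatments studied here are attractive.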

Keywords: characterization, mechanical treatments, suspensions, turmeric rhizome

Procedia PDF Downloads 159
18477 Review of the Model-Based Supply Chain Management Research in the Construction Industry

Authors: Aspasia Koutsokosta, Stefanos Katsavounis

Abstract:

This paper reviews the model-based qualitative and quantitative Operations Management research in the context of Construction Supply Chain Management (CSCM). The construction industry has traditionally been blamed for low productivity, cost and time overruns, waste, high fragmentation and adversarial relationships, and it has been slower than other industries to adopt the Supply Chain Management (SCM) concept and to develop models that support decision-making and planning. Over the last decade, however, there has been a distinct shift from a project-based to a supply-based approach to construction management. CSCM has emerged as a promising management tool for construction operations, improving the performance of construction projects in terms of cost, time and quality. Modeling the Construction Supply Chain (CSC) offers the means to reap the benefits of SCM, make informed decisions and gain competitive advantage. Different modeling approaches and methodologies have been applied in the multi-disciplinary and heterogeneous research field of CSCM. The literature review reveals that a considerable share of CSC modeling consists of conceptual or process models that discuss general management frameworks and do not relate to acknowledged soft OR methods. We focus in particular on the model-based quantitative research and categorize CSCM models by scope, mathematical formulation, structure, objectives, solution approach, software used and decision level. Although the number of research papers on quantitative CSC models has clearly increased over the last few years, the relevant literature remains very fragmented, with limited applications of simulation, mathematical programming and simulation-based optimization. Most applications are project-specific or study only parts of the supply system; as a result, some complex interdependencies within construction are neglected and the implementation of integrated supply chain management is hindered. We conclude by giving future research directions and emphasizing the need to develop robust mathematical optimization models for the CSC, stressing that CSC modeling needs a multi-dimensional, system-wide and long-term perspective. Finally, prior applications of SCM in other industries should be taken into account when modeling CSCs, though not without adapting generic concepts to the unique characteristics of the construction industry.
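
To illustrate the kind of mathematical programming the review finds underused, the sketch below solves a toy transportation problem for a two-supplier, two-site construction network. All costs, capacities and demands are hypothetical, and the formulation is a textbook linear program rather than any model from the reviewed literature.

```python
# Toy construction-supply transportation LP solved with scipy.
from scipy.optimize import linprog

# Decision variables x = [s1->a, s1->b, s2->a, s2->b]; unit shipping costs:
cost = [4, 6, 5, 3]

A_ub = [[1, 1, 0, 0],   # supplier 1 capacity constraint
        [0, 0, 1, 1]]   # supplier 2 capacity constraint
b_ub = [40, 35]         # capacities, in truckloads
A_eq = [[1, 0, 1, 0],   # demand at construction site a
        [0, 1, 0, 1]]   # demand at construction site b
b_eq = [30, 25]         # demands, in truckloads

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
print("shipments:", res.x, "total cost:", res.fun)
```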

Keywords: construction supply chain management, modeling, operations research, optimization, simulation

Procedia PDF Downloads 502
18476 The Hidden Role of Interest Rate Risks in Carry Trades

Authors: Jingwen Shi, Qi Wu

Abstract:

We study the role played by interest rate risk in carry trade returns in order to understand the forward premium puzzle. Our goal is to investigate to what extent the carry trade return is indeed compensation for risk taking and, more importantly, to reveal the nature of these risks. Using option data not only on exchange rates but also on interest rate swaps (swaptions), our first finding is that, besides the consensus currency risks, interest rate risks also contribute a non-negligible portion of the carry trade return. Our second finding is the more striking: large downside risks of future exchange rate movements are, in fact, priced significantly in the options market on interest rates. The role played by interest rate risk differs structurally from that of currency risk: there is a distinct premium associated with interest rate risk, seemingly small in size, which compensates for tail risk, the left tail to be precise. On the technical front, our study relies on accurately retrieving implied distributions from currency options and interest rate swaptions simultaneously, especially their tail components. For this purpose, our major modeling work is to build a new international asset pricing model with an orthogonal setup for the pricing kernels and non-Gaussian dynamics, in order to capture three sets of option skews accurately and consistently across currency options and interest rate swaptions, domestic and foreign, within one model. Our results open a door to studying the forward premium anomaly through implied information from the interest rate derivatives market.
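
One standard route to an implied distribution from option prices is the Breeden-Litzenberger relation, under which the risk-neutral density is the discounted second derivative of the call price with respect to strike. The sketch below applies it to a synthetic Black-Scholes smile; it is a generic illustration of density extraction, not the paper's non-Gaussian pricing model, and all parameter values are hypothetical.

```python
# Breeden-Litzenberger density extraction from a synthetic FX smile.
import numpy as np
from scipy.stats import norm

S0, r, T = 1.10, 0.01, 0.5              # spot FX rate, rate, maturity
K = np.linspace(0.8, 1.4, 241)          # strike grid
vol = 0.10 + 0.15 * (K - S0) ** 2       # hypothetical smile

# Black-Scholes call prices along the strike grid.
d1 = (np.log(S0 / K) + (r + 0.5 * vol**2) * T) / (vol * np.sqrt(T))
d2 = d1 - vol * np.sqrt(T)
call = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Risk-neutral density q(K) = exp(rT) * d2C/dK2, via finite differences.
q = np.exp(r * T) * np.gradient(np.gradient(call, K), K)
dK = K[1] - K[0]
print("density mass on the grid (should be close to 1):", q.sum() * dK)
```

Differencing the price curve rather than plugging a single volatility into a formula automatically picks up the smile's contribution to the density, which is what makes the tail components recoverable from quoted skews.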

Keywords: carry trade, forward premium anomaly, FX option, interest rate swaption, implied volatility skew, uncovered interest rate parity

Procedia PDF Downloads 438