Search results for: time series

125 Ingenious Eco-Technology for Transforming Food and Tanneries Waste into a Soil Bio-Conditioner and Fertilizer Product Used for Recovery and Enhancement of the Productive Capacity of the Soil

Authors: Petre Voicu, Mircea Oaida, Radu Vasiu, Catalin Gheorghiu, Aurel Dumitru

Abstract:

The present work deals with the way in which food and tannery waste can be used in agriculture. Owing to the lack of efficient recycling technologies, appreciable quantities of residual organic waste accumulate and find a use only rarely, and only after long storage in landfills. The main disadvantages of long storage of organic waste are the unpleasant smell, the high content of pathogenic agents, and the high water content. The release of these enormous amounts demands solutions that avoid environmental pollution. The approach practiced by us and presented in this paper consists of processing this waste in special installations, testing it in pilot experimental perimeters, and later applying it to agricultural land without harming the quality of the soil, the agricultural crops, or the environment. The current crisis of raw materials and energy also raises particular problems in the field of organic waste valorization, an activity that takes place with low energy consumption. At the same time, the composition of these wastes recommends them as useful secondary resources in agriculture. The transformation of food scraps and other concentrated organic residues thus acquires a new orientation, in which these materials are seen as important secondary resources. The use of food and tannery waste in agriculture is also stimulated by the increasing scarcity of chemical fertilizers and the continuous increase in their price, at a time when the soil requires increased amounts of fertilizer in order to obtain high, stable, and profitable production. The need to maintain and increase the humus content of the soil is also taken into account, as it is an essential factor of soil fertility, a source and reserve of nutrients and microelements, and an important contributor to the soil's buffering capacity; it also allows a more sparing use of chemical fertilizers and improves soil structure and water permeability, with positive effects on the quality of agricultural operations and on preventing excess and/or deficit of soil moisture.

Keywords: Organic residue, food and tannery waste, fertilizer, soil.

124 Low Energy Technology for Leachate Valorisation

Authors: Jesús M. Martín, Francisco Corona, Dolores Hidalgo

Abstract:

Landfills pose long-term threats to soil, air, groundwater and surface water due to the formation of greenhouse gases (methane and carbon dioxide) and of leachate from decomposing garbage. The composition of leachate differs from site to site and also within a landfill, and it changes over time (from weeks to years) because the landfilled waste remains biologically highly active. The composition of the leachate depends mainly on factors such as the characteristics of the waste, the moisture content, climatic conditions, the degree of compaction and the age of the landfill. Therefore, leachate composition cannot be generalized, and traditional treatment models should be adapted in each case. Although leachate composition is highly variable, what different leachates have in common are hazardous constituents and their potential eco-toxicological effects on human health and on terrestrial ecosystems. Since each leachate has a distinct composition, each landfill or dumping site represents a different type of risk to its environment. Nevertheless, leachates are always characterized by high organic concentration, conductivity, heavy metals and ammonia nitrogen. Leachate can affect the current and future quality of water bodies through uncontrolled infiltration. Therefore, control and treatment of leachate is one of the biggest issues in the design and management of urban solid waste treatment plants and landfills. This work presents a treatment model that will be carried out "in-situ" using a cost-effective novel technology that combines solar evaporation/condensation with forward osmosis. The plant is powered by renewable energies (solar energy, biomass and residual heat), which minimizes the carbon footprint of the process. The final effluent quality is very high, allowing reuse (preferred) or discharge into watercourses. In the particular case of this work, the final effluents will be reused for cleaning and gardening purposes. A minor semi-solid residual stream is also generated in the process. Due to its special composition (rich in metals and inorganic elements), this stream will be valorized in ceramic industries to improve the characteristics of the final products.

Keywords: Forward osmosis, landfills, leachate valorization, solar evaporation.

123 Transportation Mode Choice Analysis for Accessibility of the Mehrabad International Airport by Statistical Models

Authors: N. Mirzaei Varzeghani, M. Saffarzadeh, A. Naderan, A. Taheri

Abstract:

Countries are progressing, and the world's busiest airports see year-on-year increases in travel demand. Passengers' acceptance of an airport depends on its appeal, which includes the routes between the city and the airport as well as the facilities available to reach it. One of the critical roles of transportation planners is to predict future transportation demand so that an integrated, multi-purpose system can be provided and diverse modes of transportation (rail, air, and land) can be delivered to a destination such as an airport. In this study, 356 questionnaires were filled out in person over six days. First, the attraction of business and non-business trips was studied using the data and a linear regression model. Lower travel costs and a larger share of passengers aged 55 and older using this airport are among the factors that matter most for business trips. Non-business travelers, on the other hand, prioritized using personal vehicles to get to the airport and having convenient access to it. Business travelers are also less price-sensitive than non-business travelers regarding airport travel. Furthermore, carrying additional luggage (for example, more than one suitcase per person) clearly decreases the attractiveness of public transit. Afterward, based on the mode and purpose of the trip, the locations generating the most trips to the airport were identified. The district generating the most trips was District 2 of Tehran, with 23 visits, and the most popular mode of transportation from that location was an online taxi, with 12 trips. Then, the variables significant in separating the travel modes used to access the airport and in explaining travel behavior were investigated for all systems. In this respect, the most crucial factor is the time it takes to get to the airport, followed by the mode's user-friendliness as a component of passenger preference. It has also been demonstrated that improving public transportation travel times reduces the market share of private transportation, including taxicabs. Based on the responses of users of personal and semi-public vehicles, passengers' willingness to approach the airport via public transportation systems was explored in order to enhance present services and develop new strategies for providing the most efficient modes of transportation. Using the binary model, it was clear that business travelers and people who had already driven to the airport were the least likely to change.
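
As an illustration of the kind of binary mode-choice model referred to above, the following Python sketch fits a binary logit on synthetic survey-style data; the variable names and coefficients are assumptions for demonstration only, not the study's questionnaire items or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic survey-style data: column names and coefficients below are
# illustrative assumptions, not the study's questionnaire variables.
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "access_time_min": rng.uniform(15, 90, n),   # door-to-airport time by car
    "travel_cost": rng.uniform(2, 12, n),        # generalized cost of transit
    "is_business_trip": rng.integers(0, 2, n),
    "extra_luggage": rng.integers(0, 2, n),
})
# Assumed utility: longer car access times and lower costs raise the odds of
# choosing public transport; business trips and extra luggage lower them.
util = (0.05 * df.access_time_min - 0.3 * df.travel_cost
        - 0.8 * df.is_business_trip - 0.6 * df.extra_luggage - 0.5)
df["chooses_transit"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-util)))

# Binary logit: probability of choosing public transport to reach the airport
model = smf.logit(
    "chooses_transit ~ access_time_min + travel_cost + is_business_trip + extra_luggage",
    data=df,
).fit(disp=False)
print(model.params)
```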

Keywords: Multimodal transportation, travel behavior, demand modeling, statistical models.

122 Analysis of Trend and Variability of Rainfall in the Mid-Mahanadi River Basin of Eastern India

Authors: Rabindra K. Panda, Gurjeet Singh

Abstract:

The major objective of this study was to analyze the trend and variability of rainfall in the middle Mahanadi river basin located in eastern India. The trend of variation of extreme rainfall events has a predominant effect on agricultural water management and on extreme hydrological events such as floods and droughts. The Mahanadi river basin is one of the major river basins of India, with an area of 141,589 km2, and is divided into three regions: the upper, middle and delta regions. The middle region of the Mahanadi river basin has an area of 48,700 km2 and is mostly dominated by agricultural land, where agriculture is largely rainfed. The study region has five agro-climatic zones, namely: East and South Eastern Coastal Plain, North Eastern Ghat, Western Undulating Zone, Western Central Table Land and Mid Central Table Land, numbered as zones 1 to 5 respectively for convenience in reporting. In the present study, the variability and trends of annual, seasonal, and monthly rainfall were analyzed using daily rainfall data collected from the India Meteorological Department (IMD) for 35 years (1979-2013) for the five agro-climatic zones. The long-term variability of rainfall was investigated by evaluating the mean, standard deviation and coefficient of variation. The long-term trend of rainfall was analyzed using the Mann-Kendall test on monthly, seasonal and annual time scales. It was found that there is a decreasing trend in rainfall during the winter and pre-monsoon seasons for zones 2, 3 and 4, whereas in the monsoon (rainy) season there is an increasing trend for zones 1, 4 and 5, with significance levels ranging between 90% and 95%. On the other hand, the mean annual rainfall shows an increasing trend at the 99% significance level. The estimated seasonality index showed that the rainfall distribution is asymmetric and concentrated within a 3-4 month period. The study will help in understanding the spatio-temporal variation of rainfall and in determining the relation between the current rainfall trend and the climate change scenario of the study region.
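
For readers who want to reproduce the trend analysis, the sketch below implements a basic Mann-Kendall test in Python on a synthetic annual rainfall series; the simplified variance formula ignores ties, and the generated series is illustrative, not the IMD data analysed in the paper.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Return the Mann-Kendall S statistic, Z score and two-sided p-value.

    Simplified sketch: the variance formula omits the correction for tied
    values, which is usually adequate for continuous rainfall totals.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))      # two-sided p-value
    return s, z, p

# Illustrative 35-year annual rainfall series (mm), not the study data
rng = np.random.default_rng(0)
rainfall = 1200 + 3.0 * np.arange(35) + rng.normal(0, 80, 35)
s, z, p = mann_kendall(rainfall)
print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.3f}")  # positive Z -> increasing trend
```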

Keywords: Eastern India, long-term variability and trends, Mann-Kendall test, seasonality index, spatio-temporal variation.

121 Collaborative and Experimental Cultures in Virtual Reality Journalism: From the Perspective of Content Creators

Authors: Radwa Mabrook

Abstract:

Virtual Reality (VR) content creation is a complex and expensive process, which requires multi-disciplinary teams of content creators. Grant schemes from technology companies help media organisations explore the potential of VR in journalism and factual storytelling. Media organisations try to do as much as they can in-house, but they may outsource due to time constraints and skill availability. Journalists, game developers, sound designers and creative artists work together and bring in new cultures of work. This study explores the collaborative, experimental nature of VR content creation by tracing every actor involved in the process and examining their perceptions of VR work. The study builds on Actor Network Theory (ANT), which decomposes phenomena into their basic elements and traces the interrelations among them. Accordingly, the researcher conducted 22 semi-structured interviews with VR content creators between November 2017 and April 2018. Purposive and snowball sampling techniques allowed the researcher to recruit fact-based VR content creators from production studios and media organisations, as well as freelancers. Interviews lasted up to three hours and were a mix of Skype calls and in-person interviews. Participants consented to their interviews being recorded and to their names being revealed in the study. The researcher coded the interview transcripts in NVivo software, looking for key themes corresponding to the research questions. The study revealed that VR content creators must be adaptive to change, open to learning and comfortable with mistakes. The VR content creation process is highly iterative because VR has no established workflow or visual grammar. Multi-disciplinary VR team members often speak different disciplinary languages, making communication difficult. However, adaptive content creators perceive VR work as a fun experience and an opportunity to learn. The traditional sense of competition and the striving for information exclusivity are now replaced by a strong drive for knowledge sharing. VR content creators are open to sharing their methods of work and their experiences. They aim to build a collaborative network that seeks to harness VR technology for journalism and factual storytelling. Indeed, VR is instilling collaborative and experimental cultures in journalism.

Keywords: Collaborative culture, content creation, experimental culture, virtual reality.

120 Financial Regulations in the Process of Global Financial Crisis and Macroeconomics Impact of Basel III

Authors: M. Okan Tasar

Abstract:

Basel III (or the Third Basel Accord) is a global regulatory standard on bank capital adequacy, stress testing and market liquidity risk, agreed upon by the members of the Basel Committee on Banking Supervision in 2010-2011 and scheduled to be introduced from 2013 until 2018. Basel III is a comprehensive set of reform measures. These measures aim to (1) improve the banking sector's ability to absorb shocks arising from financial and economic stress, whatever the source, (2) improve risk management and governance, and (3) strengthen banks' transparency and disclosures. The reforms likewise target (1) bank-level, or micro-prudential, regulation, which will help raise the resilience of individual banking institutions in periods of stress, and (2) macro-prudential regulation of system-wide risks that can build up across the banking sector, as well as the pro-cyclical amplification of these risks over time. These two approaches to supervision are complementary, as greater resilience at the individual bank level reduces the risk of system-wide shocks. Regarding the macroeconomic impact of Basel III, the OECD estimates that the medium-term impact of Basel III implementation on GDP growth is in the range of -0.05 to -0.15 percent per year. Economic output is mainly affected by an increase in bank lending spreads, as banks pass on the rise in funding costs, due to higher capital requirements, to their customers. The estimated effects on GDP growth assume no active response from monetary policy; the impact of Basel III on economic output could be offset by a reduction (or delayed increase) in monetary policy rates of about 30 to 80 basis points. The aim of this paper is to create a framework based on the recent regulations in order to prevent financial crises. The experience gained in overcoming the global financial crisis can thus contribute to addressing financial crises that may occur in future periods. The first part of the paper examines the effects of the global crisis on the banking system and the concept of financial regulation. In the second part, financial regulations, and Basel III in particular, are analyzed. The last section explores the possible macroeconomic impacts of Basel III.

Keywords: Banking Systems, Basel III, Financial regulation, Global Financial Crisis.

119 Design and Development of Graphene Oxide Modified by Chitosan Nanosheets Showing pH-Sensitive Surface as a Smart Drug Delivery System for Controlled Release of Doxorubicin

Authors: Parisa Shirzadeh

Abstract:

Drug delivery systems in which drugs are administered in the traditional way, in multiple doses and at specified intervals, no longer meet the needs of modern drug delivery. In today's world, we are dealing with a huge number of recombinant peptide and protein drugs and analogues of the body's hormones, most of which are made with genetic engineering techniques. Most of these drugs are used to treat critical diseases such as cancer. Because of the limitations of the traditional method, researchers have sought ways to solve its problems to a large extent. Following these efforts, controlled drug release systems were introduced, which have many advantages. Using controlled release, the concentration of the drug in the body is kept at a defined level, and the intended effect is achieved in a shorter time and at a higher rate. Graphene is a biodegradable, non-toxic, natural material, and the wide surface of graphene plates makes graphene more effective to modify than carbon nanotubes. Graphene oxide is often synthesized using concentrated oxidizers such as sulfuric acid, nitric acid, and potassium permanganate, based on the Hummers method. Graphene oxide is very hydrophilic, dissolves easily in water, and forms a stable solution. In this work, graphene oxide (GO) was covalently modified with chitosan (CS) and developed for the controlled release of doxorubicin (DOX). GO was produced by the Hummers method under acidic conditions. It was then chlorinated with oxalyl chloride to increase its reactivity toward amines. After that, in the presence of CS, the amide coupling reaction was performed to form amide linkages, and DOX was attached to the carrier surface through π-π interaction in phosphate buffer. GO, GO-CS, and GO-CS-DOX were characterized by FT-IR and TGA to identify the new functional groups that indicate the bonding of CS to GO, and by Raman spectroscopy and SEM to determine the changes in the size and number of layers. The loading and release capability was determined by UV-Visible spectroscopy. The loading results showed a high DOX absorption capacity (99%), and pH dependence was identified from the release of DOX from the GO-CS nanosheet at pH 5.3 and 7.4, which showed a faster release rate under acidic conditions.

Keywords: Graphene oxide, chitosan, nanosheet, controlled drug release, doxorubicin.

118 The Evolution of Traditional Rhythms in Redefining the West African Country of Guinea

Authors: Janice Haworth, Karamoko Camara, Marie-Therèse Dramou, Kokoly Haba, Daniel Léno, Augustin Mara, Adama Noël Oulari, Silafa Tolno, Noël Zoumanigui

Abstract:

The traditional rhythms of the West African country of Guinea have played a centuries-long role in defining the different people groups that make up the country. Throughout their history, before and since colonization by the French, the different ethnicities have used their traditional music as a distinct part of their historical identities. That is starting to change. Guinea is an impoverished nation created in the early twentieth century with little regard for the history and cultures of the people who were included. The traditional rhythms of the different people groups and their heritages have remained. Fifteen individual traditional Guinean rhythms were chosen to represent popular rhythms from the four geographical regions of Guinea. Each rhythm was traced back to its native village and video recorded on-site, performed by as many different local groups as could be located. The cyclical rhythm patterns were transcribed in a circular, spatial design and then copied into a box notation system where sounds happening at the same time could be studied. These rhythms were analyzed for their consistency across performances in a Fundamental Rhythm Pattern analysis, so that rhythms could be compared in terms of how they are changing through different performances. The analysis showed that the traditional rhythm performances of the Middle and Forest Guinea regions were the most cohesive and showed the least evidence of change between performances. The role of music in each of these regions is both limited and focused. The Coastal and High Guinea regions have much in common historically through their ethnic history and modern-day trade connections, but their rhythm performances are less consistent and demonstrate more changes in how they are performed today. In each of these regions, the role and usage of music is much freer and more widespread. In spite of advances being made as a country, different ethnic groups still frequently respond and participate (dance and sing) only to the music of their native ethnicity. There is some evidence that this self-imposed musical barrier is beginning to change and evolve, partly through the development of better roads, wider access to electricity and technology, the nationwide Ebola health crisis, and a growing self-identification as a unified nation.

Keywords: Cultural identity, Guinea, traditional rhythms, West Africa.

117 Identifying Temporary Housing Main Vertexes through Assessing Post-Disaster Recovery Programs

Authors: S. M. Amin Hosseini, Oriol Pons, Carmen Mendoza Arroyo, Albert de la Fuente

Abstract:

In the aftermath of a natural disaster, the major challenge most cities and societies face, regardless of their level of prosperity, is to provide temporary housing (TH) for the displaced population (DP). However, the features of the TH applied in previous recovery programs have varied greatly from case to case. This situation demonstrates that providing temporary accommodation for DP in a short period of time, and usually in great numbers, is complicated in terms of satisfying all the beneficiaries' needs, regardless of a society's welfare level. Furthermore, when previously used strategies are applied to different areas, they are most likely destined to fail unless they are context- and culture-based. Therefore, as the populations of disaster-prone cities increase, decision-makers need a platform that helps determine all the factors that caused the outcomes of prior programs. To this end, this paper aims to assess the problems, requirements, limitations, potential responses, chosen strategies, and their outcomes, in order to determine the main elements that have influenced the TH process. In this regard, and in order to determine a customizable strategy, this study analyses the TH programs of five cases: the Marmara earthquake, 1999; the Bam earthquake, 2003; the Aceh earthquake and tsunami, 2004; Hurricane Katrina, 2005; and the L'Aquila earthquake, 2009. The research results demonstrate that the main vertexes of TH are: (1) local characteristics, including local potential and the features of the affected population; (2) TH properties, which need to be considered in four phases: planning, provision/construction, operation, and second life; and (3) natural hazard impacts, which embrace intensity and type. Accordingly, this study offers decision-makers the opportunity to discover the main vertexes, their subsets, their interactions, and the relation between strategies and outcomes based on the local conditions of each case. Consequently, authorities may acquire the capability to design a customizable method in the face of complicated post-disaster housing in the wake of future natural disasters.

Keywords: Post-disaster temporary accommodation, urban resilience, natural disaster, local characteristic.

116 Switching Studies on Ge15In5Te56Ag24 Thin Films

Authors: Diptoshi Roy, G. Sreevidya Varma, S. Asokan, Chandasree Das

Abstract:

Germanium telluride-based quaternary thin film switching devices with the composition Ge15In5Te56Ag24 have been deposited in sandwich geometry on glass substrates with aluminum as the top and bottom electrodes. The bulk glassy form of this composition was prepared by the melt quenching technique. In this technique, appropriate quantities of high-purity elements are sealed in a quartz ampoule under a vacuum of 10-5 mbar. The ampoule is then rotated in a horizontal rotary furnace for 36 hours to ensure homogeneity of the melt. After that, the ampoule is quenched into a mixture of ice water and NaOH to obtain the bulk ingot of the sample. The sample is then coated onto a glass substrate using the flash evaporation technique at a vacuum level of 10-6 mbar. The XRD report reveals the amorphous nature of the thin film sample, and Energy-Dispersive X-ray Analysis (EDAX) confirms that the film retains the same chemical composition as the base sample. The electrical switching behavior of the device is studied with the help of a Keithley 2410C source-measure unit interfaced with LabVIEW 7 (National Instruments). Switching studies, mainly the SET operation (changing the state of the material from amorphous to crystalline), were conducted on the thin film form of the sample. The device is found to manifest memory switching, as it remains 'ON' even after the removal of the electric field. It is also found that the amorphous Ge15In5Te56Ag24 thin film exhibits clean memory-type electrical switching behavior, as indicated by the absence of fluctuations in the I-V characteristics. The I-V characteristics also reveal that switching is fast in this sample, as no data points could be seen in the negative resistance region during the transition to the ON state, which points to a fast phase change during the SET process. Scanning Electron Microscopy (SEM) studies were performed on the chosen sample to study the structural changes occurring during switching. SEM studies on the switched Ge15In5Te56Ag24 sample have shown morphological changes at the switching site, which can be explained by the formation of a conducting crystalline channel in the device when it switches from the high-resistance to the low-resistance state. From these studies it can be concluded that the material may find application in fast-switching Non-Volatile Phase Change Memory (PCM) devices.

Keywords: Chalcogenides, vapor deposition, electrical switching, PCM.

115 Retrieval Augmented Generation against the Machine: Merging Human Cyber Security Expertise with Generative AI

Authors: Brennan Lodge

Abstract:

Amidst a complex regulatory landscape, Retrieval Augmented Generation (RAG) emerges as a transformative tool for Governance, Risk and Compliance (GRC) officers. This paper details the application of RAG in synthesizing Large Language Models (LLMs) with external knowledge bases, offering GRC professionals an advanced means to adapt to rapid changes in compliance requirements. While the development of standalone LLMs is exciting, such models have their downsides: LLMs cannot easily expand or revise their memory, cannot straightforwardly provide insight into their predictions, and may produce “hallucinations.” Leveraging a pre-trained seq2seq transformer and a dense vector index of domain-specific data, this approach integrates real-time data retrieval into the generative process, enabling gap analysis and the dynamic generation of compliance and risk management content. We delve into the mechanics of RAG, focusing on its dual structure, which pairs parametric knowledge contained within the transformer model with non-parametric data extracted from an updatable corpus. This hybrid model enhances decision-making through context-rich insights, drawing from the most current and relevant information, thereby enabling GRC officers to maintain a proactive compliance stance. Our methodology aligns with the latest advances in neural network fine-tuning, providing a granular, token-level application of retrieved information to inform and generate compliance narratives. By employing RAG, we exhibit a scalable solution that can adapt to novel regulatory challenges and cybersecurity threats, offering GRC officers a robust, predictive tool that augments their expertise. The granular application of RAG’s dual structure not only improves compliance and risk management protocols but also informs the development of compliance narratives with pinpoint accuracy. It underscores AI’s emerging role in strategic risk mitigation and proactive policy formation, positioning GRC officers to anticipate and navigate the complexities of regulatory evolution confidently.
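
A minimal sketch of the retrieve-then-augment pattern described above follows; it uses a TF-IDF index and cosine similarity over a toy, hypothetical compliance corpus, and stops at prompt construction rather than calling a seq2seq generator, so it should be read as an illustration of the workflow rather than the paper's implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy, hypothetical GRC knowledge base; in the paper this would be a dense
# vector index over domain-specific regulatory documents.
corpus = [
    "Vendors processing personal data must sign a data processing agreement.",
    "Critical vulnerabilities must be patched within 14 days of disclosure.",
    "Access to production systems requires multi-factor authentication.",
    "Incident reports must be filed with the regulator within 72 hours.",
]

vectorizer = TfidfVectorizer().fit(corpus)
doc_vectors = vectorizer.transform(corpus)          # sparse (n_docs, n_terms)

def retrieve(query, k=2):
    """Return the k corpus passages most similar to the query (cosine similarity)."""
    q = vectorizer.transform([query])
    scores = (doc_vectors @ q.T).toarray().ravel()  # TF-IDF rows are L2-normalised
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

query = "How quickly do we need to notify authorities after a breach?"
context = "\n".join(retrieve(query))

# The augmented prompt below would be fed to a generative (seq2seq) model;
# the generation step is omitted to keep the sketch self-contained.
prompt = f"Answer using only the context.\n\nContext:\n{context}\n\nQuestion: {query}\n"
print(prompt)
```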

Keywords: Retrieval Augmented Generation, Governance Risk and Compliance, Cybersecurity, AI-driven Compliance, Risk Management, Generative AI.

114 Specification Requirements for a Combined Dehumidifier/Cooling Panel: A Global Scale Analysis

Authors: Damien Gondre, Hatem Ben Maad, Abdelkrim Trabelsi, Frédéric Kuznik, Joseph Virgone

Abstract:

The use of a radiant cooling solution would make it possible to lower cooling needs, which is of great interest when the demand is initially high (hot climates). However, radiant systems are not naturally compatible with humid climates, since a low-temperature surface leads to condensation risks as soon as the surface temperature is close to or lower than the dew point temperature. A radiant cooling system combined with a dehumidification system would make it possible to remove humidity from the space, thereby lowering the dew point temperature. The humidity removal needs to be especially effective near the cooled surface. This requirement could be fulfilled by a system using a single desiccant fluid for the removal of both excess heat and moisture. This work aims at providing an estimate of the specification requirements of such a system in terms of the cooling power and dehumidification rate required to fulfill comfort needs and to prevent any condensation risk on the cool panel surface. The present paper develops a preliminary study of the specification requirements, performance and behavior of a combined dehumidifier/cooling ceiling panel for different operating conditions. This study has been carried out using the TRNSYS software, which allows nodal calculations of thermal systems. It consists of the dynamic modeling of the heat and vapor balances of a 5 m x 3 m x 2.7 m office space. In a first design estimation, this room is equipped with an ideal heating, cooling, humidification and dehumidification system so that the room temperature is always maintained between 21 °C and 25 °C with a relative humidity between 40% and 60%. The room is also equipped with a ventilation system that includes a heat recovery heat exchanger and another heat exchanger connected to a heat sink. The main results show that the system should be designed to meet a cooling power of 42 W·m−2 and a dehumidification rate of 45 g H2O·h−1. In a second step, a parametric study of comfort and system performance was carried out on a more realistic system (including a chilled ceiling) under different operating conditions, enabling an acceptable range of operating conditions to be estimated. This preliminary study is intended to provide useful information for the system design.
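
To make the sizing figures above concrete, the following Python sketch runs a single-node heat and moisture balance for a room of the stated dimensions; the internal loads, coefficients and time step are illustrative assumptions and not outputs of the TRNSYS model used in the study.

```python
# Single-node heat and moisture balance for a 5 m x 3 m x 2.7 m office.
# All loads and coefficients are illustrative assumptions, not TRNSYS outputs.
RHO_AIR = 1.2          # kg/m3
CP_AIR = 1005.0        # J/(kg K)

volume = 5.0 * 3.0 * 2.7          # m3
mass_air = RHO_AIR * volume       # kg of room air (~48.6 kg)

sensible_gain = 600.0             # W, assumed internal + envelope gains
cooling_power = 42.0 * 15.0       # W, 42 W/m2 over the 15 m2 floor
moisture_gain = 50.0 / 3600.0     # g/s, assumed internal moisture generation
dehumid_rate = 45.0 / 3600.0      # g/s, 45 g H2O/h removed by the panel

T, w = 26.0, 11.0                 # initial air temperature (C), humidity ratio (g/kg)
dt = 60.0                         # s, time step

for _ in range(120):              # simulate two hours
    T += (sensible_gain - cooling_power) * dt / (mass_air * CP_AIR)
    w += (moisture_gain - dehumid_rate) * dt / mass_air   # g/kg per step

print(f"After 2 h: T = {T:.1f} C, humidity ratio = {w:.2f} g/kg")
```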

Keywords: Dehumidification, nodal calculation, radiant cooling panel, system sizing.

113 The Impact of Supply Chain Strategy and Integration on Supply Chain Performance: Supply Chain Vulnerability as a Moderator

Authors: Yi-Chun Kuo, Jo-Chieh Lin

Abstract:

The objective of a supply chain strategy is to reduce waste and increase efficiency to attain cost benefits, and to guarantee supply chain flexibility when facing an ever-changing market environment in order to meet customer requirements. Strategy implementation aims to fulfill common goals and attain benefits by integrating upstream and downstream enterprises, sharing information, conducting common planning, and taking part in decision making, so as to enhance the overall performance of the supply chain. With the rise of outsourcing and globalization, the increasing dependence on suppliers and customers, and the rapid development of information technology, the complexity and uncertainty of the supply chain have intensified and supply chain vulnerability has surged, resulting in adverse effects on supply chain performance. Thus, this study uses supply chain vulnerability as a moderating variable and applies structural equation modeling (SEM) to determine the relationships among supply chain strategy, supply chain integration, and supply chain performance, as well as the moderating effect of supply chain vulnerability on supply chain performance. Data were collected through questionnaires administered to the management level of enterprises in Taiwan and China; 149 questionnaires were received. The results of confirmatory factor analysis show that the path coefficients of supply chain strategy on supply chain integration and on supply chain performance are positive (0.497, t = 4.914; 0.748, t = 5.919), indicating a significantly positive effect. Supply chain integration is also significantly positively correlated with supply chain performance (0.192, t = 2.273). The moderating effects of supply chain vulnerability on the relationships of supply chain strategy and supply chain integration with supply chain performance are significant (7.407; 4.687). In Taiwan, 97.73% of enterprises are small and medium-sized enterprises (SMEs) focusing on original equipment manufacturer (OEM) and original design manufacturer (ODM) orders. In order to meet the needs of customers and respond to market changes, these enterprises place particular emphasis on supply chain flexibility and on integration with upstream and downstream enterprises. According to the observations of this research, the effect of supply chain vulnerability on supply chain performance is significant, so enterprises need to attach great importance to the management of supply chain risk and conduct risk analyses of their suppliers in order to formulate response strategies for emergency situations. At the same time, risk management should be incorporated into the supply chain so as to reduce the effect of supply chain vulnerability on overall supply chain performance.
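
As a simplified stand-in for the SEM moderation test reported above, the sketch below estimates a moderated regression with an interaction term on synthetic data; the construct scores and coefficients are invented for illustration, and the approach is a deliberate simplification of structural equation modeling, not the paper's analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data standing in for the survey constructs.
rng = np.random.default_rng(7)
n = 149
integration = rng.normal(0, 1, n)          # supply chain integration score
vulnerability = rng.normal(0, 1, n)        # supply chain vulnerability score
performance = (0.5 * integration
               - 0.3 * vulnerability
               - 0.25 * integration * vulnerability   # assumed moderation effect
               + rng.normal(0, 1, n))
df = pd.DataFrame({"integration": integration,
                   "vulnerability": vulnerability,
                   "performance": performance})

# "integration * vulnerability" expands to both main effects plus the
# interaction term, whose coefficient captures the moderating effect.
fit = smf.ols("performance ~ integration * vulnerability", data=df).fit()
print(fit.params)
```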

Keywords: Supply chain integration, supply chain performance, supply chain vulnerability, structural equation modeling.

112 Advanced Compound Coating for Delaying Corrosion of Fast-Dissolving Alloy in High Temperature and Corrosive Environment

Authors: Lei Zhao, Yi Song, Tim Dunne, Jiaxiang (Jason) Ren, Wenhan Yue, Lei Yang, Li Wen, Yu Liu

Abstract:

Fast-dissolving magnesium (DM) alloy technology has contributed significantly to the “Shale Revolution” in the oil and gas industry. This application requires DM downhole tools to dissolve initially at a slow rate and then accelerate rapidly to a high rate after a certain period of operation (typically 8 h to 2 days), a contradictory requirement that can hardly be addressed by traditional Mg alloying or processing itself. Premature disintegration of downhole DM tools has been broadly reported from field trials. To address this issue, “temporary” thin polymer coatings of various formulations are currently applied to the DM surface to delay its initial dissolution. Because of part conveyance, harsh downhole conditions, and the high dissolution rate of the base material, the current delay coatings relying on pure polymers are found to perform well only at low temperature (typically < 100 ℃) and on parts without sharp edges or corners, as severe geometries prevent high-quality thin film coatings from forming effectively. In this study, a coating technology combining Plasma Electrolytic Oxide (PEO) coatings with advanced thin film deposition has been developed, which can delay the dissolution of complex DM parts (with sharp corners) in corrosive fluid at 150 ℃ for over 2 days. Synergistic effects between the porous, hard PEO coating and the chemically inert elastic polymer sealing lead to the improved dissolution delay, and strong chemical/physical bonding between these two layers is found to play an essential role. The microstructure of this advanced coating and the compatibility between PEO and various polymer selections have been thoroughly investigated, and a model is also proposed to explain the delaying performance. This study could not only benefit the oil and gas industry in unlocking High Temperature High Pressure (HTHP) unconventional resources that were previously inaccessible, but also potentially provides a technical route for other industries (e.g., bio-medical, automobile, aerospace) where primer anti-corrosive protection of light Mg alloys is highly demanded.

Keywords: Dissolvable magnesium, coating, plasma electrolytic oxide, sealer.

111 The Mechanism Underlying Empathy-Related Helping Behavior: An Investigation of Empathy-Attitude-Action Model

Authors: Wan-Ting Liao, Angela K. Tzeng

Abstract:

Empathy has been an important issue in psychology and education, as well as in cognitive neuroscience. Empathy has two major components: cognitive and emotional. The cognitive component refers to the ability to understand others’ perspectives, thoughts, and actions, whereas the emotional component refers to understanding how others feel. Empathy can be induced, attitude can then be changed, and with enough attitude change, helping behavior can occur. This finding leads us to two questions: is attitude change really necessary for prosocial behavior? And what roles do cognitive and affective empathy play? For the second question, participants with different levels of psychopathic personality (PP) traits are critical, because people high in PP have been found to suffer a deficit only in affective empathy; their cognitive empathy shows no significant difference from the control group. 132 college students voluntarily participated in the current three-stage study. Stage 1 collected basic information, including the Interpersonal Reactivity Index (IRI), the Psychopathic Personality Inventory-Revised (PPI-R), an Attitude Scale, a Visual Analogue Scale (VAS), and demographic data. Stage 2 was for empathy induction with three controversial scenarios, namely domestic violence, depression with a suicide attempt, and an ex-offender. Participants read all three stories and then rewrote them from one of two perspectives (empathetic vs. objective). They then completed the VAS and the Attitude Scale one more time to record their post-induction attitude and emotional status. Three IVs were introduced for data analysis: PP (high vs. low), Responsibility (whether or not the character is responsible for what happened), and Perspective-taking (empathic vs. objective). Stage 3 was for action: participants were instructed to freely use the 17 tokens they had received as donations. They were debriefed and interviewed at the end of the experiment. The major findings were that people with higher empathy tend to take more helping action, and that attitude change is not necessary for prosocial behavior. The controversial nature of the scenarios and participants’ familiarity with the target groups play very important roles. Finally, people high in PP tend to show more public prosocial behavior due to their affective empathy deficit. Pre-existing values and beliefs, as well as recent dramatic social events, seem to have a big impact and possibly reduce the effect of the independent variables (IVs) in our paradigm.

Keywords: Affective empathy, attitude, cognitive empathy, prosocial behavior, psychopathic traits.

110 Reconsidering the Palaeo-Environmental Reconstruction of the Wet Zone of Sri Lanka: A Zooarchaeological Perspective

Authors: Kalangi Rodrigo, Kelum Manamendra-Arachchi

Abstract:

Bones, teeth, and shells have been acknowledged over the last two centuries as evidence of chronology, palaeo-environment, and human activity. Faunal traces are valid evidence of past conditions because they have properties that have not changed over long periods. Sri Lanka is an island with a diverse range of prehistoric occupation across its ecological zones. Defining the palaeoecology of past societies is an archaeological approach developed in the 1960s. It is mainly concerned with the reconstruction, from available geological and biological evidence, of past biota, populations, communities, landscapes, environments, and ecosystems. This early and persistent human fossil, technical, and cultural florescence, as well as a collection of well-preserved tropical-forest rock shelters with associated on-site palaeoenvironmental records, makes Sri Lanka a central and unusual case study for determining the extent and strength of early human engagement with tropical forests. Excavations carried out in prehistoric caves in the low-country wet zone have shown that over the last 50,000 years the temperature in the lowland rainforests has not varied by more than 5 degrees. Based on Semnopithecus priam (gray langur) remains unearthed from wet zone prehistoric caves, it has been argued that during periods of momentous climate change, such as the Last Glacial Maximum (LGM) and the Terminal Pleistocene/Early Holocene boundary, there was a recognizable preference for semi-open ‘intermediate’ rainforest or forest edges. The continuous presence of the genera Acavus and Oligospira, along with the uninterrupted, pervasive presence of Canarium sp. (‘kekuna’ nut), further indicates that temperatures in the lowland rain forests have not changed by more than 5 °C over the last 50,000 years. Site catchment or territorial analysis is no longer defensible on account of time-distance factors, and optimal foraging theory also fails, because prehistoric people were aware of decreasing cost-benefit ratios when locating sites and generally played out a settlement strategy that minimized the ratio of energy expended to energy produced.

Keywords: Palaeo-environment, palaeo-ecology, palaeo-climate, prehistory, zooarchaeology.

109 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA and are stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training Deep Neural Networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which create a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which a sequence is most likely to come; and (iv) training a multiple instance learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches performance comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
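
The sketch below illustrates step (iv), attention-based multiple instance learning pooling, with random stand-in embeddings and untrained weights; it is a conceptual illustration of how per-genome attention weights produce an interpretable patient-level prediction, not the metagenome2vec implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# One "bag" per patient: a set of read/genome embeddings (random stand-ins
# here for the learned metagenome2vec representations described above).
n_reads, emb_dim, hidden = 200, 64, 32
reads = rng.normal(size=(n_reads, emb_dim))

# Attention pooling in the spirit of attention-based multiple instance
# learning: each instance gets a weight, the bag is their weighted sum.
V = rng.normal(scale=0.1, size=(emb_dim, hidden))
w = rng.normal(scale=0.1, size=(hidden,))
attn_scores = np.tanh(reads @ V) @ w          # (n_reads,)
attn_weights = softmax(attn_scores)           # interpretable per-instance weights
bag_embedding = attn_weights @ reads          # (emb_dim,)

# Linear classifier head predicting the phenotype from the bag embedding
# (weights are random here; in practice everything is trained end to end).
clf_w = rng.normal(scale=0.1, size=(emb_dim,))
p_disease = 1.0 / (1.0 + np.exp(-(bag_embedding @ clf_w)))

print("top-weighted instances:", np.argsort(attn_weights)[::-1][:5])
print(f"predicted disease probability: {p_disease:.2f}")
```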

Keywords: Metagenomics, phenotype prediction, deep learning, embeddings, multiple instance learning.

108 Urban Accessibility of Historical Cities: The Venetian Case Study

Authors: Valeria Tatano, Francesca Guidolin, Francesca Peltrera

Abstract:

The preservation of Italy's historical heritage, at the urban and architectural scale, has to reconcile conservation-related restrictions and requirements with usability needs, which are often at odds with heritage preservation. Recent decades have been marked by the search for increased accessibility not only of public and private buildings, but of the whole historical city, including for people with disabilities. Moreover, in recent years the concepts of the Smart City and the Healthy City have sought to improve accessibility both in terms of mobility (independent or assisted) and in the fruition of goods and services, including in historical cities. The principles of Inclusive Design have introduced new criteria for the improvement of public urban space, between current regulations and best practices, and they have contributed to transforming “special needs” into an opportunity for social innovation. These considerations find a field of research and analysis in the historical city of Venice, which is at the same time a UNESCO World Heritage Site, a mass tourism destination drawing visitors from all over the world, and a city inhabited by an aging population. Due to its conformation, the Venetian urban fabric is only partially accessible: hundreds of bridges connect the many islands into which the city is divided, making it almost impossible for many people to move independently. These urban characteristics and difficulties have been the basis, over the last 20 years, for several research efforts, experiments and solutions aimed at eliminating architectural barriers, in particular to make the bridges usable. The Venetian Municipality, with the EBA Office and some external consultants, has realized several devices (e.g. the “stepped ramp” and the new accessible ramps for the Venice Marathon) that could represent an innovation for the city, moving from the use of replicable mechanical devices to specific architectural projects in order to guarantee autonomy of use. This paper presents the state of the art in bridge accessibility through an analysis based on Inclusive Design principles and on the current national and regional regulations. The purpose is to evaluate possible strategies that could improve performance, between the limits of and possibilities for intervention. The aim of the research is to lay the foundations for the development of a strategic program for the City of Venice that could successfully bring together both conservation and improvement requirements.

Keywords: Accessibility and inclusive design, historical heritage preservation, technological and social innovation.

107 Records of Lepidopteron Borers (Lepidoptera) on Stored Seeds of Indian Himalayan Conifers

Authors: Pawan Kumar, Pitamber Singh Negi

Abstract:

Many regeneration failures in conifers are often attributed to heavy insect attack and to pathogens during the period of seed formation and under storage conditions. Conifer berry and seed insects occur throughout the known range of the hosts and limit the production of seed for nursery stock. On occasion, entire seed crops are lost to insect attack. The berries and seeds of both species studied here have been found to be infested with insects. Recently, heavy damage to the berries and seeds of juniper and Chilgoza pine was observed in the field as well as under storage conditions, reducing the viability of the seeds. Both species are under great threat, and their regeneration is very low. Owing to the lack of adequate literature, a study of the damage potential of seed insects was urgently required to establish the exact status of the insect pests attacking the seeds/berries of both pine species, so that pest management practices against these insect pests could be developed. As both species are under threat and fighting for survival, the study is also important for developing management practices for the insect pests of the seeds/berries of juniper and Chilgoza pine for evaluation in the nursery, as these species form the major vegetation of their distribution zones. A six-year study on the management of insect pests of Chilgoza seeds revealed that the seeds of this species are prone to insect pests, mainly borers. During the present investigations, it was recorded that the cones are heavily attacked in natural conditions only by Dioryctria abietella (Lepidoptera: Pyralidae), but the seeds, which are economically important, are heavily infested (sometimes up to 100% damage was recorded) by the borer Plodia interpunctella (Lepidoptera: Pyralidae), which is recorded for the first time, to the authors’ best knowledge, infesting stored Chilgoza seeds. Similarly, juniper berries and seeds were heavily attacked by a single borer, Homaloxestis cholopis (Lepidoptera: Lecithoceridae), recorded as a new report both in the natural habitat and under storage conditions. During the present investigation, details of the insect pest attack on juniper and Chilgoza pine seeds and berries were recorded, and suitable management practices were also developed to contain the insect pest attack.

Keywords: Borer, conifer, cones, chilgoza pine, lepidoptera, juniper, management, seed.

106 Development of a Miniature and Low-Cost IoT-Based Remote Health Monitoring Device

Authors: Sreejith Jayachandran, Mojtaba Ghodsi, Morteza Mohammadzaheri

Abstract:

The modern, busy world is racing after new embedded technologies based on computers and software, while some people are unable to monitor their health condition and attend regular medical check-ups. Some postpone medical check-ups due to a lack of time and convenience, while others skip these regular evaluations and medical examinations because of large medical bills and hospital expenses. In this research, we present a telemonitoring device capable of monitoring, checking, and evaluating the health status of the human body remotely through the internet, designed to serve the needs of all kinds of people. The remote health monitoring device is a microcontroller-based embedded unit. The various sensors in this device are connected to the human body and, with the help of an Arduino UNO board, the required analogue data are collected from the sensors. The microcontroller on the Arduino board converts the collected analogue data into digital data and transfers the information to the cloud, where it is stored; the processed digital data are also displayed instantly on the LCD attached to the device. By accessing the cloud storage with a username and password, the person's health care team/doctors and other health staff can retrieve these data for assessment and follow-up of the patient, and family members/guardians can use the data to stay aware of the patient's current health status. Moreover, the system is connected to a GPS module, so that in emergencies the care team can locate the patient or the person carrying the device. The setup continuously evaluates and transfers the data to the cloud, and the user can predefine a normal range for each value to be used in the evaluation. For example, the normal blood pressure value is commonly taken to be around 80/120 mmHg; similarly, the Remote Health Monitoring System (RHMS) allows the ranges of values regarded as normal to be fixed in advance. This miniature IoT-based system measures 11×10×10 cm3, weighs only 500 g, and consumes just 10 mW. The smart monitoring system can be manufactured for 100 GBP (British Pound Sterling) and can facilitate communication between patients and health systems; it can also be employed for numerous other uses, including communication in the aerospace and transportation sectors.
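
The following Python sketch illustrates, conceptually, the range-checking and alert logic described above; the actual device runs on an Arduino UNO, and the ranges and readings shown here are illustrative assumptions rather than the system's configuration.

```python
# Conceptual sketch of the predefined-range checking and alerting described
# above. Ranges and readings are illustrative assumptions only.
NORMAL_RANGES = {
    "systolic_bp_mmHg":  (90, 120),
    "diastolic_bp_mmHg": (60, 80),
    "heart_rate_bpm":    (60, 100),
    "spo2_percent":      (95, 100),
    "body_temp_c":       (36.1, 37.2),
}

def check_reading(sample):
    """Compare one set of sensor readings against the predefined normal ranges
    and return a list of alert messages for any out-of-range values."""
    alerts = []
    for name, value in sample.items():
        low, high = NORMAL_RANGES[name]
        if not (low <= value <= high):
            alerts.append(f"ALERT: {name} = {value} outside normal range {low}-{high}")
    return alerts

sample = {
    "systolic_bp_mmHg": 135,
    "diastolic_bp_mmHg": 85,
    "heart_rate_bpm": 92,
    "spo2_percent": 97,
    "body_temp_c": 36.8,
}
for message in check_reading(sample) or ["All readings within the predefined ranges"]:
    print(message)
```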

Keywords: Embedded technology, telemonitoring system, microcontroller, Arduino UNO, cloud storage, GPS, Remote Health Monitoring System (RHMS), alert system.

105 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows

Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid

Abstract:

Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing the topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, experimental work in hydraulics may be very demanding in both time and cost, and computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged in a way that allows the model parameters to be evaluated from measured data. However, this approach is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive-control Ensemble Kalman Filter is implemented to assimilate the observation data optimally and obtain an accurate estimate of the topography. The main features of this method are, on the one hand, the ability to handle different complex geometries with no need to rearrange the original model or rewrite it in an explicit form, and, on the other hand, its strong stability in simulations of flows in different regimes containing shocks or discontinuities over any geometry. Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples and observation locations. The obtained results demonstrate the high reliability and accuracy of the proposed techniques.
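
To illustrate the second stage, the sketch below performs a stochastic Ensemble Kalman Filter analysis step in NumPy on a toy one-dimensional bed-reconstruction problem; the direct observation operator and synthetic bed profile are assumptions made for brevity, whereas in the paper the forward shallow-water solver maps the bed to free-surface observations.

```python
import numpy as np

def enkf_update(ensemble, observations, H, obs_std, rng):
    """One stochastic Ensemble Kalman Filter analysis step.

    ensemble:     (n_members, n_state) forecast ensemble of bed elevations
    observations: (n_obs,) measured data
    H:            (n_obs, n_state) linear observation operator
    obs_std:      observation error standard deviation
    """
    n_members = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)                 # state anomalies
    P = X.T @ X / (n_members - 1)                        # sample covariance
    R = (obs_std ** 2) * np.eye(len(observations))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain
    # Perturbed observations give each member its own innovation
    perturbed = observations + rng.normal(0.0, obs_std, (n_members, len(observations)))
    innovations = perturbed - ensemble @ H.T
    return ensemble + innovations @ K.T

# Synthetic 1-D illustration: recover a bed profile from a few noisy point
# observations (the true bed and the operator are toy assumptions).
rng = np.random.default_rng(42)
n_state, n_members, obs_std = 50, 40, 0.02
x = np.linspace(0.0, 1.0, n_state)
true_bed = 0.1 * np.exp(-((x - 0.5) ** 2) / 0.02)        # a bump on the bed

obs_idx = np.arange(5, n_state, 10)
H = np.zeros((len(obs_idx), n_state))
H[np.arange(len(obs_idx)), obs_idx] = 1.0
observations = true_bed[obs_idx] + rng.normal(0.0, obs_std, len(obs_idx))

ensemble = rng.normal(0.05, 0.05, (n_members, n_state))  # flat-ish prior guesses
for _ in range(5):                                       # iterate the analysis step
    ensemble = enkf_update(ensemble, observations, H, obs_std, rng)

print("RMSE vs true bed:", np.sqrt(np.mean((ensemble.mean(axis=0) - true_bed) ** 2)))
```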

Keywords: Optimal control, ensemble Kalman Filter, topography reconstruction, data assimilation, shallow water equations.

104 Bone Mineral Density and Trabecular Bone Score in Ukrainian Women with Obesity

Authors: Vladyslav Povoroznyuk, Nataliia Dzerovych, Larysa Martynyuk, Tetiana Kovtun

Abstract:

Obesity and osteoporosis are two diseases whose increasing prevalence and high impact on global morbidity and mortality over the two recent decades have given them the status of major health threats worldwide. Obesity appears to affect bone metabolism through complex mechanisms. Conflicting data on the connection between bone mineral density and fracture prevalence in obese patients are widely presented in the literature, and there is evidence that the correlation of weight with fracture risk is site-specific. This study is aimed at determining the connection between bone mineral density (BMD) and trabecular bone score (TBS) parameters in Ukrainian women suffering from obesity. We examined 1025 women aged 40-89 years and divided them into groups according to their body mass index: Group A included 360 women with obesity, whose BMI was ≥30 kg/m2, and Group B included 665 women without obesity and a BMI of <30 kg/m2. The BMD of the total body, lumbar spine (L1-L4), femur and forearm was measured by DXA (Prodigy, GEHC Lunar, Madison, WI, USA). The TBS of L1-L4 was assessed by means of the TBS iNsight® software installed on our DXA machine (Med-Imaps, Pessac, France). In general, obese women had a significantly higher BMD of the lumbar spine, femoral neck, proximal femur, total body and ultradistal forearm (p<0.001) in comparison with women without obesity. The TBS of L1-L4 was significantly lower in obese women compared to non-obese women (p<0.001). The BMD of the lumbar spine, femoral neck and total body differed significantly between the groups in women aged 40-49, 50-59, 60-69 and 70-79 years (p<0.05). At the same time, in women aged 80-89 years the BMD of the lumbar spine (p=0.09), femoral neck (p=0.22) and total body (p=0.06) barely differed. The BMD of the ultradistal forearm was significantly higher in obese women of all age groups (p<0.05). The TBS of L1-L4 in all age groups tended to be lower in obese women compared with the non-obese; however, these differences were not statistically significant. By contrast, a significant positive correlation was observed between fat mass and the BMD at the different sites. The correlation between fat mass and the TBS of L1-L4 was also significant, although negative. Women with vertebral fractures had a significantly lower body weight, body mass index and total body fat mass in comparison with women without vertebral fractures in their anamnesis. The frequency of vertebral fractures was 27% in obese women and 57% in women without obesity.

Keywords: Bone mineral density, trabecular bone score, obesity, women.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1691
103 International Financial Crises and the Political Economy of Financial Reforms in Turkey: 1994-2009

Authors: Birgül Şakar

Abstract:

In this study, the formation of international financial crises and the political factors behind economic crises in Turkey are evaluated in chronological order. Relevant studies from the international arena and work conducted in Turkey are assessed in the literature review. The main purpose of the study is to examine in detail the linkage between the crises and political stability in Turkey and to assess Turkey's position in this regard. The introduction is followed, in the second part of the study, by a literature survey on the models explaining the causes and consequences of crises. In the third part, the formation of the world financial crises is studied. In the fourth part, the financial crises in Turkey in 1994, 2000, 2001 and 2008 are reviewed and their political causes are analyzed. In the last part of the study, the results and recommendations are presented. Political administrations have laid the grounds for economic crises in Turkey. In this study, the emergence of economic crises in Turkey and the developments after the crises are chronologically examined, and an explanation is offered as to the cause-and-effect relationship between the political administration and economic equilibrium in the country. Economic crises can be characterized as follows: high prices of consumables, high interest rates, current account deficits, budget deficits, structural defects in government finance, rising inflation under fixed exchange rate arrangements, rising government debt, declining savings rates and increased dependency on foreign capital. The country entered crisis conditions at a time when the exchange value of its national currency was rising; speculative financial movements and shrinking foreign currency reserves followed, driven by expectations of devaluation and by foreign investors' reluctance to finance the national debt, and a financial risk emerged. During and immediately after the February 2001 crisis, devaluation occurred and Turkey's stock market lost value. The transition to a system of floating exchange rates in the midst of this crisis and the effects of the crisis on the real economy are discussed in this study. The policies implemented included financial reforms, such as the restructuring of the banking system, followed by the provision of foreign financial support. There have been winners and losers in the imbalance of income distribution, which has recently become more evident in Turkey's fragile economy.

Keywords: Economics, marketing crisis, financial reforms, political economy

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1466
102 Dynamic Simulation of IC Engine Bearings for Fault Detection and Wear Prediction

Authors: M. D. Haneef, R. B. Randall, Z. Peng

Abstract:

Journal bearings used in IC engines are prone to premature failure and are likely to fail earlier than their rated life due to highly impulsive and unstable operating conditions and frequent starts/stops. Vibration signature extraction and wear debris analysis techniques are prevalent in industry for condition monitoring of rotary machinery. However, both techniques involve a great deal of technical expertise, time, and cost. Limited literature is available on the application of these techniques for fault detection in reciprocating machinery, due to the complex nature of the impact forces that confounds the extraction of fault signals for vibration-based analysis and wear prediction. In the present study, a simulation model was developed to investigate bearing wear behaviour resulting from different operating conditions, to complement the vibration analysis. In the current simulation, the dynamics of the engine was established first, based on which the hydrodynamic journal bearing forces were evaluated by numerical solution of the Reynolds equation. The essential outputs of interest in this study, critical for determining wear rates, are the tangential velocity and the oil film thickness between the journal and the bearing sleeve, which, if not maintained appropriately, have a detrimental effect on bearing performance. Archard's wear prediction model was used in the simulation to calculate the wear rate of the bearings with specific location information, as all determinative parameters were obtained with reference to crank rotation. The oil film thickness obtained from the model was used as a criterion to determine whether the lubrication is sufficient to prevent contact between the journal and the bearing and thus accelerated wear. A limiting value of 1 μm was used as the minimum oil film thickness needed to prevent contact. The increase in wear rate with growing severity of operating conditions is analogous and comparable to the rise in amplitude of the squared envelope of the referenced vibration signals. Thus, on the one hand, the developed model demonstrates its capability to explain wear behaviour, and on the other hand it helps to establish a correlation between wear-based and vibration-based analysis. Therefore, the model provides a cost-effective and quick approach to predicting impending wear in IC engine bearings under various operating conditions.
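
A minimal sketch, assuming Archard's wear law in the form V = k·W·s/H, of how a per-increment wear depth and the 1 μm film-thickness criterion described above can be combined; the wear coefficient, hardness, contact area and load histories below are illustrative values, not those used in the paper.

```python
import numpy as np

def archard_wear_depth(load_N, sliding_dist_m, hardness_Pa, wear_coeff, contact_area_m2):
    """Archard wear law: worn volume V = k * W * s / H, returned as depth V / A."""
    volume = wear_coeff * load_N * sliding_dist_m / hardness_Pa
    return volume / contact_area_m2

def wear_over_cycle(film_thickness_m, bearing_load_N, tangential_vel_mps, dt_s,
                    hardness_Pa=2.0e9, wear_coeff=1.0e-7,
                    contact_area_m2=1.0e-4, h_min=1.0e-6):
    """Accumulate wear only at crank increments where the oil film is thinner
    than the 1 micron limit, i.e. where metal-to-metal contact is assumed."""
    depth = 0.0
    for h, W, v in zip(film_thickness_m, bearing_load_N, tangential_vel_mps):
        if h < h_min:                      # lubrication insufficient at this increment
            s = v * dt_s                   # sliding distance during the increment
            depth += archard_wear_depth(W, s, hardness_Pa, wear_coeff, contact_area_m2)
    return depth

# Illustrative inputs: 360 crank increments at 3000 rpm (dt = 1 / (360 * 50) s)
theta = np.linspace(0.0, 2.0 * np.pi, 360)
h = 2e-6 + 1.5e-6 * np.sin(theta)          # film thickness dips below 1 um for part of the cycle
W = 5e3 * (1.0 + 0.5 * np.cos(theta))      # bearing load per increment, N
v = np.full_like(theta, 7.0)               # journal tangential velocity, m/s
print(wear_over_cycle(h, W, v, dt_s=1.0 / (360 * 50)))
```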

Keywords: Condition monitoring, IC engine, journal bearings, vibration analysis, wear prediction.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2300
101 Investigation of VMAT Algorithms and Dosimetry

Authors: A. Taqaddas

Abstract:

Purpose: The planning and dosimetry of different VMAT algorithms (SmartArc, Ergo++, Autobeam) are compared with IMRT for head and neck cancer (HNC) patients. Modelling was performed to rule out the causes of discrepancies between planned and delivered dose. Methods: Five HNC patients previously treated with IMRT were re-planned with SmartArc (SA), Ergo++ and Autobeam. Plans were compared with each other and against IMRT and evaluated using DVHs for PTVs and OARs, delivery time, monitor units (MU) and dosimetric accuracy. Modelling of control point (CP) spacing, leaf-end separation and MLC/aperture shape was performed to rule out causes of discrepancies between planned and delivered doses. Additionally, estimated arc delivery times, overall plan generation times and the effect of CP spacing and number of arcs on plan generation times were recorded. Results: Single-arc SmartArc plans (SA4d) were generally better than IMRT and double-arc plans (SA2Arcs) in terms of homogeneity and target coverage. Double-arc plans seemed to have a positive role in achieving an improved conformity index (CI) and better sparing of some organs at risk (OARs) compared to step-and-shoot IMRT (ss-IMRT) and SA4d. Overall, Ergo++ plans achieved the best CI for both PTVs. Dosimetric validation results for all VMAT plans without modelling were found to be lower than for ss-IMRT. Total MUs required for delivery were on average 19%, 30%, 10.6% and 6.5% lower than ss-IMRT for the SA4d, SA2d (single arc with 2° gantry spacing), SA2Arcs and Autobeam plans, respectively. Autobeam was the most efficient in terms of actual treatment delivery times, whereas Ergo++ plans took the longest to deliver. Conclusion: Overall, SA single-arc plans on average achieved the best target coverage and homogeneity for both PTVs. SA2Arc plans showed an improved CI and sparing of some OARs. Very good dosimetric results were achieved with modelling. Ergo++ plans achieved the best CI. Autobeam resulted in the fastest treatment delivery times.
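
For reference, the sketch below shows one common way a conformity index can be computed from voxel masks (the Paddick formulation; the abstract does not state which CI definition was used, so this choice and the example masks are assumptions for illustration only).

```python
import numpy as np

def paddick_ci(target_mask, prescription_isodose_mask):
    """Paddick conformity index: CI = TV_PIV^2 / (TV * PIV), where TV is the
    target volume, PIV the prescription isodose volume, and TV_PIV their
    intersection, all counted in voxels. Values closer to 1 are better."""
    tv = target_mask.sum()
    piv = prescription_isodose_mask.sum()
    tv_piv = np.logical_and(target_mask, prescription_isodose_mask).sum()
    return tv_piv**2 / (tv * piv)

# Illustrative 3D masks: a cubic PTV and a slightly offset prescription isodose region
ptv = np.zeros((50, 50, 50), dtype=bool); ptv[20:30, 20:30, 20:30] = True
piv = np.zeros((50, 50, 50), dtype=bool); piv[21:32, 20:31, 20:31] = True
print(round(paddick_ci(ptv, piv), 3))
```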

Keywords: Dosimetry, Intensity Modulated Radiotherapy, Optimization Algorithms, Volumetric Modulated Arc Therapy.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 3318
100 Analyzing the Perception of Social Networking Sites as a Learning Tool among University Students: Case Study of a Business School in India

Authors: Bhaskar Basu

Abstract:

Universities and higher education institutes are finding it increasingly difficult to engage students fruitfully through traditional pedagogic tools. Web 2.0 technologies, comprising social networking sites (SNSs), offer a platform for students to collaborate and share information, thereby enhancing their learning experience. Despite the potential and reach of SNSs, their use has been limited in academic settings promoting higher education. The purpose of this paper is to assess the perception of social networking sites among business school students in India and analyze their role in enhancing the quality of student experiences in a business school, leading to the proposal of an agenda for future research. In this study, more than 300 students of a reputed business school were involved in a survey of their preferences for different social networking sites and their perceptions of and attitudes towards these sites. A questionnaire with three major sections was designed, validated and distributed among a sample of students, the research method being descriptive in nature. Key questions addressed to the students concerned time commitment, reasons for usage, the nature of interaction on these sites, and the propensity to share information leading to direct and indirect modes of learning. The survey was further supplemented with focus group discussions to analyze the findings. The paper notes the resistance to the adoption of new technology by a section of business school faculty, who are staunch supporters of classical “face-to-face” instruction. In conclusion, social networking sites like Facebook and LinkedIn provide new avenues for students to express themselves and to interact with one another. Universities could take advantage of the new ways in which students are communicating with one another. Although interactive educational options such as Moodle exist, social networking sites are rarely used for academic purposes. Using this medium opens new ways of academically oriented interaction where faculty could discover more about students' interests, and students, in turn, might express and develop hitherto unknown intellectual facets of their lives. This study also highlights the enormous potential of mobile phones as a tool for “blended learning” in business schools going forward.

Keywords: Business school, India, learning, social media, social networking, university.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1428
99 Stress-Strain Relation for Hybrid Fiber Reinforced Concrete at Elevated Temperature

Authors: Josef Novák, Alena Kohoutková

Abstract:

The performance of concrete structures in fire depends on several factors, which include, among others, the change in material properties due to the fire. Today, fiber reinforced concrete (FRC) belongs to the materials that have been widely used for various structures and elements. While FRC behavior at ambient temperature is well known, the effect of elevated temperature on its behavior still has to be investigated in depth. This paper deals with an experimental investigation and stress-strain relations for hybrid fiber reinforced concrete (HFRC), which contains siliceous aggregates and polypropylene and steel fibers. The main objective of the experimental investigation is to enhance the database of mechanical properties of concrete composites with the addition of fibers subjected to elevated temperature, as well as to validate existing stress-strain relations for HFRC. Within the investigation, a unique heat transport test, a compressive test and a splitting tensile test were performed on 150 mm cubes heated up to 200, 400, and 600 °C with the aim of determining the time period needed for uniform heat distribution in the test specimens and the mechanical properties of the investigated concrete composite, respectively. Both the findings obtained from the presented experimental tests and experimental data collected from scientific papers so far served to validate the computational accuracy of the investigated stress-strain relations for HFRC, which have been developed during the last few years. Owing to the presence of steel and polypropylene fibers, HFRC becomes a unique material whose structural performance differs from that of conventional plain concrete when exposed to elevated temperature. Polypropylene fibers in HFRC lower the risk of concrete spalling, as the fibers burn out quickly with increasing temperature due to their low ignition point, and as a consequence the pore pressure decreases. On the other hand, the resulting increase in concrete porosity might affect the mechanical properties of the material. Validating this hypothesis requires enhancing the existing database of results, which is very limited and does not contain enough data. As a result of the poor database, only a few stress-strain relations have been developed so far to describe the structural performance of HFRC at elevated temperature. Moreover, many of them are inconsistent and need to be refined. Most of them also do not take into account the effect of either the fiber type or the fiber content. Such an approach might be inaccurate, especially when a high amount of polypropylene fibers is used. Therefore, the existing relations should be validated in detail against further experimental results.
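
For context, one widely used baseline for plain concrete in compression at elevated temperature is the EN 1992-1-2 (Eurocode 2) stress-strain relation, against which relations for fiber reinforced composites are often compared; the sketch below implements its ascending branch. This is not the HFRC relation investigated in the paper, and the strength and strain values in the example are assumed, approximate figures for siliceous concrete.

```python
def ec2_stress_at_temperature(strain, f_c_theta, eps_c1_theta):
    """Ascending branch of the EN 1992-1-2 relation for concrete in compression
    at elevated temperature:
        sigma(eps) = 3 * eps * f_c_theta / (eps_c1_theta * (2 + (eps / eps_c1_theta)**3))
    f_c_theta    : compressive strength at the considered temperature
    eps_c1_theta : strain at peak stress at that temperature
    """
    ratio = strain / eps_c1_theta
    return 3.0 * strain * f_c_theta / (eps_c1_theta * (2.0 + ratio**3))

# Assumed illustrative values around 400 degrees C for a 40 MPa siliceous concrete
f_c_400 = 0.75 * 40.0        # MPa, reduced compressive strength
eps_c1_400 = 0.010           # strain at peak stress
for eps in (0.002, 0.005, 0.010):
    print(eps, round(ec2_stress_at_temperature(eps, f_c_400, eps_c1_400), 1))
```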

Keywords: Elevated temperature, fiber reinforced concrete, mechanical properties, stress-strain relation.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1123
98 Depth-Averaged Modelling of Erosion and Sediment Transport in Free-Surface Flows

Authors: Thomas Rowan, Mohammed Seaid

Abstract:

A fast finite volume solver for multi-layered shallow water flows with mass exchange and an erodible bed is developed. This enables the user to solve a number of complex sediment-based problems, including (but not limited to) dam-break over an erodible bed, recirculation currents and bed evolution, as well as levee and dyke failure. This research develops methodologies crucial to the understanding of multi-sediment fluvial mechanics and waterway design. In this model, mass exchange between the layers is allowed and, in contrast to previous models, both sediment and fluid are able to transfer between layers. In the current study we use a two-step finite volume method to avoid the solution of the Riemann problem. Entrainment and deposition rates are calculated for the first time in a model of this nature. In the first step, the governing equations are rewritten in a non-conservative form and the intermediate solutions are calculated using the method of characteristics. In the second step, the numerical fluxes are reconstructed in conservative form and are used to calculate a solution that satisfies the conservation property. This method is found to be considerably faster than comparable finite volume methods, and it also exhibits good shock capturing. For most entrainment and deposition equations, a bed-level concentration factor is used; this leads to inaccuracies in both the near-bed concentration and the total scour. To account for diffusion, as no vertical velocities are calculated, a capacity-limited diffusion coefficient is used. An additional advantage of this multilayer approach is that the bottom-layer fluid velocity differs from that of single-layer models: this dramatically reduces erosion, which is often overestimated in simulations of this nature using single-layer flows. The model is used to simulate a standard dam break. In the dam-break simulation, as expected, the number of fluid layers used creates variation in the resultant bed profile, with more layers yielding greater variation in fluid velocity. These results show a marked deviation of the erosion profiles from those of standard models. Overall, the model provides new insight into the problems presented, at minimal computational cost.
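
A minimal, single-layer sketch (not the authors' two-step multilayer scheme) of how entrainment and deposition source terms are typically coupled to the suspended-sediment concentration and the bed evolution; the closure relations and every coefficient below are illustrative assumptions.

```python
import numpy as np

def erosion_step(h, c, z_b, u, dt, rho_w=1000.0, rho_s=2650.0, d50=1e-3,
                 porosity=0.4, e_coeff=1e-4, w_s=0.02, theta_cr=0.047, c_f=0.005):
    """One explicit update of the depth-averaged sediment concentration c and
    the bed elevation z_b using simple entrainment/deposition closures.

    h : water depth (m), u : depth-averaged velocity (m/s),
    c : volumetric sediment concentration (-), z_b : bed elevation (m)."""
    g = 9.81
    tau_b = rho_w * c_f * u * np.abs(u)                      # quadratic bed friction
    theta = np.abs(tau_b) / ((rho_s - rho_w) * g * d50)      # Shields parameter
    excess = np.maximum(theta - theta_cr, 0.0)               # excess above critical

    entrainment = e_coeff * excess     # sediment flux lifted from the bed
    deposition = w_s * c               # settling flux returned to the bed

    c_new = c + dt * (entrainment - deposition) / np.maximum(h, 1e-6)
    z_b_new = z_b + dt * (deposition - entrainment) / (1.0 - porosity)   # Exner-type balance
    return np.clip(c_new, 0.0, None), z_b_new

# Illustrative single-cell call
c1, z1 = erosion_step(h=2.0, c=0.001, z_b=0.0, u=3.0, dt=0.1)
print(c1, z1)
```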

Keywords: Erosion, finite volume method, sediment transport, shallow water equations.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 990
97 Work-Related Shoulder Lesions and Labor Lawsuits in Brazil: Cross-Sectional Study on Worker Health Actions Developed by Employers

Authors: Reinaldo Biscaro, Luciano R. Ferreira, Leonardo C. Biscaro, Raphael C. Biscaro, Isabela S. Vasconcelos, Laura C. R. Ferreira, Cristiano M. Galhardi, Erica P. Baciuk

Abstract:

Introduction: The objective of the present study was to profile workers with shoulder disorders involved in labor lawsuits in Brazil. The study analyzed the association between the workers' health and the actions performed by the companies in relation to the injured professionals. The research method was a retrospective, cross-sectional and quantitative database analysis. The documents of labor lawsuits involving shoulder injury registered at the Regional Labor Court of the 15th region (Campinas, São Paulo) were submitted to medical examination and evaluated for the period from 2012 to 2015. The data collected were age, gender, onset of symptoms, length of service, current occupation, type of shoulder injury, reported complaints, type of acromion, associated or related diseases, and company actions such as CAT (workplace accident communication) and compliance with NR7 by the organization (Environmental Risk Prevention Program - PPRA and Medical Coordination Program in Occupational Health - PCMSO). Results: Of the 93 workers evaluated, there was a predominance of men (58.1%), with a mean age of 42.6 years, and 54.8% were in the 35-49-year age group. Regarding length of service in the company, 66.7% had worked for more than 5 years. There was an association between gender and current occupational status (p < 0.005), with a predominance of women in household occupation (13 vs. 2) and a predominance of men among the unemployed searching for a job (24 vs. 10) and among those reintegrated to work by judicial decision (8 vs. 2). There was also a correlation between pain and functional limitation (p < 0.01). There was a positive association of PPRA with the complaint of functional limitation and a negative association with pain (p < 0.04). There was also a correlation between a sedentary lifestyle and the presence of PCMSO and PPRA (p < 0.04), and the absence of CAT in the companies (p < 0.001). It was concluded that the appearance or aggravation of osseous and articular shoulder pathologies in workers who have filed labor lawsuits seems to be associated with individual habits or inadequate labor practices. These data can help prevent the occurrence of such lesions through the implementation of local health promotion policies at work.
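
As an illustration of the kind of association test that produces p-values like those reported above (the abstract does not state which test was used, so the chi-square test and the partial contingency table below are assumptions), the gender-by-occupation counts quoted in the abstract can be tested as follows.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: men, women. Columns: household occupation, unemployed and searching
# for a job, reintegrated to work by judicial decision. Only the counts quoted
# in the abstract are included, so this illustrates the test rather than
# reproducing the study's full analysis.
table = np.array([[2, 24, 8],
                  [13, 10, 2]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```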

Keywords: Work-related accidents, cross-sectional study, shoulder lesions, labor lawsuits.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 901
96 Comparative Study of Equivalent Linear and Non-Linear Ground Response Analysis for Rapar District of Kutch, India

Authors: Kulin Dave, Kapil Mohan

Abstract:

Earthquakes are considered to be the most destructive rapid-onset disasters human beings are exposed to. The losses they cause are sufficient to warrant careful consideration in the design of structures and facilities. Seismic hazard analysis is one such tool which can be used for earthquake-resistant design, and ground response analysis is one of its most crucial and decisive steps. The Rapar district of Kutch, Gujarat falls in Zone 5 of the earthquake zoning map of India and thus has high seismicity, which is why it was selected for analysis. In total, 8 bore-logs were studied at different locations in and around Rapar district. Different soil engineering properties were analyzed, and relevant empirical correlations were used to calculate the maximum shear modulus (Gmax) and shear wave velocity (Vs) for the soil layers. The soil was modeled using the pressure-dependent modified Kondner-Zelasko (MKZ) model, and the reference curves used for fitting were Seed and Idriss (1970) for sand and Darendeli (2001) for clay. Both equivalent linear (EL) and non-linear (NL) ground response analyses have been carried out with the Masing hysteretic re/unloading formulation for comparison. The commercially available DEEPSOIL v. 7.0 software is used for this analysis. In this study, an attempt is made to quantify the ground response in terms of the acceleration time-history generated at the top of the soil column, the response spectrum at 5% damping and the Fourier amplitude spectrum. Moreover, the variation with depth of peak ground acceleration (PGA), maximum displacement, maximum strain (in %), maximum stress ratio and mobilized shear stress is also calculated. From the study, the PGA values estimated in rocky strata are nearly the same as the bedrock motion, and marginal amplification is observed in sandy silt and silty clays by both analyses. The NL analysis gives more conservative results for maximum displacement than the EL analysis. The maximum strains predicted by the two analyses are very close to each other. Overall, the NL analysis is more efficient and realistic because it follows the actual hyperbolic stress-strain relationship, considers stiffness degradation and accounts for the stresses mobilized due to pore water pressure generation.
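
Two of the ingredients described above can be sketched briefly: the small-strain shear modulus obtained from shear wave velocity, Gmax = ρ·Vs², and a hyperbolic MKZ-type backbone curve. The curve-fitting parameters and the soil layer in the example are illustrative assumptions, not the Seed and Idriss (1970) or Darendeli (2001) fits used in the study.

```python
import numpy as np

def g_max(density_kg_m3, vs_m_s):
    """Small-strain shear modulus from shear wave velocity: Gmax = rho * Vs**2 (Pa)."""
    return density_kg_m3 * vs_m_s**2

def mkz_backbone(gamma, g_max_pa, gamma_ref, beta=1.0, s=0.9):
    """Hyperbolic MKZ-type backbone curve:
       tau(gamma) = Gmax * gamma / (1 + beta * (|gamma| / gamma_ref)**s)
    where beta, s and gamma_ref are curve-fitting parameters (illustrative here)."""
    return g_max_pa * gamma / (1.0 + beta * (np.abs(gamma) / gamma_ref)**s)

# Illustrative soil layer: Vs = 250 m/s, density = 1900 kg/m3
G0 = g_max(1900.0, 250.0)                    # about 118.8 MPa
strains = np.array([1e-5, 1e-4, 1e-3, 1e-2])
tau = mkz_backbone(strains, G0, gamma_ref=5e-4)
print(np.round(tau / (G0 * strains), 3))     # secant G/Gmax degradation with strain
```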

Keywords: DEEPSOIL v 7.0, Ground Response Analysis, Pressure-Dependent Modified Kondner-Zelasko (MKZ) model, Response Spectra, Shear wave velocity.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 932