Search results for: shear rate protocol.
Production and Purification of Monosaccharides by Hydrolysis of Sugar Cane Bagasse in an Ionic Liquid Medium
Authors: T. R. Bandara, H. Jaelani, G. J. Griffin
Abstract:
The conversion of lignocellulosic waste materials, such as sugar cane bagasse, to biofuels such as ethanol has attracted significant interest as a potential element for transforming transport fuel supplies to totally renewable sources. However, the refractory nature of the cellulosic structure of lignocellulosic materials has impeded progress on developing an economic process, whereby the cellulose component may be effectively broken down to glucose monosaccharides and then purified to allow downstream fermentation. Ionic liquid (IL) treatment of lignocellulosic biomass has been shown to disrupt the crystalline structure of cellulose, thus potentially enabling the cellulose to be more readily hydrolysed to monosaccharides. Furthermore, conventional hydrolysis of lignocellulosic materials yields byproducts that are inhibitors for efficient fermentation of the monosaccharides. However, selective extraction of monosaccharides from an aqueous/IL phase into an organic phase utilizing a combination of boronic acids and quaternary amines has shown promise as a purification process. Hydrolysis of sugar cane bagasse immersed in an aqueous solution with IL (1-ethyl-3-methylimidazolium acetate) was conducted at different pH values and temperatures below 100 °C. It was found that the use of a high concentration of hydrochloric acid to acidify the solution inhibited the hydrolysis of bagasse. At high pH (i.e., basic conditions) obtained using sodium hydroxide, yields of total reducing sugars (TRS) were reduced due to the rapid degradation of the sugars formed. For purification trials, a supported liquid membrane (SLM) apparatus was constructed, whereby a synthetic solution containing xylose and glucose in an aqueous IL phase was transported across a membrane impregnated with phenyl boronic acid/Aliquat 336 to an aqueous phase. The transport rate of xylose was generally higher than that of glucose, indicating that an SLM scheme may not only be useful for purifying sugars from undesirable toxic compounds, but also for fractionating sugars to improve fermentation efficiency.
Keywords: Biomass, bagasse, hydrolysis, monosaccharide, supported liquid membrane, purification.
Performance Analysis of Three Absorption Heat Pump Cycles, Full and Partial Loads Operations
Authors: B. Dehghan, T. Toppi, M. Aprile, M. Motta
Abstract:
The environmental concerns related to global warming and ozone layer depletion along with the growing worldwide demand for heating and cooling have brought an increasing attention toward ecological and efficient Heating, Ventilation, and Air Conditioning (HVAC) systems. Furthermore, since space heating accounts for a considerable part of the European primary/final energy use, it has been identified as one of the sectors with the most challenging targets in energy use reduction. Heat pumps are commonly considered as a technology able to contribute to the achievement of the targets. The current research focuses on the full load operation and seasonal performance assessment of three gas-driven absorption heat pump cycles. To do this, investigations of gas-driven air-source ammonia-water absorption heat pump systems for small-scale space heating applications are presented. For each of the presented cycles, both full-load performance under various temperature conditions and seasonal performance are predicted by means of numerical simulations. It has been considered that small capacity appliances are usually equipped with fixed geometry restrictors, meaning that the solution mass flow rate is driven by the pressure difference across the associated restrictor valve. Results show that the gas utilization efficiency (GUE) of the cycles varies between 1.2 and 1.7 for both full and partial loads, and the vapor exchange (VX) cycle is found to achieve the highest efficiency. It is noticed that, for typical space heating applications, heat pumps operate over a wide range of capacities and thermal lifts. Thus, part of the novelty introduced in the paper is the investigation based on a seasonal performance approach, following the method prescribed in a recent European standard (EN 12309). The overall result is a modest variation in the seasonal performance for the analyzed cycles, from 1.427 (single-effect) to 1.493 (vapor-exchange).
Keywords: Absorption cycles, gas utilization efficiency, heat pump, seasonal performance, vapor exchange cycle.
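For readers unfamiliar with how a single seasonal figure is distilled from bin-wise part-load results such as those above, the following is a minimal, simplified sketch of a load-weighted seasonal GUE. The bins, hours, heat demands and GUE values are illustrative placeholders; the full EN 12309 procedure includes further bins and corrections not shown here.

```python
# Illustrative bin-method aggregation of a seasonal gas utilization efficiency (GUE).
# All numbers are hypothetical placeholders, not the paper's simulation results.
bins = [
    # (outdoor T [deg C], hours, heat demand [kW], GUE at this condition)
    (-7, 100, 9.0, 1.25),
    ( 2, 800, 6.5, 1.40),
    ( 7, 1200, 4.5, 1.55),
    (12, 900, 2.5, 1.65),
]

heat_delivered = sum(hours * q for _, hours, q, _ in bins)        # kWh of heat
gas_input = sum(hours * q / gue for _, hours, q, gue in bins)     # kWh of gas
seasonal_gue = heat_delivered / gas_input
print(f"Seasonal GUE ~ {seasonal_gue:.3f}")
```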
An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model
Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier
Abstract:
Interest in human motion recognition has increased extensively in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, content-based video compression and retrieval, etc. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. Such a modification avoids the misclassification that can happen when recognizing similar motions. Two experiments are conducted. In the first one, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, waving, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods that used the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset.
Keywords: Human Motion Recognition, Motion representation, Laban Movement Analysis, Discrete Hidden Markov Model.
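A minimal sketch of the forward/backward decision rule described in the abstract is given below, assuming integer-coded observation symbols and using hmmlearn's CategoricalHMM (available in recent hmmlearn versions) as a stand-in for the authors' DHMM training. The class labels, state counts and score-combination rule are illustrative assumptions, not the authors' implementation.

```python
# Sketch: per-class forward/backward discrete HMMs, combined at decision time.
import numpy as np
from hmmlearn import hmm

def train_pair(sequences, n_states=5):
    """Train one HMM on the sequences as-is and one on the reversed sequences."""
    def fit(seqs):
        X = np.concatenate(seqs).reshape(-1, 1)          # integer symbols, column vector
        lengths = [len(s) for s in seqs]
        m = hmm.CategoricalHMM(n_components=n_states, n_iter=50, random_state=0)
        return m.fit(X, lengths)
    return fit(sequences), fit([s[::-1] for s in sequences])

def classify(seq, models):
    """models: dict {label: (forward_hmm, backward_hmm)}; sum the two log-likelihoods."""
    seq = np.asarray(seq).reshape(-1, 1)
    scores = {label: fwd.score(seq) + bwd.score(seq[::-1])
              for label, (fwd, bwd) in models.items()}
    return max(scores, key=scores.get)
```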
Transcriptomics Analysis on Comparing Non-Small Cell Lung Cancer versus Normal Lung, and Early Stage Compared versus Late-Stages of Non-Small Cell Lung Cancer
Authors: Achitphol Chookaew, Paramee Thongsukhsai, Patamarerk Engsontia, Narongwit Nakwan, Pritsana Raugrut
Abstract:
Lung cancer is one of the most common malignancies and a primary cause of death due to cancer worldwide. Non-small cell lung cancer (NSCLC) is the main subtype, in which the majority of patients present with advanced-stage disease. Herein, we analyzed differentially expressed genes to find potential biomarkers for lung cancer diagnosis as well as prognostic markers. We used transcriptome data from our 2 NSCLC patients and public data (GSE81089) comprising 8 NSCLC and 10 normal lung tissues. Differentially expressed genes (DEGs) between NSCLC and normal tissue and between early-stage and late-stage NSCLC were analyzed with DESeq2. Pairwise correlation was used to find the DEGs, with false discovery rate (FDR) adjusted p-value ≤ 0.05 and |log2 fold change| ≥ 4 for NSCLC versus normal, and FDR adjusted p-value ≤ 0.05 with |log2 fold change| ≥ 2 for early versus late-stage NSCLC. Bioinformatic tools were used for functional and pathway analysis. Moreover, the expression and survival of the top ten genes in each comparison group were verified via GEPIA. We found 150 up-regulated and 45 down-regulated genes in NSCLC compared to normal tissues. Many immunoglobulin-related genes, e.g., IGHV4-4, IGHV5-10-1, IGHV4-31, IGHV4-61, and IGHV1-69D, were significantly up-regulated. Twenty-two genes were up-regulated, and five genes were down-regulated in late-stage compared to early-stage NSCLC. The top five DEGs were KRT6B, SPRR1A, KRT13, KRT6A and KRT5. Keratin 6B (KRT6B) was the most significantly increased gene in late-stage NSCLC. From the GEPIA analysis, we concluded that IGHV4-31 and IGKV1-9 might be used as diagnostic biomarkers, while KRT6B and KRT6A might be used as prognostic biomarkers. However, further clinical validation is needed.
Keywords: Bioinformatics, differentially expressed genes, non-small cell lung cancer, transcriptomics.
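The two filtering rules quoted above (FDR-adjusted p-value ≤ 0.05 with |log2 fold change| ≥ 4 or ≥ 2) can be expressed as a short pandas filter. The table below is a hypothetical DESeq2-style results frame; the column names are assumptions, not the study's files.

```python
import pandas as pd

# Hypothetical DESeq2-style results table with one row per gene.
res = pd.DataFrame({
    "gene":   ["IGHV4-31", "KRT6B", "GENE3"],
    "log2FC": [5.2, -1.1, 2.3],
    "padj":   [0.001, 0.2, 0.04],
})

def deg_filter(df, fc_cutoff, alpha=0.05):
    """Keep genes passing the FDR-adjusted p-value and |log2 fold change| cut-offs."""
    mask = (df["padj"] <= alpha) & (df["log2FC"].abs() >= fc_cutoff)
    return df[mask]

nsclc_vs_normal = deg_filter(res, fc_cutoff=4)   # |log2FC| >= 4, padj <= 0.05
early_vs_late   = deg_filter(res, fc_cutoff=2)   # |log2FC| >= 2, padj <= 0.05
print(nsclc_vs_normal["gene"].tolist(), early_vs_late["gene"].tolist())
```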
Production of Pre-Reduction of Iron Ore Nuggets with Lesser Sulphur Intake by Devolatisation of Boiler Grade Coal
Authors: Chanchal Biswas, Anrin Bhattacharyya, Gopes Chandra Das, Mahua Ghosh Chaudhuri, Rajib Dey
Abstract:
Boiler coals with low fixed carbon and high ash content have always challenged metallurgists to develop a suitable method for their utilization. In the present study, an attempt is made to establish an energy-effective method for the reduction of iron ore fines in the form of nuggets by using syngas. By devolatisation (expulsion of volatile matter by applying heat) of boiler coal, a gaseous product enriched with reducing agents such as CO, CO2, H2, and CH4 is generated. The iron ore nuggets are reduced by this syngas, so there is no direct contact between the iron ore nuggets and the coal ash, which helps to minimize the sulphur intake of the reduced nuggets. A laboratory-scale devolatisation furnace with a reduction facility was designed and evaluated after in-depth studies and exhaustive experimentation, including thermo-gravimetric (TG-DTA) analysis to find the volatile fraction present in boiler-grade coal, gas chromatography (GC) to determine the syngas composition at different temperatures, and furnace temperature gradient measurements to minimize the furnace cost by applying one heating coil. The nuggets are reduced in the devolatisation furnace at three different temperatures and three different times. The pre-reduced nuggets are subjected to analytical weight-loss calculations to evaluate the extent of reduction. The phase and surface morphology of the pre-reduced samples are characterized using X-ray diffractometry (XRD), energy dispersive X-ray spectrometry (EDX), scanning electron microscopy (SEM), a carbon-sulphur analyzer and chemical analysis. The degree of metallization of the reduced nuggets is 78.9% using boiler-grade coal. The pre-reduced nuggets with lower sulphur content could be used in the blast furnace as raw material or coolant, which would reduce the furnace's consumption of high-quality coke owing to their pre-reduced character. They can also be used in the Basic Oxygen Furnace (BOF) as coolant.
Keywords: Alternative ironmaking, coal devolatisation, extent of reduction, nugget making, syngas based DRI, solid state reduction.
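For orientation, the following sketch shows the conventional bookkeeping behind the "extent of reduction" and "degree of metallization" figures reported above; the formulas are the standard ones, and the numeric inputs are placeholders, not the paper's measurements.

```python
# Conventional definitions, with placeholder inputs (not the paper's data):
#   extent of reduction     = oxygen removed / oxygen initially combined with iron
#   degree of metallization = metallic Fe / total Fe in the reduced nugget
oxygen_removable_g = 12.0   # g of O bound to Fe in the nugget before reduction
weight_loss_g      = 9.5    # g lost during reduction (attributed to O removal)

fe_total_g    = 55.0        # total Fe in the reduced nugget (chemical analysis)
fe_metallic_g = 43.4        # metallic Fe in the reduced nugget

extent_of_reduction     = 100.0 * weight_loss_g / oxygen_removable_g
degree_of_metallization = 100.0 * fe_metallic_g / fe_total_g
print(f"Extent of reduction: {extent_of_reduction:.1f}%")
print(f"Degree of metallization: {degree_of_metallization:.1f}%")  # ~78.9% in the paper
```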
Energy Loss Reduction in Oil Refineries through Flare Gas Recovery Approaches
Authors: Majid Amidpour, Parisa Karimi, Marzieh Joda
Abstract:
For the last few years, release of burned undesirable by-products has become a challenging issue in oil industries. Flaring, as one of the main sources of air contamination, involves detrimental and long-lasting effects on human health and is considered a substantial reason for energy losses worldwide. This research involves studying the implications of two main flare gas recovery methods at three oil refineries, all in Iran as the case I, case II, and case III in which the production capacities are increasing respectively. In the proposed methods, flare gases are converted into more valuable products, before combustion by the flare networks. The first approach involves collecting, compressing and converting the flare gas to smokeless fuel which can be used in the fuel gas system of the refineries. The other scenario includes utilizing the flare gas as a feed into liquefied petroleum gas (LPG) production unit already established in the refineries. The processes of these scenarios are simulated, and the capital investment is calculated for each procedure. The cumulative profits of the scenarios are evaluated using Net Present Value method. Furthermore, the sensitivity analysis based on total propane and butane mole fraction is carried out to make a rational comparison for LPG production approach, and the results are illustrated for different mole fractions of propane and butane. As the mole fraction of propane and butane contained in LPG differs in summer and winter seasons, the results corresponding to LPG scenario are demonstrated for each season. The results of the simulations show that cumulative profit in fuel gas production scenario and LPG production rate increase with the capacity of the refineries. Moreover, the investment return time in LPG production method experiences a decline, followed by a rising trend with an increase in C3 and C4 content. The minimum value of time return occurs at propane and butane sum concentration values of 0.7, 0.6, and 0.7 in case I, II, and III, respectively. Based on comparison of the time of investment return and cumulative profit, fuel gas production is the superior scenario for three case studies.
Keywords: Flare gas reduction, liquefied petroleum gas, fuel gas, net present value method, sensitivity analysis.
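A minimal sketch of the Net Present Value comparison used above follows; the discount rate, capital cost and yearly cash flows are illustrative placeholders rather than the refineries' figures.

```python
# Net Present Value of a flare-gas-recovery scenario: capital outlay at year 0,
# followed by yearly net cash flows. All figures are illustrative placeholders.
def npv(rate, cash_flows):
    """cash_flows[0] is the (negative) capital investment at t = 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

capex = -8.0e6                 # capital investment, e.g. compression train
annual_profit = 2.5e6          # yearly net revenue from recovered fuel gas
flows = [capex] + [annual_profit] * 10
print(f"NPV at 12% discount rate: {npv(0.12, flows):,.0f}")
```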
Analysis of Residents’ Travel Characteristics and Policy Improving Strategies
Authors: Zhenzhen Xu, Chunfu Shao, Shengyou Wang, Chunjiao Dong
Abstract:
To improve the satisfaction of residents' travel, this paper analyzes the characteristics and influencing factors of urban residents' travel behavior. First, a Multinomial Logit Model (MNL) is built to analyze the characteristics of residents' travel behavior, reveal the influence of individual attributes, family attributes and travel characteristics on the choice of travel mode, and identify the significant factors; suggestions for policy improvement are then put forward. Finally, Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) models are introduced to evaluate the policy effect. This paper selects Futian Street in Futian District, Shenzhen City for investigation and research. The results show that gender, age, education, income, number of cars owned, travel purpose, departure time, journey time, travel distance and times all have a significant influence on residents' choice of travel mode. Based on the above results, two policy improvement suggestions are put forward, both aimed at reducing public transportation and non-motorized travel times, and the policy effect is evaluated. Before the evaluation, the prediction performance of the MNL, SVM and MLP models was assessed. After parameter optimization, the prediction accuracies of the three models were 72.80%, 71.42%, and 76.42%, respectively. The MLP model, with the highest prediction accuracy, was selected to evaluate the effect of policy improvement. The results showed that after the implementation of the policy, the proportion of public transportation in plan 1 and plan 2 increased by 14.04% and 9.86%, respectively, while the proportion of private cars decreased by 3.47% and 2.54%, respectively. The proportion of car trips decreased noticeably, while the proportion of public transport trips increased. It can be considered that the measures have a positive effect on promoting green trips and improving the satisfaction of urban residents, and can provide a reference for relevant departments when formulating transportation policies.
Keywords: Travel characteristics analysis, transportation choice, travel sharing rate, neural network model, traffic resource allocation.
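A compact scikit-learn sketch of the three-model accuracy comparison described above is shown below; the synthetic features and mode labels are placeholders, and the multinomial logistic regression merely stands in for the MNL specification used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder survey features (age, income, cars owned, distance, ...) and mode labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = rng.integers(0, 4, size=500)   # 0=walk/bike, 1=bus, 2=metro, 3=car (assumed coding)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "MNL": LogisticRegression(multi_class="multinomial", max_iter=1000),
    "SVM": SVC(kernel="rbf"),
    "MLP": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, f"{accuracy_score(y_te, model.predict(X_te)):.2%}")
```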
Catalytic Gasification of Olive Mill Wastewater as a Biomass Source under Supercritical Conditions
Authors: Ekin Kıpçak, Mesut Akgün
Abstract:
Recently, a growing interest has emerged on the development of new and efficient energy sources, due to the inevitable extinction of the nonrenewable energy reserves. One of these alternative sources which have a great potential and sustainability to meet up the energy demand is biomass energy. This significant energy source can be utilized with various energy conversion technologies, one of which is biomass gasification in supercritical water.
Water, being the most important solvent in nature, has very important characteristics as a reaction solvent under supercritical circumstances. At temperatures above its critical point (374.8 °C and 22.1 MPa), water becomes more acidic and its diffusivity increases. Working with water at high temperatures increases the thermal reaction rate, which in consequence leads to better dissolution of the organic matter and a fast reaction with oxygen. Hence, supercritical water offers a control mechanism depending on solubility, excellent transport properties based on its high diffusion ability and new reaction possibilities for hydrolysis or oxidation.
In this study the gasification of a real biomass, namely olive mill wastewater (OMW), in supercritical water conditions is investigated with the use of Ru/Al2O3 catalyst. OMW is a by-product obtained during olive oil production, which has a complex nature characterized by a high content of organic compounds and polyphenols. These properties impose OMW a significant pollution potential, but at the same time, the high content of organics makes OMW a desirable biomass candidate for energy production.
The catalytic gasification experiments were made with five different reaction temperatures (400, 450, 500, 550 and 600°C) and five reaction times (30, 60, 90, 120 and 150s), under a constant pressure of 25MPa. Through these experiments, the effects of reaction temperature and time on the gasification yield, gaseous product composition and OMW treatment efficiency were investigated.
Keywords: Catalyst, Gasification, Olive mill wastewater, Ru/Al2O3, Supercritical water.
Stakeholder Analysis: Who are the Key Actors in Establishing and Developing Thai Independent Consumer Organizations?
Authors: P. Ondee, S. Pannarunothai
Abstract:
In Thailand, both the 1997 and the current 2007 Thai Constitutions have mentioned the establishment of independent organizations as a new mechanism to play a key role in proposing policy recommendations to national decision-makers in the interest of collective consumers. Over the last ten years, no independent organizations have yet been set up. Evidently, nobody could point out who should be key players in establishing provincial independent consumer bodies. The purpose of this study was to find definitive stakeholders in establishing and developing independent consumer bodies in a Thai context. This was a cross-sectional study between August and September 2007, using a postal questionnaire with telephone follow-up. The questionnaire was designed and used to obtain multiple stakeholder assessment of three key attributes (power, interest and influence). Study population was 153 stakeholders associated with policy decision-making, formulation and implementation processes of civil-based consumer protection in pilot provinces. The population covered key representatives from five sectors (academics, government officers, business traders, mass media and consumer networks) who participated in the deliberative forums at 10 provinces. A 49.7% response rate was achieved. Data were analyzed, comparing means of three stakeholder attributes and classification of stakeholder typology. The results showed that the provincial health officers were the definitive stakeholders as they had legal power, influence and interest in establishing and sustaining the independent consumer bodies. However, only a few key representatives of the provincial health officers expressed their own paradigm on the civil-based consumer protection. Most provincial health officers put their own standpoint of building civic participation at only a plan-implementation level. For effective policy implementation by the independent consumer bodies, the Thai government should provide budgetary support for the operation of the provincial health officers with their paradigm shift as well as their own clarified standpoint on corporate governance.
Keywords: Civic participation, civil society, consumer protection, independent organization, policy decision-making, stakeholder analysis.
Promoting Social Advocacy through Digital Storytelling: The Case of Ocean Acidification
Authors: Chun Chen Yea, Wen Huei Chou
Abstract:
Many chemical changes in the atmosphere and the ocean are invisible to the naked eye, but they have profound impacts. These changes not only confirm the phenomenon of global carbon pollution, but also forewarn that more changes are coming. The carbon dioxide emitted from the burning of fossil fuels dissolves into the ocean and reacts chemically with seawater to form carbonic acid, which increases the acidity of the originally alkaline seawater. This gradual acidification is occurring at an unprecedented rate and will affect the formation of the carapaces of some marine organisms, such as corals and crustaceans, which are almost entirely composed of calcium carbonate. The carapaces of these organisms will become more soluble. Acidified seawater not only threatens the survival of marine life, but also negatively impacts the global ecosystem via the food chain. Faced with the threat of ocean acidification, all humans are duty-bound to act. The industrial sector produces the highest level of carbon dioxide emissions in Taiwan, and the petrochemical industry is the major contributor. Ever since the construction of Formosa Plastics Group's No. 6 Naphtha Cracker Plant in Yunlin County, there have been many environmental concerns such as air pollution and carbon dioxide emission. The marine life along the coast of Yunlin is directly affected by ocean acidification arising from these carbon emissions. Societal change demands our willingness to act, which is what social advocacy promotes. This study uses digital storytelling for social advocacy, with ocean acidification as the subject of a visual narrative, to demonstrate the promotion of social advocacy through visualization. Storytelling can transform dull knowledge into an engaging narrative of the crisis faced by marine life. Digital dissemination is an effective social-work practice. The visualization promoting awareness of ocean acidification was disseminated via social media platforms, such as Facebook and Instagram. Social media enables users to compose their own messages and share information across different platforms, which helps disseminate the core message of social advocacy.
Keywords: Digital storytelling, visualization, ocean acidification, social advocacy.
MIMO Radar-Based System for Structural Health Monitoring and Geophysical Applications
Authors: Davide D’Aria, Paolo Falcone, Luigi Maggi, Aldo Cero, Giovanni Amoroso
Abstract:
The paper presents a methodology for real-time structural health monitoring and geophysical applications. The key elements of the system are a high-performance MIMO RADAR sensor, an optical camera and a dedicated set of software algorithms encompassing interferometry, tomography and photogrammetry. The MIMO radar sensor proposed in this work provides an extremely high sensitivity to displacements, making the system able to react to tiny deformations (up to tens of microns) on a time scale which spans from milliseconds to hours. The MIMO feature makes the system capable of providing a set of two-dimensional images of the observed scene, each mapped on the azimuth-range directions with notable resolution in both dimensions and with an outstanding repetition rate. The back-scattered energy, which is distributed in the 3D space, is projected onto a 2D plane, where each pixel has as coordinates the line-of-sight distance and the cross-range azimuthal angle. At the same time, the high-performing processing unit allows the system to sense the observed scene with remarkable refresh periods (up to milliseconds), thus opening the way for combined static and dynamic structural health monitoring. Thanks to the smart TX/RX antenna array layout, the MIMO data can be processed through a tomographic approach to reconstruct the three-dimensional map of the observed scene. This 3D point cloud is then accurately mapped onto a 2D digital optical image through photogrammetric techniques, allowing for easy and straightforward interpretation of the measurements. Once the three-dimensional image is reconstructed, a 'repeat-pass' interferometric approach is exploited to provide the user of the system with high-frequency three-dimensional motion/vibration estimation of each point of the reconstructed image. At this stage, the methodology leverages consolidated atmospheric correction algorithms to provide reliable displacement and vibration measurements.
Keywords: Interferometry, MIMO RADAR, SAR, tomography.
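The displacement readout of the 'repeat-pass' interferometric step rests on the standard relation d = λ·Δφ/(4π) along the line of sight (up to a sign convention); a small sketch with an assumed Ku-band carrier follows, which is not the sensor's actual processing chain.

```python
import numpy as np

# Line-of-sight displacement from an interferometric phase difference:
#   d = (lambda / (4 * pi)) * delta_phi       (two-way path; sign convention dependent)
# Carrier frequency and phase values below are hypothetical.
c = 3.0e8                    # m/s
f0 = 17.2e9                  # Hz, assumed Ku-band carrier
wavelength = c / f0          # ~17.4 mm

delta_phi = np.array([0.05, 0.20, -0.10])      # rad, per-pixel phase change
displacement_um = (wavelength / (4 * np.pi)) * delta_phi * 1e6
print(displacement_um)       # micrometres along the line of sight
```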
On-Line Geometrical Identification of Reconfigurable Machine Tool using Virtual Machining
Authors: Alexandru Epureanu, Virgil Teodor
Abstract:
One of the main research directions in the CAD/CAM machining area is the reduction of machining time. Feedrate scheduling is one of the advanced techniques that allows keeping the uncut chip area constant and, consequently, keeping the main cutting force constant. There are two main ways to optimize the feedrate. The first consists in cutting force monitoring, which presumes the use of complex equipment for force measurement and, after this, setting the feedrate according to the cutting force variation. The second way is to optimize the feedrate by keeping the material removal rate constant with respect to the cutting conditions. In this paper a new approach is proposed, using an extended database that replaces the system model. The feedrate schedule is determined based on the identification of the reconfigurable machine tool, and the feed value is determined with respect to the uncut chip section area, the contact length between tool and blank, and the geometrical roughness. The first stage consists in monitoring the blank and the tool to determine their actual profiles. The next stage is the determination of the programmed tool path that allows obtaining the target profile of the piece. The graphic representation environment models the tool and blank regions and, after this, the tool model is positioned relative to the blank model according to the programmed tool path. For each of these positions, the geometrical roughness value, the uncut chip area and the contact length between tool and blank are calculated. Each of these parameters is compared with its admissible value and, according to the result, the feed value is established. This approach has the following advantages: in the case of complex cutting processes, prediction of the cutting force is possible; the real cutting profile, which deviates from the theoretical profile, is considered; limitation of the blank-tool contact length is possible; and the programmed tool path can be corrected so that the target profile is obtained. Applying this method, data sets are obtained which allow feedrate scheduling so that the uncut chip area is constant and, as a result, the cutting force is constant, which allows more efficient use of the machine tool and a reduction of machining time.
Keywords: Reconfigurable machine tool, system identification, uncut chip area, cutting conditions scheduling.
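A schematic sketch of the look-up-and-clamp feed selection described above is given below: at each programmed tool position the simulated uncut chip area, tool-blank contact length and geometric roughness are compared with admissible values and the feed is reduced until all constraints hold. The geometry functions and limits are placeholders for the virtual-machining model, not the authors' code.

```python
# Schematic feed scheduling: scale the feed so that the simulated uncut chip area,
# tool-blank contact length and geometric roughness stay at or below admissible values.
def schedule_feed(positions, f_nominal, a_adm, l_adm, rz_adm,
                  chip_area, contact_length, roughness):
    """positions: programmed tool positions; the last three args are callables
    (position, feed) -> simulated value, supplied by the geometric model."""
    feeds = []
    for pos in positions:
        f = f_nominal
        # Reduce the feed until every constraint is satisfied at this position.
        while (chip_area(pos, f) > a_adm or
               contact_length(pos, f) > l_adm or
               roughness(pos, f) > rz_adm) and f > 0.01:
            f *= 0.9
        feeds.append(f)
    return feeds
```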
Trade Policy Incentives and Economic Growth in Nigeria
Authors: Emmanuel Dele Balogun
Abstract:
This paper analyzes, using descriptive statistics and econometrics, data spanning the period 1981 to 2014 to gauge the effects of trade policy incentives on economic growth in Nigeria. It argues that the incentives provided penalized economic growth during the pre-trade-liberalization era, but stimulated a rapid increase in total factor productivity during the post-liberalization period of 2000 to 2014. The trend analysis shows that Nigeria maintained high tariff walls during the era of economic regulation, which were lowered in the post-liberalization era. The protections were in favor of infant industries, which were mainly appendages of multinationals, but against imports of competing food and finished consumer products. The trade openness index confirms the undue exposure of Nigeria’s economy to the vagaries of international market shocks, while banking sector recapitalization and the new listing of telecommunications companies deepened the financial markets in the post-liberalization era. The structure of economic incentives was biased in favor of construction, trade and services, but against the real sector, despite protectionist policies. Total Factor Productivity (TFP) estimates show that the Nigerian economy suffered stagnation in the pre-liberalization era, but experienced rapid growth rates in the post-liberalization era. The regression results relating trade policy incentives to the TFP growth rate yielded a significant but negative intercept, suggesting that a non-interventionist policy could be detrimental to economic progress, while a protective tariff which limits imports of competing products could spur productivity gains in domestic import substitutes beyond factor growth with market liberalization. The main constraint on the effectiveness of trade policy incentives is the failure of benefiting industries to leverage the domestic factor endowments of the nation. This paper concludes that there is a need to review the current economic transformation strategies urgently, with a view to providing policymakers with a better understanding of the most viable options that could make for rapid success.
Keywords: Trade Policies, macroeconomic incentives, total factor productivity and economic growth.
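TFP series such as the one referred to above are typically built by growth accounting (the Solow residual); a worked sketch follows, with an assumed capital share and illustrative growth rates that are not the paper's estimates.

```python
# Growth accounting: g_TFP = g_Y - alpha * g_K - (1 - alpha) * g_L
# alpha (capital share) and the growth rates below are illustrative only.
alpha = 0.35
g_output = 0.048     # real GDP growth
g_capital = 0.060    # capital stock growth
g_labour = 0.030     # labour force growth

g_tfp = g_output - alpha * g_capital - (1 - alpha) * g_labour
print(f"TFP growth ~ {g_tfp:.3%}")
```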
A Comparative Understanding of Critical Problems Faced by Pakistani and Indian Transportation Industry
Authors: Saleh Abduallah Saleh, Mohammad Basir Bin Saud, Mohd Azwardi Md Isa
Abstract:
It is very important for a developing nation to give prime priority to developing its infrastructure, because infrastructure, particularly roads and transportation, functions as the blood of the economic system. Almost 1.1 billion people share the travel and transportation industry in India. On the other hand, the Pakistani transportation industry is also extensive, serving about 170 million transportation users. The Indian and Pakistani bus industries, specifically, are well connected within and between the urban and rural areas. The transportation industry substantially supports the economic uplift of both countries. Due to high economic instability, unemployment and poverty rates, both governments are very serious about and committed to boosting their economies. They believe that any form of transportation development would play a vital role in the development of land and infrastructure, which could indirectly support many other industries' development, such as tourism, freighting and shipping businesses, to mention a few. However, it seems that their previous transportation planning has failed to meet the fast-growing demand. With the passage of time, both countries are looking for long-term and economical solutions, because demand keeps rising in response to other key economic drivers. A content analysis method and case study approach are used in this paper, and secondary data from the bureaus of statistics are used for the case analysis. The paper focuses on the mobility concerns of lower- and middle-income people in India and Pakistan. The paper aims to highlight the weaknesses, opportunities and limitations resulting from transportation being a low-priority industry for government, which is making the public of either country suffer. The paper concludes that the main issue is slow, inappropriate and unfavorable decisions which are not in favor of long-term national economic development and the public interest. The paper also recommends future research avenues for public and private transportation, which is continuously failing to meet public expectations.
Keywords: Bus transportation industries, transportation demand, government parallel initiatives, road and traffic congestions.
Exergetic Optimization on Solid Oxide Fuel Cell Systems
Authors: George N. Prodromidis, Frank A. Coutelieris
Abstract:
Biogas can currently be considered as an alternative option for electricity production, mainly due to its high energy content (hydrocarbon-rich source), its renewable status and its relatively low utilization cost. Solid Oxide Fuel Cell (SOFC) stacks convert the fuel's chemical energy to electricity with high efficiency and offer significant advantages in fuel flexibility combined with lower emission rates, especially when utilizing biogas. Electricity production from biogas constitutes a composite problem which requires an extensive parametric analysis of numerous dynamic variables. The main scope of the presented study is to propose a detailed thermodynamic model for the optimization of the operation of SOFC-based power plants, based on fundamental thermodynamics and energy and exergy balances. This model, named THERMAS (THERmodynamic MAthematical Simulation model), mathematically simulates each individual process during electricity production for different case studies that represent real-life operational conditions. THERMAS also offers the opportunity to choose from a great variety of values for each operational parameter individually, thus allowing studies within unexplored and experimentally impossible operational ranges. Finally, THERMAS innovatively incorporates a specific criterion, concluded from the extensive energy analysis, to identify the most optimal scenario per simulated system in exergy terms. Several dynamic parameters as well as several biogas mixture compositions have therefore been taken into account to cover all possible cases. Concerning the optimization in terms of the innovative OPtimization Factor (OPF) presented here, this research study reveals that systems supplied by low-methane fuels can be comparable to those supplied by pure methane. To conclude, such an innovative simulation model indicates a perspective on the optimal design of a SOFC-stack-based system, in the direction of the commercialization of systems utilizing biogas.
Keywords: Biogas, Exergy, Optimization, SOFC.
Interruption Overload in an Office Environment: Hungarian Survey Focusing on the Factors that Affect Job Satisfaction and Work Efficiency
Authors: Fruzsina Pataki-Bittó, Edit Németh
Abstract:
On the one hand, new technologies and communication tools improve employee productivity and accelerate information and knowledge transfer, while on the other hand, information overload and continuous interruptions make it even harder to concentrate at work. It is a great challenge for companies to find the right balance, while there is also an ongoing demand to recruit and retain the talented employees who are able to adopt the modern work style and effectively use modern communication tools. For this reason, this research does not focus on the objective measures of office interruptions, but aims to find those disruption factors which influence the comfort and job satisfaction of employees, and the way how they feel generally at work. The focus of this research is on how employees feel about the different types of interruptions, which are those they themselves identify as hindering factors, and those they feel as stress factors. By identifying and then reducing these destructive factors, job satisfaction can reach a higher level and employee turnover can be reduced. During the research, we collected information from depth interviews and questionnaires asking about work environment, communication channels used in the workplace, individual communication preferences, factors considered as disruptions, and individual steps taken to avoid interruptions. The questionnaire was completed by 141 office workers from several types of workplaces based in Hungary. Even though 66 respondents are working at Hungarian offices of multinational companies, the research is about the characteristics of the Hungarian labor force. The most important result of the research shows that while more than one third of the respondents consider office noise as a disturbing factor, personal inquiries are welcome and considered useful, even if in such cases the work environment will not be convenient to solve tasks requiring concentration. Analyzing the sizes of the offices, in an open-space environment, the rate of those who consider office noise as a disturbing factor is surprisingly lower than in smaller office rooms. Opinions are more diverse regarding information communication technologies. In addition to the interruption factors affecting the employees' job satisfaction, the research also focuses on the role of the offices in the 21st century.
Keywords: Information overload, interruption, job satisfaction, office environment, work efficiency.
Evaluation of Video Quality Metrics and Performance Comparison on Contents Taken from Most Commonly Used Devices
Authors: Pratik Dhabal Deo, Manoj P.
Abstract:
With the increasing number of social media users, the amount of video content available has also significantly increased. Currently, the number of smartphone users is at its peak, and many are increasingly using their smartphones as their main photography and recording devices. There have been a lot of developments in the field of video quality assessment in recent years, and more research on various other aspects of video and image is being done. Datasets that contain a huge number of videos from different high-end devices make it difficult to analyze the performance of the metrics on content from the most used devices, even if they contain content taken in poor lighting conditions using lower-end devices. These devices face a lot of distortions due to various factors, since the spectrum of content recorded on them is huge. In this paper, we present an analysis of objective Video Quality Assessment (VQA) metrics on content taken only from the most used devices, focusing on full-reference metrics. To carry out this research, we created a custom dataset containing a total of 90 videos taken with the three most commonly used devices: an Android smartphone, an iOS smartphone and a Digital Single-Lens Reflex (DSLR) camera. To the videos taken on each of these devices, the six most common types of distortions that users face have been applied, in addition to the already existing H.264 compression, based on four reference videos. Each of these six applied distortions has three levels of degradation. The five most popular VQA metrics have been evaluated on this dataset, and the highest and lowest values of each metric on the distortions have been recorded. We found that blur is the artifact on which most of the metrics did not perform well. Thus, in order to understand the results better, the amount of blur in the dataset was calculated, and an additional evaluation of the metrics was done using the High Efficiency Video Coding (HEVC) codec, the successor of H.264 compression, on the camera that proved to be the sharpest among the devices. The results show that as the resolution increases, the performance of the metrics tends to become more accurate, and the best performing metric among them is VQM, with very few inconsistencies and inaccurate results when the applied compression is H.264; but when HEVC compression is applied, the Structural Similarity (SSIM) metric and Video Multimethod Assessment Fusion (VMAF) perform significantly better.
Keywords: Distortion, metrics, recording, frame rate, video quality assessment.
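A minimal sketch of one of the full-reference comparisons in the study, frame-wise SSIM between a reference and a distorted clip, is shown below using OpenCV and scikit-image; the file names are placeholders, the clips are assumed to share resolution and length, and VMAF or VQM would be computed with their own tools.

```python
import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

def mean_ssim(ref_path, dist_path, max_frames=300):
    """Average frame-wise SSIM between two videos of identical resolution and length."""
    ref, dist = cv2.VideoCapture(ref_path), cv2.VideoCapture(dist_path)
    scores = []
    while len(scores) < max_frames:
        ok1, f1 = ref.read()
        ok2, f2 = dist.read()
        if not (ok1 and ok2):
            break
        g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
        g2 = cv2.cvtColor(f2, cv2.COLOR_BGR2GRAY)
        scores.append(ssim(g1, g2, data_range=255))
    ref.release(); dist.release()
    return float(np.mean(scores)) if scores else float("nan")

# print(mean_ssim("reference.mp4", "h264_blur_level3.mp4"))  # placeholder file names
```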
Development and Analysis of a Machine to Equally Apply Mineral Fertilizer to Soil on Slopes
Authors: Qurbanov Huseyn Nuraddin
Abstract:
A reliable food supply for the population of a country is one of the main directions of the state's economic policy. Grain growing, which is the basis of agriculture, is important in this area. In the cultivation of cereals on slopes, the application of equal amounts of mineral fertilizer beneath the soil before sowing is a very important technological process. The low level of technical equipment in this area prevents producers from providing the country with cereals of the necessary quality. Experience in the operation of modern technical means has shown that, at present, there is a need to apply an equal amount of fertilizer beneath the soil on slopes while fully meeting the agro-technical requirements. No fundamental changes have been made to the industrial machines that place fertilizer beneath the soil, and on slopes fertilizers continue to be applied unevenly. This leads to the destruction of new seedlings and reduced productivity, because plants sown in the fall cannot tolerate frost during the winter. In specific climatic conditions, there is an optimal fertilization rate for each agricultural product. The placement of fertilizers in the soil is one of the conditions that increase their efficiency in the field. Consequently, the development of a new technical proposal for fertilizing and ploughing slopes with equal application rates, with improved technological and design parameters that take into account the physical and mechanical properties of fertilizers, is very important. Taking into account the above-mentioned issues, a combined plough was developed in our laboratory. The combined plough carries out the pre-sowing technological operation in the cultivation of cereals, providing a smooth, equal application of mineral fertilizer beneath the soil on slopes. Mathematical models of a smooth spreader that evenly distributes fertilizer in the field have been developed. Diagrams and graphs for the eight partitions of the smooth spreader were constructed for the inclination angles of the slopes. The percentage and productivity of equal distribution in the field were determined by practical and theoretical analysis.
Keywords: Combined plough, mineral fertilizer, equal sowing, fertilizer norm, grain-crops, sowing fertilizer.
The Study of Cost Accounting in S Company Based On TDABC
Authors: Heng Ma
Abstract:
Third-party warehousing logistics plays an important role in the development of external logistics. At present, third-party logistics in our country is still a new industry and its accounting system has not yet been established; the current financial accounting of third-party warehousing logistics mainly follows the traditional way of thinking and is only able to provide the total cost of the entire enterprise during the accounting period, unable to reflect indirect operating cost information. In order to solve the problem of cost information distortion in the third-party logistics industry and improve the level of logistics cost management, this paper combines theoretical research and a case analysis method to reflect cost allocation by building a third-party logistics costing model using Time-Driven Activity-Based Costing (TDABC), and takes S company as an example to account for and control warehousing logistics cost. Based on the idea that "products consume activities and activities consume resources", TDABC takes time as the main cost driver and uses time equations to assign resources to cost objects. In S company, the cost objects are three warehouses engaged in warehousing and transportation (the second warehouse, a transport point) services. Each of these three warehouses includes five departments, Business Unit, Production Unit, Settlement Center, Security Department and Equipment Division, and the activities in these departments are classified into in-out of storage forecasting, in-out of storage or transit, and safekeeping work. By computing the capacity cost rate and building the time equations, the paper calculates the final operating cost so as to reveal the real cost. The numerical analysis results show that TDABC can accurately reflect the cost allocation to service customers and reveal the spare capacity cost of resource centers, verifying the feasibility and validity of TDABC in third-party logistics cost accounting. It inspires enterprises to focus on customer relationship management and to reduce idle cost so as to strengthen the cost management of third-party logistics enterprises.
Keywords: Third-party logistics enterprises, TDABC, cost management, S company.
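A tiny sketch of the two TDABC building blocks named above, the capacity cost rate and a time equation, is given below with made-up numbers for one warehouse department; the attributes in the time equation are assumptions, not S company's actual cost drivers.

```python
# Time-Driven ABC in two steps (illustrative numbers, not S company's data):
#   1) capacity cost rate = cost of capacity supplied / practical capacity (minutes)
#   2) time equation: minutes consumed by an order as a function of its attributes
dept_cost_per_month = 120_000.0                      # resources supplied to the department
practical_capacity_min = 8 * 60 * 22 * 10 * 0.8      # 10 staff, 22 days, 80% practical
cost_rate = dept_cost_per_month / practical_capacity_min   # cost per minute

def order_minutes(n_pallets, needs_transit, is_forecasted):
    """Assumed time equation: base handling + per-pallet time + optional steps."""
    return (8.0 + 2.5 * n_pallets
            + (15.0 if needs_transit else 0.0)
            + (0.0 if is_forecasted else 5.0))

t = order_minutes(n_pallets=12, needs_transit=True, is_forecasted=False)
print(f"Cost rate: {cost_rate:.3f} per minute; order cost: {cost_rate * t:.2f}")
```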
Sustainable Hydrogel Nanocomposites Based on Grafted Chitosan and Clay for Effective Adsorption of Cationic Dye
Authors: H. Ferfera-Harrar, T. Benhalima, D. Lerari
Abstract:
Contamination of water due to the discharge of untreated industrial wastewaters into the ecosystem has become a serious problem for many countries. In this study, bioadsorbents based on chitosan-g-poly(acrylamide) and montmorillonite (MMt) clay (CTS-g-PAAm/MMt) hydrogel nanocomposites were prepared via free-radical grafting copolymerization and crosslinking of acrylamide monomer (AAm) onto the natural polysaccharide chitosan (CTS) as backbone, in the presence of various contents of MMt clay as nanofiller. They were then hydrolyzed to obtain highly functionalized pH-sensitive nanomaterials with the highest swelling properties. Their structure was characterized by X-Ray Diffraction (XRD) and Scanning Electron Microscopy (SEM) analyses. The adsorption performance of the developed nanohybrids was examined for the removal of methylene blue (MB) cationic dye from aqueous solutions. The factors affecting the removal of MB, such as clay content, pH of the medium, adsorbent dose, initial dye concentration and temperature, were explored. The adsorption process was found to be highly pH dependent. The adsorption kinetic results showed that the prepared adsorbents have remarkable adsorption capacity and a fast adsorption rate; notably, more than 88% MB removal efficiency was reached after 50 min in a 200 mg L-1 dye solution. In addition, incorporating clay enhanced the adsorption capacity of the CTS-g-PAAm matrix from 1685 mg g-1 to a highest value of 1749 mg g-1 for the optimized nanocomposite containing 2 wt.% of MMt. The experimental kinetic data were well described by the pseudo-second-order model, while the equilibrium data were represented perfectly by the Langmuir isotherm model. The maximum Langmuir equilibrium adsorption capacity (qm) was found to increase from 2173 mg g−1 to 2221 mg g−1 upon adding 2 wt.% of clay nanofiller. Thermodynamic parameters revealed the spontaneous and endothermic nature of the process. In addition, the reusability study revealed that these bioadsorbents could be well regenerated, with a desorption efficiency above 87% and without any obvious decrease in removal efficiency compared to the starting materials, the removal efficiency still exceeding 64% even after four consecutive adsorption/desorption cycles. These results suggest that the optimized nanocomposites are promising low-cost bioadsorbents.
Keywords: Chitosan, clay, dye adsorption, hydrogels nanocomposites.
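Langmuir capacities such as those quoted above are typically obtained by fitting qe = qm·KL·Ce/(1 + KL·Ce) to equilibrium data; a short SciPy sketch follows, with placeholder data points rather than the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qm, kl):
    """Langmuir isotherm: qe = qm * KL * Ce / (1 + KL * Ce)."""
    return qm * kl * ce / (1.0 + kl * ce)

# Placeholder equilibrium data: Ce (mg/L) and qe (mg/g); not the paper's measurements.
ce = np.array([5, 20, 50, 100, 200, 400], dtype=float)
qe = np.array([400, 1100, 1700, 1950, 2100, 2180], dtype=float)

(qm, kl), _ = curve_fit(langmuir, ce, qe, p0=[2000.0, 0.05])
print(f"qm = {qm:.0f} mg/g, KL = {kl:.3f} L/mg")
```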
Stochastic Simulation of Reaction-Diffusion Systems
Authors: Paola Lecca, Lorenzo Dematte
Abstract:
Reaction-diffusion systems are mathematical models that describe how the concentration of one or more substances distributed in space changes under the influence of local chemical reactions, in which the substances are converted into each other, and diffusion, which causes the substances to spread out in space. The classical representation of a reaction-diffusion system is given by semi-linear parabolic partial differential equations, whose general form is ∂X(x, t)/∂t = DΔX(x, t), where X(x, t) is the state vector, D is the matrix of the diffusion coefficients and Δ is the Laplace operator. If the solutes move in a homogeneous system in thermal equilibrium, the diffusion coefficients are constants that do not depend on the local concentration of solvent and solutes or on the local temperature of the medium. In this paper a new stochastic reaction-diffusion model is presented in which the diffusion coefficients are functions of the local concentration, viscosity and frictional forces of solvent and solute. Such a model provides a more realistic description of molecular kinetics in non-homogeneous and highly structured media such as the intra- and inter-cellular spaces. The movement of a molecule A from a region i to a region j of the space is described as a first-order reaction Ai → Aj with rate constant k, where k depends on the diffusion coefficient. Representing the diffusional motion as a chemical reaction allows a reaction-diffusion system to be assimilated to a pure reaction system and simulated with Gillespie-inspired stochastic simulation algorithms. The stochastic time evolution of the system is given by the occurrence of diffusion events and chemical reaction events. At each time step an event (reaction or diffusion) is selected from a probability distribution of waiting times determined by the specific speeds of the reaction and diffusion events. Redi is the software tool developed to implement this model of reaction-diffusion kinetics and dynamics. It is free software that can be downloaded from http://www.cosbi.eu. To demonstrate the validity of the new reaction-diffusion model, the simulation results for chaperone-assisted protein folding in the cytoplasm obtained with Redi are reported. This case study is drawing renewed attention from the scientific community due to current interest in protein aggregation as a potential cause of neurodegenerative diseases.
Keywords: Reaction-diffusion systems, Fick's law, stochastic simulation algorithm.
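A compact Gillespie-style sketch of the idea described above, treating a molecule's hop from compartment i to compartment j as a first-order "reaction" with a diffusion-derived rate constant, is shown below; the two-compartment setup and the rate value are placeholders and do not reproduce Redi's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two compartments of molecule A; hops i -> j are first-order events with rate k_hop.
state = np.array([100, 0])      # copies of A in compartments 0 and 1
k_hop = 0.5                     # per-molecule hop rate, derived from D in the model

t, t_end = 0.0, 10.0
while t < t_end:
    # Propensities of the two diffusion "reactions": 0 -> 1 and 1 -> 0.
    a = np.array([k_hop * state[0], k_hop * state[1]])
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)            # waiting time to the next event
    event = rng.choice(2, p=a / a0)           # which hop fires
    if event == 0:
        state += [-1, 1]
    else:
        state += [1, -1]

print(state)   # approaches ~[50, 50] at equilibrium
```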
Gammarus:Asellus Ratio as an Index of Organic Pollution – (A Case Study in Markeaton, Kedleston Hall, and Allestree Park Lakes Derby) UK
Authors: U. Bawa
Abstract:
Macroinvertebrates have been used to monitor organic pollution in rivers and streams, and several biotic indices based on macroinvertebrates have been developed over the years, including the Biological Monitoring Working Party (BMWP) score. A new biotic index, the Gammarus:Asellus ratio, has recently been proposed as an index of organic pollution. This study tested the validity of the Gammarus:Asellus ratio as an index of organic pollution by examining the relationship between the ratio and physico-chemical parameters, and other biotic indices such as the BMWP and the Average Score Per Taxon (ASPT), from lakes and streams at Markeaton Park, Allestree Park and Kedleston Hall, Derbyshire. Macroinvertebrates were sampled using the standard five-minute kick sampling technique, and physical and chemical environmental variables were obtained using standard sampling techniques. Eighteen sites were sampled: six sites from Markeaton Park (three across the stream and three across the lake), and six sites each from the Allestree Park and Kedleston Hall lakes. Contrary to expectation, the Gammarus:Asellus ratio showed significant positive correlations with parameters indicative of organic pollution, such as the levels of nitrates, phosphates and calcium, and significant negative correlations with the other biotic indices (BMWP/ASPT). The BMWP score correlated positively and significantly with some water quality parameters such as dissolved oxygen and flow rate, but showed no correlation with the other chemical environmental variables. The BMWP score was significantly higher in the stream than in the lake in Markeaton Park, and the ASPT scores also appear to be significantly higher in the upper lakes than in the middle and lower lakes. This study has further strengthened the use of the BMWP/ASPT score as an index of organic pollution, but additional application is required to validate the use of the Gammarus:Asellus ratio as a rapid biomonitoring tool.
Keywords: Asellus, Biotic index, Gammarus, Organic pollution, Macro invertebrate.
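A minimal sketch of how the compared indices are computed from a kick sample follows: BMWP is the sum of family scores, ASPT is BMWP divided by the number of scoring families, and the Gammarus:Asellus ratio is the abundance ratio of the two taxa. The family scores shown are a small subset of the published BMWP table, and the counts are placeholders.

```python
# Hypothetical subset of BMWP family scores (the full published table has ~80 families).
bmwp_scores = {"Gammaridae": 6, "Asellidae": 3, "Baetidae": 4, "Chironomidae": 2}

# Example kick-sample counts per family (placeholder data).
sample = {"Gammaridae": 42, "Asellidae": 14, "Baetidae": 7, "Chironomidae": 120}

scoring = [fam for fam in sample if fam in bmwp_scores]
bmwp = sum(bmwp_scores[fam] for fam in scoring)
aspt = bmwp / len(scoring) if scoring else 0.0
ga_ratio = sample.get("Gammaridae", 0) / max(sample.get("Asellidae", 0), 1)

print(f"BMWP = {bmwp}, ASPT = {aspt:.2f}, Gammarus:Asellus = {ga_ratio:.2f}")
```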
Angiographic Evaluation of ETT (Treadmill) Positive Patients in a Tertiary Care Hospital of Bangladesh
Authors: Syed Dawood Md. Taimur, Saidur Rahman Khan, Farzana Islam
Abstract:
The aim was to evaluate the factors that predict coronary artery disease in patients with positive Exercise Tolerance Test (ETT, treadmill) results, and to relate the treadmill results to the coronary angiographic findings. This descriptive study was conducted at the Department of Cardiology, Ibrahim Cardiac Hospital & Research Institute, Dhaka, Bangladesh from 1st January, 2014 to 31st August, 2014. All patients who had undergone ETT (treadmill) for the diagnosis of chest pain were studied. One hundred and four patients underwent coronary angiography after a positive treadmill result. Patients were divided into two groups depending upon the angiographic findings, i.e. true positive and false positive: patients with a positive treadmill test who have coronary artery involvement are called true positives, and those with no involvement are called the false positive group. The two groups were compared with each other. Out of 104 patients, 81 (77.9%) patients had a true positive ETT and 23 (22.1%) patients had a false positive ETT. The mean age of patients with a positive ETT was 53.46±8.06 years; the male mean age was 53.63±8.36 years and the female 52.87±7.0 years. Sixty-nine (85.19%) male patients and twelve (14.81%) female patients had a true positive ETT, whereas 15 (65.21%) males and 8 (34.79%) females had a false positive ETT; the difference between the sexes in the comparison of true and false positive ETT was statistically significant (p<0.032). Risk factors such as diabetes mellitus, hypertension, dyslipidemia, family history and smoking were examined in these patients. Hypertension was significantly associated with a true positive result (p<0.004), as were diabetes and dyslipidemia (p<0.032 and p<0.030). Among true positive patients, 68 (83.95%) had a family history and 52 (64.20%) were smokers; family history (p<0.017) and smoking (p<0.012) differed significantly between the two groups. Forty-six true positive patients achieved the target heart rate (THR), which was not statistically significant (p<0.138), and 79 true positive patients had an abnormal resting ECG, which was significant (p<0.036). Among the vessels involved, the most common was the LAD, 55 (67.90%), followed by the LCX, 42 (51.85%), the RCA, 36 (44.44%), and the LMCA, 9 (11.11%). Forty patients (49.38%) had single-vessel disease (SVD), 26 (30.10%) had double-vessel disease (DVD), 15 (18.52%) had triple-vessel disease (TVD), and 23 had normal coronary arteries. It can be concluded that female patients who have a positive ETT with a normal resting ECG and who achieved the target heart rate are likely to have a false positive test result. Conversely, male patients with an abnormal resting ECG who did not achieve the THR on a symptom-limited ETT, and who have hypertension, diabetes, dyslipidemia, a family history or smoking, are likely to have a true positive treadmill test result.
Keywords: Exercise tolerance test, Coronary artery disease, Coronary angiography, True positive, False positive.
80 The Potential of ‘Comprehensive Assessment System for Built Environment Efficiency for Cities’ in Developing Country: Evidence of Myanmar
Authors: Theingi Shwe, Riken Homma, Kazuhisa Iki, Juko Ito
Abstract:
The growing cities of developing countries are characterized by rapid growth and poor infrastructure management, which invite and accelerate related environmental problems. Although sustainability movements have developed around the world, efforts to embed sustainable practices are still increasing in developing countries. Aligned with sustainable development actions, many assessment tools have been developed to rate and evaluate sustainability performance from the building to the community level. Among them, CASBEE, developed by Japanese organizations, is recognized as one of the internationally well-known assessment tools. The main purpose of this study is to find out how well the CASBEE tool reflects city-level sustainability performance in developing countries. The research framework was designed with three major phases: a quantitative approach, a qualitative approach, and evaluation reflection. The first two approaches investigated the tool’s contents and indicators in terms of the three dimensions of sustainability and its sustainability categories. To assess how the tool reflects conditions in a developing country, Pathein City in Myanmar was selected and evaluated with the 2012 version of CASBEE for Cities. The evaluation followed the assigned indicators, and the outcome rates Pathein City’s built environment efficiency as very good under current conditions. The results indicate that the tool’s indicators cover the three dimensions of sustainability in a balanced way, but that it does not yet adequately account for some indicators, such as location, infrastructure and institutions, which relate to the society dimension. Critical development issues in cities of developing countries, such as affordable housing and heritage preservation, are already present in Pathein City, yet the tool does not account for them. Moreover, for some indicators the benchmark and the weighting coefficient are strongly tied to the region in which the system was developed. This study suggests that CASBEE for Cities could support sustainable city-level development in developing countries, especially Myanmar, provided these additional indicators are included.
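CASBEE-family tools aggregate indicator scores through weighting coefficients and express the outcome as a quality-to-load efficiency ratio. The sketch below is a simplified, hypothetical illustration of that aggregation logic; the indicator names, scores and weights are invented and do not reproduce the actual CASBEE for Cities benchmarks or weighting coefficients.

```python
# Simplified sketch of CASBEE-style score aggregation (all scores and weights are
# hypothetical). Indicators are assumed to be scored on a 1-5 scale and weighted
# within their group; efficiency is expressed as the ratio of quality to load.

quality_indicators = {              # Q side: environmental quality (illustrative)
    "nature conservation":  (4, 0.4),   # (score, weight)
    "local environment":    (3, 0.3),
    "social services":      (2, 0.3),
}
load_indicators = {                 # L side: environmental load (illustrative)
    "CO2 emissions":        (3, 0.5),
    "resource consumption": (4, 0.3),
    "waste generation":     (3, 0.2),
}

def weighted_score(indicators):
    """Weighted average of indicator scores; weights are assumed to sum to 1."""
    return sum(score * weight for score, weight in indicators.values())

Q = weighted_score(quality_indicators)
L = weighted_score(load_indicators)
# Simplified stand-in for the Built Environment Efficiency ratio (BEE = Q / L);
# the real tool rescales Q and L onto benchmark axes before taking the ratio.
print(f"Q = {Q:.2f}, L = {L:.2f}, BEE (simplified) = {Q / L:.2f}")
```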
Keywords: Assessment tool, CASBEE, developing country, Myanmar, Pathein city, sustainable development.
79 Parametric Approach for Reserve Liability Estimate in Mortgage Insurance
Authors: Rajinder Singh, Ram Valluru
Abstract:
The Chain Ladder (CL), Expected Loss Ratio (ELR) and Bornhuetter-Ferguson (BF) methods, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research on their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium- to longer-term liabilities. The relative strengths and weaknesses of the alternative approaches revolve around the stability of the recent loss development pattern, the sufficiency and reliability of loss development data, and agreement or disagreement between reported losses to date and the ultimate loss estimate. The CL method produces volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down when ultimate loss ratios are not stable and predictable. While the BF method provides a good trade-off between the loss development approach (CL) and the ELR method, it generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years with more development experience; further, BF relies on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, such as the economy, which impact various cohorts at the same chronological time but at staggered points along their lifetime development. This paper proposes an alternative approach that parametrizes the loss development curve and uses logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period). The methodology was tested on an actual MI claim development dataset in which various cohorts followed a sigmoidal trend, but levels varied substantially depending on the economic and operational conditions during a development period spanning many years. The proposed approach makes it possible to incorporate such exogenous factors indirectly and produces more stable loss forecasts for reserving purposes than the traditional CL and BF methods.
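A minimal version of the curve-fitting step, parametrizing cumulative loss development for one cohort with a sigmoidal (logistic) curve and reading the ultimate loss off the fitted asymptote, might look as follows; the development data are hypothetical and the snippet illustrates only the parametric fit, not the full reserving model proposed in the paper.

```python
# Minimal sketch: fit a logistic development curve to cumulative reported losses
# for one accident-year cohort and project the ultimate loss (hypothetical data).
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, ultimate, k, t0):
    """Cumulative losses assumed to follow ultimate / (1 + exp(-k * (t - t0)))."""
    return ultimate / (1.0 + np.exp(-k * (t - t0)))

dev_quarters = np.arange(1, 13)                                # development periods observed so far
cum_losses = np.array([  5, 14, 30, 55, 88, 120, 148, 168,     # hypothetical cumulative losses
                       182, 191, 197, 200], dtype=float)

# Starting values: ultimate a bit above the last observation, midpoint near the middle.
p0 = [cum_losses[-1] * 1.1, 0.5, dev_quarters.mean()]
(ultimate, k, t0), _ = curve_fit(logistic, dev_quarters, cum_losses, p0=p0)

reported_to_date = cum_losses[-1]
print(f"Fitted ultimate ≈ {ultimate:.1f}, indicated reserve ≈ {ultimate - reported_to_date:.1f}")
```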
Keywords: Actuarial loss reserving techniques, logistic regression, parametric function, volatility.
78 Critical Success Factors Influencing Construction Project Performance for Different Objectives: Procurement Phase
Authors: Samart Homthong, Wutthipong Moungnoi
Abstract:
Critical success factors (CSFs) and the criteria used to measure project success have received much attention over the decades and are among the most widely researched topics in project management. However, despite extensive studies by different researchers, there is still little agreement on the CSFs. The aim of this study is to identify the CSFs that influence the performance of construction projects and to determine their relative importance for different objectives across five stages of the project life cycle. An extensive literature review resulted in the identification of 179 individual factors, which were grouped into nine major categories. A questionnaire survey was used to collect data from three groups of respondents: client representatives, consultants, and contractors. Of the 164 questionnaires distributed, 93 were returned, a response rate of 56.7%. Using the mean score, the relative importance index, and the weighted average method, the top ten critical factors in each category were identified. The agreement of survey respondents on the categorised factors was analysed using Spearman’s rank correlation, and a one-way analysis of variance was performed to determine whether the mean scores of the respondent groups differed significantly. The findings indicate that the most critical factors in each category in the procurement phase are proper procurement programming of materials (time), stability in the price of materials (cost), and determining quality in the construction (quality), followed by safety equipment acquisition and maintenance (health and safety), budgeting allowed in a contractual arrangement for implementing environmental management activities (environment), completeness of drawing documents (productivity), accurate measurement and pricing of bill of quantities (risk management), adequate communication among the project team (human resource), and adequate cost control measures (client satisfaction). An understanding of CSFs would help all interested parties in the construction industry to improve project performance, and the results would help construction professionals and practitioners take proactive measures for effective project management.
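The relative importance index used to rank the factors is a simple computation over Likert-scale responses, RII = ΣW / (A × N), where W are the respondents' ratings, A is the highest possible rating and N is the number of respondents. The sketch below applies it to hypothetical ratings for a few of the procurement-phase factors; the response values are invented for illustration.

```python
# Minimal sketch: Relative Importance Index (RII) ranking of CSFs
# from 5-point Likert responses (hypothetical ratings, not the survey data).

def rii(ratings, highest=5):
    """RII = sum(W) / (A * N): ratings W, top of scale A, number of respondents N."""
    return sum(ratings) / (highest * len(ratings))

responses = {   # hypothetical respondent ratings per factor
    "proper procurement programming of materials": [5, 4, 5, 5, 4, 5],
    "stability in the price of materials":         [4, 4, 5, 3, 4, 4],
    "completeness of drawing documents":           [3, 4, 4, 3, 5, 4],
}

ranking = sorted(((rii(r), factor) for factor, r in responses.items()), reverse=True)
for score, factor in ranking:
    print(f"{score:.3f}  {factor}")
```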
Keywords: Critical success factors, procurement phase, project life cycle, project performance.
77 Influence of Hydrocarbons on Plant Cell Ultrastructure and Main Metabolic Enzymes
Authors: T. Sadunishvili, E. Kvesitadze, M. Betsiashvili, N. Kuprava, G. Zaalishvili, G. Kvesitadze
Abstract:
The influence of octane and benzene on plant cell ultrastructure and on enzymes of basic metabolism, such as those of nitrogen assimilation and energy generation, has been studied. Different plants were exposed to hydrocarbons at concentrations of 1, 10 and 100 mM: perennial ryegrass (Lolium perenne) and alfalfa (Medicago sativa); crops, maize (Zea mays L.) and bean (Phaseolus vulgaris); shrubs, privet (Ligustrum sempervirens) and trifoliate orange (Poncirus trifoliata); and trees, poplar (Populus deltoides) and white mulberry (Morus alba L.). Destructive changes in the ultrastructure of bean and maize leaf cells under the influence of benzene vapour were revealed at the level of the photosynthetic and energy-generating subcellular organelles. Different deviations in the structure and distribution of subcellular organelles were observed in alfalfa and ryegrass root cells under the influence of benzene and octane absorbed through the roots. The level of destructive change is concentration dependent. Benzene at the low concentrations of 1 and 10 mM increased glutamate dehydrogenase (GDH) activity in maize roots and leaves and in poplar and mulberry shoots, though to a greater extent at the lower 1 mM concentration; the induction was more intensive in plant roots. The highest tested concentration of benzene, 100 mM, inhibited the enzyme in all plants. Octane induced GDH in all grassy plants at all tested concentrations; however, the rate of induction decreased as the hydrocarbon concentration increased. Octane at 1 mM induced GDH in privet, trifoliate orange and white mulberry shoots, whereas 100 mM octane inhibited GDH activity in all plants. Octane had an inductive effect on malate dehydrogenase in almost all plants and tested concentrations, indicating intensification of the tricarboxylic acid cycle. The data could be used to elaborate criteria for selecting plants for the phytoremediation of soils contaminated with oil hydrocarbons.
Keywords: Higher plants, hydrocarbons, cell ultrastructure, glutamate and malate dehydrogenases.
76 Influence of Overfeeding on Productive Performance Traits, Foie Gras Production, Blood Parameters, Internal Organs, Carcass Traits, and Mortality Rate in Two Breeds of Ducks
Authors: El-Sayed, Mona, Y., U. E. Mahrous
Abstract:
A total of 60 male mule ducks and 60 male Muscovy ducks were each allotted into three groups (n = 20) to estimate the effects of overfeeding (two and four meals) versus ad libitum feeding on productive performance traits, foie gras production, internal organs, and blood parameters.
The results show that force-feeding four meals significantly increased (P < 0.01) body weight, weight gain, and gain percentage compared to force-feeding two meals. Both force-feeding regimes (two or four meals) induced significantly higher body weight, weight gain, gain percentage, and absolute carcass weight than ad libitum feeding; however, carcass percentage was significantly higher in ad libitum feeding. Mule ducks had significantly higher weight gain and weight gain percentages than Muscovy ducks.
Feed consumption per kilogram of foie gras and per kilogram weight gain was lower for the four-meal than for the two-meal forced feeding regime. Force-feeding four meals induced significantly higher liver weight and percentage (488.96 ± 25.78g, 7.82 ± 0.40%) than force-feeding two meals (381.98 ± 13.60g, 6.42 ± 0.21%). Moreover, feed conversion was significantly higher under forced feeding than under ad libitum feeding (77.65 ± 3.41g, 1.72 ± 0.05%; P < 0.01).
Forced feeding (two or four meals) increased the weights of all organs (intestine, proventriculus, heart, spleen, and pancreas) relative to ad libitum feeding, except for the gizzard; however, intestinal and abdominal fat values were higher for four-meal than for two-meal forced feeding.
Overfeeding did not change blood parameters significantly compared to ad libitum feeding; however, four-meal forced feeding improved the quality of foie gras since it significantly increased the percentage of grade A foie gras (62.5%) at the expense of grades B (33.33%) and C (4.17%) compared with the two-meal forced feeding.
The mortality rate among Muscovy ducks during the forced feeding period was 22.5%, compared to 0% in mule ducks. Liver weight was highly significantly correlated with live weight after overfeeding and with certain blood plasma traits.
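The feed-efficiency figures above are ratios of total feed intake to output, and the sketch below shows how feed consumption per kilogram of weight gain and per kilogram of foie gras would be computed for one group; the intake, body weight and liver weight values used are hypothetical, not the experimental data.

```python
# Minimal sketch: feed-efficiency ratios for one feeding group (hypothetical figures).

feed_consumed_kg = 22.0      # total feed per duck over the overfeeding period (assumed)
initial_weight_kg = 4.1      # body weight at the start of overfeeding (assumed)
final_weight_kg = 6.3        # body weight at the end of overfeeding (assumed)
foie_gras_kg = 0.489         # liver weight after overfeeding (assumed)

weight_gain_kg = final_weight_kg - initial_weight_kg
feed_per_kg_gain = feed_consumed_kg / weight_gain_kg
feed_per_kg_foie_gras = feed_consumed_kg / foie_gras_kg

print(f"Gain: {weight_gain_kg:.2f} kg ({weight_gain_kg / initial_weight_kg:.1%} of initial weight)")
print(f"Feed per kg gain: {feed_per_kg_gain:.1f} kg; feed per kg foie gras: {feed_per_kg_foie_gras:.1f} kg")
```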
Keywords: Foie gras, overfeeding, ducks, productive performance.
75 Polymeric Sustained Biodegradable Patch Formulation for Wound Healing
Authors: Abhay Asthana, Gyati Shilakari Asthana
Abstract:
Patient compliance and stability, in combination with controlled drug delivery and biocompatibility, form the core features of the present research and development of a sustained biodegradable patch formulation intended for wound healing. The aim was to impart sustained degradation, a sterile formulation, significant folding endurance, elasticity, biodegradability, bio-acceptability, and strength. The optimized formulation comprised the polymers hydroxypropyl methylcellulose, ethylcellulose and gelatin, citric acid-PEG-citric acid (CPEGC) triblock dendrimers, and curcumin as the active. The polymeric mixture was dissolved in geometric order in a suitable medium under continuous stirring at ambient conditions. With continued stirring, curcumin was added with the aid of DCM and methanol in an optimized ratio to obtain a homogeneous dispersion. The dispersion was sonicated at an optimized frequency for a given time and then cast into patches. All steps were carried out under strict aseptic conditions. The formulations within the acceptable working range were selected on the basis of thickness, uniformity of drug content, smooth texture, flexibility, and brittleness. The optimized patch, kept on stability in butter paper within a sterile pack, displayed a folding endurance of 20 to 23 folds without any evidence of cracking at room temperature (RT, 24 ± 2 °C). The patch showed acceptable parameters after stability studies conducted under refrigeration (8 ± 0.2 °C) and at RT (24 ± 2 °C) for up to 90 days. Furthermore, no significant changes were observed in critical parameters such as elasticity, biodegradability, drug release, and drug content during the stability study conducted at RT (24 ± 2 °C) for 45 and 90 days. The drug content was in the range of 95 to 102%, the moisture content did not exceed 19.2%, and the patch passed the content uniformity test. Cumulative drug release was found to be 80% in 12 h and matched the biodegradation rate, with a correlation factor R² > 0.9. The biodegradable patch formulation developed thus shows promising results in terms of stability and release profiles.
Keywords: Sustained biodegradation, wound healing, polymeric patch, stability.
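A release profile of the kind reported above (about 80% cumulative release over 12 h) is commonly summarized by fitting a simple kinetic model. The sketch below fits the Korsmeyer-Peppas power law to hypothetical cumulative-release points and reports the coefficient of determination; it is an illustration of the analysis only, not the formulation study's data or model.

```python
# Minimal sketch: Korsmeyer-Peppas fit (Mt/Minf = k * t^n) to hypothetical
# cumulative-release data, with R^2 as the goodness of fit.
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    return k * np.power(t, n)

t_hours = np.array([1, 2, 4, 6, 8, 10, 12], dtype=float)
release_frac = np.array([0.18, 0.30, 0.45, 0.58, 0.68, 0.75, 0.80])  # hypothetical fractions released

(k, n), _ = curve_fit(korsmeyer_peppas, t_hours, release_frac, p0=[0.2, 0.5])

predicted = korsmeyer_peppas(t_hours, k, n)
ss_res = np.sum((release_frac - predicted) ** 2)
ss_tot = np.sum((release_frac - release_frac.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"k = {k:.3f}, n = {n:.2f}, R^2 = {r_squared:.3f}")
```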
74 Sediment Transport Monitoring in the Port of Veracruz Expansion Project
Authors: Francisco Liaño-Carrera, José Isaac Ramírez-Macías, David Salas-Monreal, Mayra Lorena Riveron-Enzastiga, Marcos Rangel-Avalos, Adriana Andrea Roldán-Ubando
Abstract:
The construction of most coastal infrastructure developments around the world is usually planned on the basis of wave height, current velocities and river discharges; however, little attention has been paid to surveying sediment transport during dredging, or to the modification of currents outside ports and marinas during and after construction. This study presents a complete survey carried out during the construction of one of the largest ports on the Gulf of Mexico. An anchored Acoustic Doppler Current Profiler (ADCP), a towed ADCP and a combination of model outputs were used at the Veracruz port construction site in order to describe hourly sediment transport and the modification of currents in and out of the new port. Owing to the stability of the system, the new port was constructed inside Vergara Bay, a low wave energy system with a tidal range of up to 0.40 m. The results show a two-gyre circulation pattern within the bay: the northern side of the bay has an anticyclonic gyre, while the southern part shows a cyclonic gyre. Sediment transport trajectories were computed every hour using the anchored ADCP, a numerical model, and the weekly data obtained with the towed ADCP over the entire bay. The trajectories were carefully tracked because the bay is surrounded by coral reef structures that are sensitive to sedimentation rate and water turbidity. The survey shows that during dredging and the placement of rock for the breakwater, sediments were added locally (< 2500 m²) and dispersed by local currents in less than 4 h, whereas the river discharging into the middle of the bay and the sewage treatment plant may add more than ten times this amount on a rainy day or during the tourist season. Finally, the coastline mapped seasonally with a drone suggests that the southern part of the bay has not been modified by the construction of the new port in the northern part, owing to the division of the bay into two subsystems.
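Hourly sediment transport trajectories of the kind described above can be approximated by advecting a parcel with depth-averaged ADCP velocities using a simple forward Euler step. The sketch below does this for a hypothetical hourly velocity series; it is a schematic of the trajectory calculation, not the numerical model used in the survey.

```python
# Minimal sketch: advect a sediment parcel with hourly depth-averaged currents
# (forward Euler; velocities are hypothetical, in m/s, east/north components).
import numpy as np

u = np.array([0.05, 0.08, 0.10, 0.07, 0.02, -0.03, -0.06, -0.04])   # eastward velocity (m/s)
v = np.array([0.02, 0.03, 0.01, -0.02, -0.05, -0.04, -0.01, 0.02])  # northward velocity (m/s)
dt = 3600.0                                                          # one hour in seconds

x, y = 0.0, 0.0                  # parcel position relative to the release point (m)
trajectory = [(x, y)]
for ui, vi in zip(u, v):
    x += ui * dt                 # forward Euler step: displacement = velocity * dt
    y += vi * dt
    trajectory.append((x, y))

for hour, (xi, yi) in enumerate(trajectory):
    print(f"t = {hour:2d} h  x = {xi:7.1f} m  y = {yi:7.1f} m")
```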
Keywords: Acoustic Doppler current profiler, time series, port construction, construction around coral reefs, sediment transport monitoring.