Search results for: Single Crystal
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1737

177 Modification of Electrical and Switching Characteristics of a Non Punch-Through Insulated Gate Bipolar Transistor by Gamma Irradiation

Authors: Hani Baek, Gwang Min Sun, Chansun Shin, Sung Ho Ahn

Abstract:

Fast neutron irradiation using nuclear reactors is an effective method to improve the switching loss and short-circuit durability of power semiconductors such as insulated gate bipolar transistors (IGBTs) and insulated gate transistors (IGTs). However, a nuclear reactor contains not only fast neutrons but also thermal neutrons, epithermal neutrons and gamma rays, and gamma irradiation may deteriorate the electrical properties of the IGBT. Gamma irradiation damage is known to arise from the Total Ionizing Dose (TID) effect, the Single Event Effect (SEE) and displacement damage. The TID effect in particular degrades electrical properties such as the leakage current and threshold voltage of a power semiconductor. This work confirms the effect of gamma irradiation on the electrical properties of a 600 V NPT-IGBT. Gamma irradiation forms lattice defects in the gate oxide and the Si-SiO2 interface of the IGBT. It was confirmed that these lattice defects act as trap centers and affect the threshold voltage, which shifts negatively with increasing TID. In addition to the change in carrier mobility, the conductivity modulation in the n-drift region decreases, adversely affecting the forward voltage drop. The turn-off delay time of the device before irradiation was 212 ns; after doses of 2.5, 10, 30, 70 and 100 kRad(Si) it was 225, 258, 311, 328, and 350 ns, respectively. Gamma irradiation therefore increased the turn-off delay time of the IGBT by approximately 65%, and the switching characteristics deteriorated.
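
As a rough check of the figures quoted above (not part of the original work), the relative increase in turn-off delay can be recomputed from the reported values:

    # Turn-off delay times reported in the abstract (ns), keyed by dose in kRad(Si)
    delays_ns = {0: 212, 2.5: 225, 10: 258, 30: 311, 70: 328, 100: 350}

    baseline = delays_ns[0]
    for dose, delay in delays_ns.items():
        increase = 100.0 * (delay - baseline) / baseline
        print(f"{dose:>6} kRad(Si): {delay} ns  (+{increase:.1f}%)")
    # The 100 kRad(Si) point gives roughly a 65% increase over the unirradiated device.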

Keywords: NPT-IGBT, gamma irradiation, switching, turn-off delay time, recombination, trap center.

176 A Look at the History of Calligraphy in Decoration of Mosques in Iran: 630-1630 AD

Authors: Cengiz Tavşan, Niloufar Akbarzadeh

Abstract:

Architecture in Iran has a continuous history from at least 5000 BC to the present, and numerous pre-Islamic Iranian elements have contributed significantly to the formation of Islamic art. At first, decoration was limited to small objects and containers; it then progressed into the arts of plaster and brickwork and was later applied in architecture as well. The art of gypsum and brickwork, which before Islam was prevalent in the form of animal and plant motifs, was combined after Islam with the art of calligraphy in decoration. The splendor and beauty of Iranian architecture, especially during the Islamic era, are closely tied to decoration and design. After the Arab invasion of Iran and the introduction of Islam, classical Iranian architecture changed significantly, and Arabic calligraphy appeared in the decoration of Iranian mosques. The principles of aesthetics in the art of calligraphy in Iran are based precisely on the principles of beauty in ancient Iranian and Islamic art. Calligraphy, in turn, became one of the most important sources of Islamic art and one of the defining features of Islamic culture. At first, calligraphy carried no cultural meaning and served only decoration and beautification, being meaningful only in inscriptions; over time, however, it became meaningful in its own right. This article provides a summary of the history of calligraphy in mosques (from the arrival of Islam until the Safavid period), in which the role of calligraphy in decorative ideas cannot be ignored, and of the important role that decorative elements play in creating a public space in terms of social and aesthetic performance. The study was conducted using library and field studies. Its purpose is to show the characteristics of architecture and decorative art in Iran, especially in mosque architecture, where they reach the pinnacle of progress. We will see that religious beliefs and artistic practices merge and strive toward a single concept.

Keywords: Islamic art, Islamic architecture, decorations in Iranian mosques, calligraphy.

175 Host Responses in Peri-Implant Tissue in Comparison to Periodontal Tissue

Authors: Raviporn Madarasmi, Anjalee Vacharaksa, Pravej Serichetaphongse

Abstract:

The host response in peri-implant tissue may differ from that in periodontal tissue in a healthy individual. The purpose of this study is to investigate the expression of inflammatory cytokines in peri-implant crevicular fluid (PICF) from single implants with different abutment types in comparison to healthy periodontal tissue. Nineteen participants with healthy implants and teeth were recruited according to inclusion and exclusion criteria. PICF and gingival crevicular fluid (GCF) were collected using sterile paper points. The expression levels of inflammatory cytokines including IL-1α, IL-1β, TNF-α, IFN-γ, IL-6, and IL-8 were assessed using enzyme-linked immunosorbent assay (ELISA). A paired t-test was used to compare the expression levels of inflammatory cytokines around natural teeth and peri-implant tissue in PICF and GCF of the same individual. An independent t-test was used to compare the expression levels of inflammatory cytokines in PICF from titanium and UCLA abutments. Expression of IL-6, TNF-α, and IFN-γ in PICF was not statistically different from GCF in either the titanium or the UCLA abutment group. However, the level of IL-1α in PICF from implants with UCLA abutments was significantly higher than in GCF (P=0.030). In addition, the level of IL-1β in PICF from implants with titanium abutments was significantly higher than in GCF (P=0.032). When the abutment types were compared, IL-8 expression in PICF from implants with UCLA abutments was significantly higher than with titanium abutments (P=0.003).
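
For illustration only, a comparison of this kind (paired t-test within individuals, independent t-test between abutment groups) could be sketched as follows; the cytokine values below are invented placeholders, not the study's measurements:

    # Illustrative sketch: PICF vs. GCF in the same individuals (paired),
    # and UCLA vs. titanium abutments (independent groups).
    import numpy as np
    from scipy import stats

    il1a_picf = np.array([12.1, 9.8, 15.3, 11.0, 13.7])   # hypothetical pg/mL values
    il1a_gcf  = np.array([8.4, 9.1, 10.2, 9.7, 10.5])
    t_paired, p_paired = stats.ttest_rel(il1a_picf, il1a_gcf)   # same individuals
    print(f"paired t-test: t = {t_paired:.2f}, p = {p_paired:.3f}")

    il8_ucla     = np.array([20.5, 24.1, 22.8, 26.0])           # hypothetical groups
    il8_titanium = np.array([14.2, 16.9, 15.1, 13.8])
    t_ind, p_ind = stats.ttest_ind(il8_ucla, il8_titanium)      # different implants
    print(f"independent t-test: t = {t_ind:.2f}, p = {p_ind:.3f}")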

Keywords: Abutment, dental implant, gingival crevicular fluid and peri-implant crevicular fluid.

174 A Remote Sensing Approach for Vulnerability and Environmental Change in Apodi Valley Region, Northeast Brazil

Authors: Mukesh Singh Boori, Venerando Eustáquio Amaro

Abstract:

The objective of this study was to improve our understanding of vulnerability and environmental change, of its causes, intensity and distribution, and of human-environment effects on the ecosystem in the Apodi Valley region. The paper identifies, assesses and classifies vulnerability and environmental change in the Apodi Valley region using a combined approach of landscape pattern and ecosystem sensitivity. Models were developed using five thematic layers: geology, geomorphology, soil, vegetation and land use/cover, by means of a Geographical Information System (GIS) based on hydro-geophysical parameters. In spite of data problems and shortcomings, the vulnerability score computed with ESRI's ArcGIS 9.3, used to classify, weight and combine 15 separate land cover classes into a single indicator, provides a reliable measure of differences (6 classes) among regions and communities that are exposed to similar ranges of hazards. Indeed, the ongoing and active development of vulnerability concepts and methods has already produced tools that help overcome common issues, such as acting in a context of high uncertainty, taking into account the dynamics and spatial scale of a social-ecological system, or gathering viewpoints from different sciences to combine human- and impact-based approaches. Based on this assessment, the paper proposes concrete perspectives and possibilities for benefiting from existing commonalities in the construction and application of assessment tools.
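
A minimal sketch of the weighted-overlay idea behind such a GIS vulnerability score is given below; the layer scores and weights are assumed for illustration and are not those used in the study:

    # Five thematic rasters are scored, weighted and summed into a single
    # vulnerability indicator, then binned into six classes.
    import numpy as np

    rows, cols = 100, 100
    layers = {                                   # each raster holds class scores 1-5
        "geology":       np.random.randint(1, 6, (rows, cols)),
        "geomorphology": np.random.randint(1, 6, (rows, cols)),
        "soil":          np.random.randint(1, 6, (rows, cols)),
        "vegetation":    np.random.randint(1, 6, (rows, cols)),
        "land_use":      np.random.randint(1, 6, (rows, cols)),
    }
    weights = {"geology": 0.15, "geomorphology": 0.20, "soil": 0.20,
               "vegetation": 0.20, "land_use": 0.25}             # assumed weights

    score = sum(weights[name] * layers[name] for name in layers) # weighted overlay
    classes = np.digitize(score, bins=np.linspace(score.min(), score.max(), 7)[1:-1])
    print(np.bincount(classes.ravel()))          # pixel count per vulnerability class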

Keywords: Vulnerability, Land use/cover, Ecosystem, Remote sensing, GIS.

173 International Comparative Study of International Financial Reporting Standards Adoption and Earnings Quality: Effects of Differences in Accounting Standards, Industry Category, and Country Characteristics

Authors: Ichiro Mukai

Abstract:

The purpose of this study is to investigate whether firms applying International Financial Reporting Standards (IFRS) provide high-quality and comparable earnings information that is more useful for the decision making of information users than that of firms applying local Generally Accepted Accounting Principles (GAAP). Focus is placed on the earnings quality of listed firms in several developed countries: Australia, Canada, France, Germany, Japan, the United Kingdom (UK), and the United States (US). Except for Japan and the US, the adoption of IFRS is mandatory for listed firms in these countries. In Japan, the application of IFRS is allowed for specific listed firms. In the US, foreign firms listed on the US securities market are permitted to apply IFRS, but listed domestic firms are prohibited from doing so. In this paper, the differences in earnings quality between firms applying local GAAP and those applying IFRS are compared in each country and industry category, and the reasons for differences in earnings quality are analyzed using various factors. The results show that, although the earnings quality of firms applying IFRS is higher than that of firms applying local GAAP, this varies with country and industry category. Thus, even if a single set of global accounting standards is used for all listed firms worldwide, it is difficult to establish comparability of financial information among global firms. These findings imply that the various circumstances surrounding firms, industries, and countries influence business operations and affect the differences in earnings quality.

Keywords: Accruals, earnings quality, IFRS, information comparability.

172 Life Cycle Assessment Comparison between Methanol and Ethanol Feedstock for the Biodiesel from Soybean Oil

Authors: Pawit Tangviroon, Apichit Svang-Ariyaskul

Abstract:

As the limited availability of petroleum-based fuel has become a major concern, biodiesel is one of the most attractive alternative fuels because it is renewable and has advantages over conventional petroleum-based diesel. At present, biodiesel is generally produced by transesterification of vegetable oils with a low-molecular-weight alcohol, mainly methanol, using chemical catalysts. Methanol, however, is a petrochemical product, so biodiesel produced from it is not a purely renewable energy source. Ethanol, which is produced by fermentation, therefore appears to be a potential feedstock that would make biodiesel a purely renewable alternative fuel. The research is based on two biodiesel production processes, reacting soybean oil with methanol and with ethanol. Life cycle assessment was carried out in order to evaluate the environmental impacts and to identify the preferable process alternative. Nine mid-point impact categories are investigated. The results indicate that better performance on abiotic depletion potential (ADP) and acidification potential (AP) is observed for biodiesel production from methanol compared with biodiesel production from ethanol, due to lower energy consumption during the production processes. Except for ADP and AP, using methanol as feedstock does not show any advantage over biodiesel from ethanol. A single-score method is also included in this study in order to identify the better option between the two biodiesel production processes. Global normalization and weighting factors based on ecotaxes are used, and they show that producing biodiesel from ethanol has a lower environmental load compared to biodiesel from methanol.
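
The single-score step described above can be illustrated with a small sketch; the impact values, normalization references and weights below are placeholders rather than the study's LCA results:

    # Mid-point impact results are normalized and weighted into one number per route.
    impacts = {  # impact category: (methanol route, ethanol route), arbitrary units
        "abiotic depletion": (1.8, 2.1),
        "acidification":     (0.9, 1.1),
        "global warming":    (3.0, 2.4),
    }
    normalization = {"abiotic depletion": 2.0, "acidification": 1.0, "global warming": 3.0}
    weights       = {"abiotic depletion": 0.3, "acidification": 0.3, "global warming": 0.4}

    for route, idx in (("methanol", 0), ("ethanol", 1)):
        score = sum(weights[c] * impacts[c][idx] / normalization[c] for c in impacts)
        print(f"{route} biodiesel single score: {score:.3f}")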

Keywords: Biodiesel, Ethanol, Life Cycle Assessment, Methanol, Soybean Oil.

171 Modeling and Simulation of Overcurrent and Earth Fault Relay with Inverse Definite Minimum Time

Authors: Win Win Tun, Han Su Yin, Ohn Zin Lin

Abstract:

Transmission networks are an important part of an electric power system. Transmission lines not only have high power transmission capacity but are also prone to faults of large magnitude. Different types of faults occur on transmission lines, such as single line to ground (L-G) faults, double line to ground (L-L-G) faults, line to line (L-L) faults and three-phase (L-L-L) faults. These faults need to be cleared quickly in order to reduce damage to the system, and they have a high impact on the electrical power system equipment connected to the transmission line. The most common fault on a transmission line is the L-G fault. Therefore, protection relays are needed to protect the transmission line. The overcurrent and earth fault relay is an important relay used to protect transmission lines, distribution feeders, transformers, bus couplers, etc., and can serve as either main or backup protection. The modeling of protection relays is important to indicate the effects of network parameters and configurations on the operation of the relays. Therefore, the modeling of an overcurrent and earth fault relay is described in this paper. Overcurrent and earth fault relays with standard inverse definite minimum time characteristics are modeled and simulated using MATLAB/Simulink software. The developed model was tested with L-G, L-L-G, L-L and L-L-L faults at various fault locations and with a fault resistance of 0.001 Ω. The simulation results obtained with MATLAB show the feasibility of analyzing transmission line protection with the overcurrent and earth fault relay.
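
For reference, the standard inverse definite minimum time characteristic named above follows the well-known IEC curve t = TMS * 0.14 / ((I/Is)^0.02 - 1); a small sketch with assumed pickup current and time multiplier settings:

    # Operating time of a standard inverse (IEC) overcurrent relay.
    def idmt_standard_inverse(fault_current, pickup_current, tms=0.1):
        psm = fault_current / pickup_current          # plug setting multiplier
        if psm <= 1.0:
            return float("inf")                       # relay does not operate
        return tms * 0.14 / (psm ** 0.02 - 1.0)

    for i_fault in (800.0, 2000.0, 5000.0):           # fault currents in A (assumed)
        t = idmt_standard_inverse(i_fault, pickup_current=400.0, tms=0.1)
        print(f"I = {i_fault:.0f} A -> trip time = {t:.2f} s")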

Keywords: Transmission line, overcurrent and earth fault relay, standard inverse definite minimum time, various faults, MATLAB Software.

170 Combination of Tensile Strength and Elongation of Reverse Rolled TaNbHfZrTi Refractory High Entropy Alloy

Authors: M. Veerasham

Abstract:

Refractory high entropy alloys are potential materials for high-temperature applications because of their ability to retain high strength up to 1600°C. However, their practical application has been limited by poor elongation at room temperature. Decreasing the average valence electron concentration (VEC) is therefore an effective design strategy to improve the intrinsic ductility of refractory high entropy alloys. In this work, the high-entropy alloy TaNbHfZrTi was processed at room temperature by step-wise reverse rolling up to a 90% reduction in thickness. The 90% reverse-rolled samples were subsequently annealed at 800°C and 1000°C for 1 h to study phase stability, microstructure, texture, and mechanical properties. The 90% reverse-rolled condition contains a single body-centered cubic (BCC) phase; upon annealing at 800°C, a secondary BCC-2 phase forms. Partially recrystallized and completely recrystallized microstructures developed after annealing at 800°C and 1000°C, respectively. The reverse-rolled condition and the condition annealed at 1000°C exhibit extraordinary room-temperature tensile properties, combining high ultimate tensile strength (UTS) with little loss of ductility and thus easing the usual "strength-ductility" trade-off. The 90% reverse-rolled condition and the condition annealed at 1000°C for 1 h show UTS of 1430 MPa and 1556 MPa with appreciable elongations of 21% and 20%, respectively. A hierarchical microstructure developed in the sample annealed at 1000°C, which led to the simultaneous increase in tensile strength and elongation.
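
As a quick aside on the VEC design rule mentioned above, the average valence electron concentration of equiatomic TaNbHfZrTi can be checked from the usual group numbers (Ti, Zr, Hf = 4; Nb, Ta = 5); the equiatomic composition is assumed:

    # Composition-weighted average VEC for an equiatomic five-component alloy.
    vec = {"Ta": 5, "Nb": 5, "Hf": 4, "Zr": 4, "Ti": 4}
    fractions = {el: 0.2 for el in vec}               # equiatomic alloy

    avg_vec = sum(fractions[el] * vec[el] for el in vec)
    print(f"average VEC of TaNbHfZrTi = {avg_vec:.1f}")   # 4.4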

Keywords: refractory high entropy alloys, reverse rolling, recrystallization, microstructure, tensile properties

169 Genetic Algorithm Based Approach for Actuator Saturation Effect on Nonlinear Controllers

Authors: M. Mohebbi, K. Shakeri

Abstract:

In real applications of active control systems to mitigate the response of structures subjected to severe external excitations, such as earthquakes and wind-induced vibrations, the capacity of the actuators is limited and they therefore saturate. Hence, in designing controllers for linear and nonlinear structures under severe earthquakes, actuator saturation should be considered as a constraint. In this paper, the optimal design of active controllers for nonlinear structures considering actuator saturation is studied. To this end, a method is proposed based on an optimization problem that takes minimizing the maximum displacement of the structure as the objective and the limited capacity of the actuator as a constraint. To evaluate the effectiveness of the proposed method, a single-degree-of-freedom (SDF) structure with bilinear hysteretic behavior was simulated under white-noise ground accelerations of different amplitudes. An active tendon control mechanism, comprising pre-stressed tendons and an actuator, and an instantaneous optimal control algorithm based on the extended nonlinear Newmark method were used as the control mechanism and algorithm. To enhance the efficiency of the controllers, the weights corresponding to displacement, velocity, acceleration and control force in the performance index were found using a Distributed Genetic Algorithm (DGA). According to the results, the proposed method is effective in accounting for actuator saturation when designing optimal controllers for nonlinear frames. It is also shown that the actuator capacity and the average value of the required control force are two important factors when designing nonlinear controllers that account for actuator saturation.

Keywords: Active control, Actuator Saturation, Nonlinear, Optimization.

168 Process Optimization and Automation of Information Technology Services in a Heterogenic Digital Environment

Authors: Tasneem Halawani, Yamen Khateeb

Abstract:

With customers' ever-increasing expectations of fast service provisioning for all their business needs, information technology (IT) organizations, as business partners, have to cope with this demanding environment and deliver their services in the most effective and efficient way. The purpose of this paper is to identify optimization and automation opportunities for the top requested IT services in a heterogenic digital environment with a widely spread customer base. In collaboration with systems, process, and subject matter experts (SMEs), the processes in scope were approached by analyzing four years of related historical data, identifying and surveying stakeholders, modeling the as-is processes, and studying systems integration/automation capabilities. This effort resulted in identifying several pain areas, including standardization, unnecessary customer and IT involvement, manual steps, systems integration, and performance measurement. These pain areas were addressed by standardizing the top five requested IT services, eliminating or automating 43 steps, and utilizing a single platform for end-to-end process execution. In conclusion, the optimization of IT service request processes in a heterogenic digital environment with a widely spread customer base is challenging, yet achievable without compromising service quality or customers' added value. Further studies can focus on measuring the value of the eliminated/automated process steps to quantify the enhancement impact. Moreover, a similar approach can be utilized to optimize other IT service requests, with a focus on business criticality.

Keywords: Automation, customer value, heterogenic, integration, IT services, optimization, processes.

167 Estimation of Relative Permeabilities and Capillary Pressures in Shale Using Simulation Method

Authors: F. C. Amadi, G. C. Enyi, G. Nasr

Abstract:

Relative permeabilities are practical factors used to correct the single-phase Darcy's law for application to multiphase flow. Relative permeability and capillary pressures are used for the effective characterization of large-scale multiphase flow in hydrocarbon recovery. These parameters are acquired via special core flooding experiments, and the special core analysis (SCAL) module of reservoir simulation is applied by engineers for their evaluation. However, core flooding experiments on shale core samples are expensive and time consuming before the various flow assumptions, for instance Darcy's law, are satisfied. This makes it imperative to apply core-flooding simulations, in which analyses of relative permeabilities and capillary pressures of multiphase flow can be carried out efficiently, effectively and comparatively quickly. This paper presents a Sendra software simulation of core flooding to obtain relative permeabilities and capillary pressures using different correlations. The approach used in this study had three steps. First, basic petrophysical parameters of the Marcellus shale sample, such as porosity, were determined using laboratory techniques. Second, core flooding was simulated for a particular injection scenario using different correlations. Third, the best-fit correlations for the estimation of relative permeability and capillary pressure were obtained. This approach saves cost and time and is very reliable for the computation of relative permeability and capillary pressures at steady or unsteady state and for drainage or imbibition processes in the oil and gas industry when compared to other methods.
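
As one illustration of the kind of correlation such simulators fit (the abstract does not specify which correlations were used), a Corey-type relative permeability model might be sketched as follows, with assumed end points and exponents:

    # Illustrative Corey-type relative permeability correlation.
    def corey_relperm(sw, swc=0.2, sor=0.25, krw_max=0.4, kro_max=0.9, nw=3.0, no=2.0):
        swe = (sw - swc) / (1.0 - swc - sor)          # normalized water saturation
        swe = min(max(swe, 0.0), 1.0)
        krw = krw_max * swe ** nw                     # water relative permeability
        kro = kro_max * (1.0 - swe) ** no             # oil relative permeability
        return krw, kro

    for sw in (0.2, 0.4, 0.6, 0.75):
        krw, kro = corey_relperm(sw)
        print(f"Sw = {sw:.2f}: krw = {krw:.3f}, kro = {kro:.3f}")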

Keywords: Special core analysis (SCAL), relative permeability, capillary pressures, drainage, imbibition.

166 A Survey of 2nd Year Students’ Frequent English Writing Errors and the Effects of Participatory Error Correction Process

Authors: Chaiwat Tantarangsee

Abstract:

The purposes of this study are 1) to study the effects of a participatory error correction process and 2) to find out the students' satisfaction with such a process. This is a quasi-experimental study with a single group, in which data were collected five times, before and after four experimental rounds of the participatory error correction process, which included providing coded indirect corrective feedback on the students' texts together with error treatment activities. The sample comprised 52 second-year English major students of the Faculty of Humanities and Social Sciences, Suan Sunandha Rajabhat University. The tool for the experimental study was the lesson plan of the course Reading and Writing English for Academic Purposes II, and the tools for data collection were 5 writing tests of short texts and a questionnaire. Based on formative evaluation of the students' writing ability prior to and after each of the 4 experiments, the findings show significantly higher scores, with a statistical difference at the 0.00 level. Moreover, in terms of effect size, the means of the students' scores prior to and after the 4 experiments give d equal to 0.6801, 0.5093, 0.5071, and 0.5296, respectively. It can be concluded that the participatory error correction process enables all of the students to learn equally well and improves their ability to write short texts. Finally, the students' overall satisfaction with the participatory error correction process is at a high level (Mean = 4.39, S.D. = 0.76).
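
For readers unfamiliar with the effect size d reported above, a sketch of a Cohen's-d-style computation is shown below with made-up pre/post scores; the study's own formula and data may differ:

    # Effect size d from pre- and post-test scores (illustrative values only).
    import numpy as np

    pre  = np.array([11, 12, 10, 13, 9, 12, 11, 10], dtype=float)
    post = np.array([14, 15, 12, 16, 12, 14, 13, 13], dtype=float)

    pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2.0)
    d = (post.mean() - pre.mean()) / pooled_sd
    print(f"Cohen's d = {d:.4f}")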

Keywords: Coded indirect corrective feedback, participatory error correction process, error treatment.

165 Life Estimation of Induction Motor Insulation under Non-Sinusoidal Voltage and Current Waveforms Using Fuzzy Logic

Authors: Triloksingh G. Arora, Mohan V. Aware, Dhananjay R. Tutakne

Abstract:

Thyristor-based, firing-angle-controlled voltage regulators are extensively used for speed control of single-phase induction motors. This leads to power savings, but the applied voltage and current waveforms become non-sinusoidal. These non-sinusoidal waveforms increase voltage and thermal stresses, which result in accelerated insulation aging and thus reduce motor life. Life models that predict the capability of insulation under such multi-stress situations tend to be very complex and somewhat impractical. This paper presents a fuzzy logic application to investigate the synergistic effect of voltage and thermal stresses on the intrinsic aging of induction motor insulation. A fuzzy expert system is developed to estimate the life of induction motor insulation under multiple stresses. Three insulation degradation parameters, viz. peak modification factor, wave shape modification factor and thermal loss, are experimentally obtained for different firing angles. The fuzzy expert system consists of fuzzification of the insulation degradation parameters, algorithms based on the inverse power law to estimate the life, and a defuzzification process to output the life. An electro-thermal life model is developed from the results of the fuzzy expert system. This fuzzy-logic-based electro-thermal life model can be used for life estimation of induction motors operated with non-sinusoidal voltage and current waveforms.
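
The inverse power law mentioned above can be illustrated with a small sketch; the reference life, reference stress and exponent are assumed values, not the experimentally obtained ones:

    # Inverse-power-law life model: life falls as a power of the applied stress.
    def inverse_power_law_life(stress, ref_stress=230.0, ref_life_h=20000.0, n=9.0):
        return ref_life_h * (stress / ref_stress) ** (-n)

    for v_peak in (230.0, 250.0, 280.0):              # peak voltage stress, V (assumed)
        print(f"stress = {v_peak:.0f} V -> estimated life = "
              f"{inverse_power_law_life(v_peak):.0f} h")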

Keywords: Aging, Dielectric losses, Insulation and Life Estimation.

164 Effect of Acids with Different Chain Lengths Modified by Methane Sulfonic Acid and Temperature on the Properties of Thermoplastic Starch/Glycerin Blends

Authors: Chi-Yuan Huang, Mei-Chuan Kuo, Ching-Yi Hsiao

Abstract:

In this study, acids with various chain lengths (C6, C8, C10 and C12), modified by methane sulfonic acid (MSA) and temperature, were used to modify tapioca starch (TPS); glycerol (GA) was then added to the modified starch to prepare new blends. The mechanical, thermal and physical properties of the blends were studied. The investigation was divided into two parts. First, biodegradable materials, namely starch and glycerol with hexanedioic acid (HA), suberic acid (SBA), sebacic acid (SA) or decanedicarboxylic acid (DA), were processed at different temperatures (90, 110 and 130 °C), and the solution was then added to the modified starch to prepare the blends using a single-screw extruder. The FT-IR patterns showed the characteristic C=O ester peak at 1730 cm-1, confirming that the different chain-length acids (C6, C8, C10 and C12) reacted with glycerol by esterification; these esters plasticize the blends during extrusion. In addition, the blends showed improved hydrolysis resistance and thermal stability, and the water contact angle increased from 43.0° to 64.0°. Second, the HA (110 °C), SBA (110 °C), SA (110 °C), and DA (130 °C) blends were used, because they possessed good mechanical properties, water resistance and thermal stability. Various contents (0, 0.005, 0.010, 0.020 g) of MSA were also used to modify the mechanical properties of the blends. When MSA was added to the blends, the FT-IR patterns again showed the C=O ester peak at 1730 cm-1, and hydrophobic blends were produced. The water contact angle of the MSA blends increased from 55.0° to 71.0°. Although the elongation at break of the MSA blends decreased from the original 220% to 128%, the stress increased from 2.5 MPa to 5.1 MPa. Therefore, the optimal composition was the DA blend (130 °C) with 0.005 g of MSA added.

Keywords: Chain length acids, methane sulfonic acid, tapioca starch, tensile stress.

163 Time Series Forecasting Using Various Deep Learning Models

Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan

Abstract:

Time Series Forecasting (TSF) is used to predict target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of recent attention-based transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Transformer) along with a baseline method. The hourly dataset we used is the Beijing Air Quality Dataset from the University of California, Irvine (UCI) website, which includes a multivariate time series of many factors measured on an hourly basis for a period of 5 years (2010-14). For each model, we also report on the relationship between the performance and the look-back window size and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131), for most of our single-step and multi-step predictions. The best look-back window size to predict 1 hour into the future appears to be one day, while 2 or 4 days perform best for predicting 3 hours into the future.
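
The fixed look-back windowing described above can be sketched as follows; this only shows how the supervised samples are framed and makes no claim about the models or dataset handling used in the paper:

    # Frame a series as supervised samples: `window` past hours as input,
    # the value `horizon` steps ahead as the target.
    import numpy as np

    def make_windows(series, window=24, horizon=1):
        x, y = [], []
        for t in range(len(series) - window - horizon + 1):
            x.append(series[t:t + window])
            y.append(series[t + window + horizon - 1])
        return np.array(x), np.array(y)

    hourly = np.sin(np.linspace(0, 60, 24 * 365))     # stand-in for an hourly signal
    X, y = make_windows(hourly, window=24, horizon=1) # 1-day look-back, 1 h ahead
    print(X.shape, y.shape)                           # (num_samples, 24) (num_samples,)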

Keywords: Air quality prediction, deep learning algorithms, time series forecasting, look-back window.

162 Organoclay of Cetyl Trimethyl Ammonium- Montmorillonite: Preparation and Study in Adsorption of Benzene-Toluene-2-Chlorophenol

Authors: Is Fatimah, Winda Novita, Yopi Andika, Imam Sahroni, Basitoh Djaelani, Yuyun Yunani N.

Abstract:

Contamination of water by aromatic compounds can cause severe, long-lasting effects not only on biotic organisms but also on human health. Several alternative technologies for the remediation of polluted water have been attempted. One of these is the adsorption of aromatic compounds using organically modified clay minerals. The porous structure of clay is a favorable property for molecular adsorptivity, and it can be enhanced by immobilizing hydrophobic species that attract organic compounds. In this work, natural montmorillonite was modified with cetyltrimethylammonium (CTMA+) and evaluated as an adsorbent for the aromatic compounds benzene, toluene and 2-chlorophenol in single- and multi-component solutions in an ethanol:water solvent. Preparation of CTMA-montmorillonite was conducted by a simple ion-exchange procedure, and characterization was carried out using X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FTIR) and gas sorption analysis. The influence of the structural modification of montmorillonite on its adsorption capacity and its adsorption affinity for the organic compounds was studied. It was shown that the adsorptivity of montmorillonite was increased by the modification, associated with the arrangement of CTMA+ in the structure, even though the specific surface area of the modified montmorillonite was lower than that of the raw montmorillonite. The adsorption rates indicate that the material adsorbs the compounds in the following order: benzene > toluene > 2-chlorophenol. The adsorption isotherms of benzene and toluene showed first-order adsorption kinetics, indicating a partition phenomenon of the compounds between the aqueous phase and the organophilic CTMA-montmorillonite.
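
The first-order kinetics noted above are commonly written in the pseudo-first-order (Lagergren) form q_t = q_e (1 - exp(-k1 t)); a small sketch with assumed equilibrium capacity and rate constant:

    # Pseudo-first-order adsorption kinetics (illustrative parameters only).
    import math

    def pseudo_first_order(t_min, qe=35.0, k1=0.05):
        return qe * (1.0 - math.exp(-k1 * t_min))

    for t in (10, 30, 60, 120):
        print(f"t = {t:>3} min: q_t = {pseudo_first_order(t):.1f} mg/g")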

Keywords: Adsorption, Desorption, Montmorillonite, Organoclay, Surfactant.

161 Performance Analysis of Chrominance Red and Chrominance Blue in JPEG

Authors: Mamta Garg

Abstract:

While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the bandwidth commonly available to transmit them over the Internet and in applications. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root mean square error and the peak signal to noise ratio. The method analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least "important" information. In standard JPEG, both chroma components are downsampled simultaneously, but in this paper we compare the results when compression is done by downsampling a single chroma component. We demonstrate that a higher compression ratio is achieved when chrominance blue is downsampled than when chrominance red is downsampled in JPEG compression, whereas the peak signal to noise ratio is higher when chrominance red is downsampled than when chrominance blue is downsampled. In particular, we use hats.jpg as a demonstration of JPEG compression using a low-pass filter and show that the image is compressed with barely any visible differences with both methods.
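
The two figures of merit used in the comparison can be computed as sketched below; the images and byte counts are stand-ins, not the hats.jpg results:

    # Compression ratio and PSNR from the MSE between original and decoded images.
    import numpy as np

    def psnr(original, decoded, max_val=255.0):
        mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

    def compression_ratio(original_bytes, compressed_bytes):
        return original_bytes / compressed_bytes

    orig = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in image
    dec  = np.clip(orig + np.random.randint(-3, 4, orig.shape), 0, 255).astype(np.uint8)
    print(f"PSNR = {psnr(orig, dec):.2f} dB")
    print(f"CR   = {compression_ratio(12288, 2048):.1f}:1")      # placeholder sizes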

Keywords: JPEG, Discrete Cosine Transform, Quantization, Color Space Conversion, Image Compression, Peak Signal to Noise Ratio & Compression Ratio.

160 Combustion Improvements by C4/C5 Bio-Alcohol Isomer Blended Fuels Combined with Supercharging and EGR in a Diesel Engine

Authors: Yasufumi Yoshimoto, Enkhjargal Tserenochir, Eiji Kinoshita, Takeshi Otaka

Abstract:

Next generation bio-alcohols produced from non-food based sources like cellulosic biomass are promising renewable energy sources. The present study investigates engine performance, combustion characteristics, and emissions of a small single cylinder direct injection diesel engine fueled by four kinds of next generation bio-alcohol isomer and diesel fuel blends with a constant blending ratio of 3:7 (mass). The tested bio-alcohol isomers here are n-butanol and iso-butanol (C4 alcohol), and n-pentanol and iso-pentanol (C5 alcohol). To obtain simultaneous reductions in NOx and smoke emissions, the experiments employed supercharging combined with EGR (Exhaust Gas Recirculation). The boost pressures were fixed at two conditions, 100 kPa (naturally aspirated operation) and 120 kPa (supercharged operation) provided with a roots blower type supercharger. The EGR rates were varied from 0 to 25% using a cooled EGR technique. The results showed that both with and without supercharging, all the bio-alcohol blended diesel fuels improved the trade-off relation between NOx and smoke emissions at all EGR rates while maintaining good engine performance, when compared with diesel fuel operation. It was also found that regardless of boost pressure and EGR rate, the ignition delays of the tested bio-alcohol isomer blends are in the order of iso-butanol > n-butanol > iso-pentanol > n-pentanol. Overall, it was concluded that, except for the changes in the ignition delays the influence of bio-alcohol isomer blends on the engine performance, combustion characteristics, and emissions are relatively small.

Keywords: Alternative fuel,  Butanol, Diesel engine, EGR, Next generation bio-alcohol isomer blended fuel, Pentanol, Supercharging.

159 A Numerical Study on Semi-Active Control of a Bridge Deck under Seismic Excitation

Authors: A. Yanik, U. Aldemir

Abstract:

This study investigates the benefits of implementing semi-active devices, relative to passive viscous damping, in the context of seismically isolated bridge structures. Since the intrinsically nonlinear nature of semi-active devices prevents the direct evaluation of Laplace transforms, frequency response functions are compiled from the computed time-history response to sinusoidal and pulse-like seismic excitation. A simple semi-active control policy is considered alongside passive linear viscous damping and an optimal non-causal semi-active control strategy. The optimal strategy requires optimization, during which Euler-Lagrange equations are solved numerically. The optimal closed-loop performance is evaluated for an idealized controllable dash-pot. A simplified single-degree-of-freedom model of an isolated bridge is used as a numerical example. Two bridge cases are investigated: the bridge deck without the isolation bearing and the bridge deck with the isolation bearing. To compare the performance of the passive and semi-active control cases, frequency-dependent acceleration, velocity and displacement response transmissibility ratios Ta(w), Tv(w), and Td(w) are defined. To fully investigate the behavior of the structure subjected to the sinusoidal and pulse-type excitations, different damping levels are considered. The numerical results show that, under external excitation, the bridge deck with semi-active control exhibits better structural performance than the passive bridge deck case.

Keywords: Bridge structures, passive control, seismic, semi-active control, viscous damping.

158 Evaluation of Efficient CSI Based Channel Feedback Techniques for Adaptive MIMO-OFDM Systems

Authors: Muhammad Rehan Khalid, Muhammad Haroon Siddiqui, Danish Ilyas

Abstract:

This paper explores the implementation of adaptive coding and modulation schemes for Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) feedback systems. Adaptive coding and modulation enables robust and spectrally efficient transmission over time-varying channels. The basic premise is to estimate the channel at the receiver and feed this estimate back to the transmitter, so that the transmission scheme can be adapted to the channel characteristics. Two types of codebook-based channel feedback techniques are used in this work: long-term and short-term CSI at the transmitter are used for efficient channel utilization. OFDM is a powerful technique for communication systems suffering from frequency selectivity. Combined with multiple antennas at the transmitter and receiver, OFDM proves to be robust against delay spread. Moreover, it leads to significant data rates with improved bit error performance compared with links having only a single antenna at both the transmitter and receiver. The coded modulation increases the effective transmit power relative to uncoded variable-rate variable-power MQAM performance for the MIMO-OFDM feedback system. Hence, the proposed arrangement becomes an attractive approach for achieving enhanced spectral efficiency and improved error rate performance in next-generation high-speed wireless communication systems.

Keywords: Adaptive Coded Modulation, MQAM, MIMO, OFDM, Codebooks, Feedback.

157 Rear Seat Belt Use in Developing Countries: A Case Study from the United Arab Emirates

Authors: Salaheddine Bendak, Sara S. Alnaqbi

Abstract:

The seat belt is a vital tool in improving traffic safety and minimising injuries due to traffic accidents. Most developing countries face big problems associated with the human and financial losses due to traffic accidents. One way to minimise these losses is the use of seat belts by passengers in both the front and rear seats of a vehicle; however, close to nothing is known about the rates of seat belt use among rear seat passengers in many developing countries. There is therefore a need to estimate these rates in order to know the extent of the problem and how people interact with traffic safety measures such as seat belts, and to find demographic characteristics that contribute to wearing or not wearing seat belts, with the aim of finding solutions to improve wearing rates. In this paper, an observational study was conducted to gather data on restraint use in motor vehicle rear seats at eight observational stations in a rapidly developing country, the United Arab Emirates (UAE), and to estimate a use rate for the whole country. A questionnaire was also used in order to study the demographic characteristics affecting the wearing of seat belts in rear seats. Results of the observational study showed that the overall wearing rate was 12.3%, which is very low when compared to other countries. Survey results show that single, male, less educated passengers from Arab and South Asian backgrounds reportedly use seat belts less than others. Finally, solutions to improve this wearing rate are put forward based on the results of this study.

Keywords: Seat belts, traffic crashes, United Arab Emirates, rear seats.

156 Automated Natural Hazard Zonation System with Internet-SMS Warning: Distributed GIS for Sustainable Societies Creating Schema & Interface for Mapping & Communication

Authors: Devanjan Bhattacharya, Jitka Komarkova

Abstract:

The research describes the implementation of a novel, stand-alone system for dynamic hazard warning. The system uses existing infrastructure already in place, such as mobile networks and a laptop/PC, together with a small software installation. The geospatial datasets are maps of a region, which are likewise frugal. Hence there is little need to invest, and the system reaches everyone with a mobile phone. A novel architecture for hazard assessment and warning is introduced in which major ICT technologies are interfaced to give a unique WebGIS-based dynamic real-time geohazard warning communication system. An architecture of this kind, integrating WebGIS with telecommunication technology, has not been introduced before: existing technologies are interfaced in a novel architectural design to address a neglected domain through dynamically updatable WebGIS-based warning communication. The work presents this new architecture and its novelty in addressing hazard warning techniques in a sustainable and user-friendly manner. The coupling of hazard zonation and hazard warning procedures into a single system is demonstrated, and a generalized architecture for deciphering a range of geo-hazards has been developed. The developmental work presented here can therefore be summarized as: the development of an internet-SMS based automated geo-hazard warning communication system; the integration of a warning communication system with a hazard evaluation system; the interfacing of different open-source technologies towards the design and development of a warning system; the modularization of different technologies towards the development of a warning communication system; and automated data creation, transformation and dissemination over different interfaces. The architecture of the developed warning system is functionally automated as well as generalized enough to be used for any hazard, and the setup requirement has been kept to a minimum.

Keywords: Geospatial, web-based GIS, geohazard, warning system.

155 PRENACEL: Development and Evaluation of an M-Health Strategy to Improve Prenatal Care in Brazil

Authors: E. M. Vieira, C. S. Vieira, L. P. Bonifácio, L. M. de Oliveira Ciabati, A. C. A. Franzon, F. S. Zaratini, J. A. C. Sanchez, M. S. Andrade, J. P. Dias de Souza

Abstract:

The quality of prenatal care is key to reducing maternal morbidity and mortality, and communication between the health service and users can stimulate prevention and care. M-health has been an important and low-cost strategy for health education. The PRENACEL programme (prenatal care on the cell phone) was developed: it consists of a programme of information sent via SMS from the 20th week of pregnancy up to the 12th week after delivery, with messages about prenatal care, birth, contraception and breastfeeding. Pregnant women could also ask questions about their own health. The objective of this study was to evaluate the implementation of PRENACEL as a useful complement to standard prenatal care. Twenty health clinics were selected and randomized by cluster, 10 as the intervention group and 10 as the control group. In the intervention group, women and their partners were invited to participate; the control group received standard prenatal care. All women were interviewed in the immediate post-partum period and in the 12th and 24th weeks post-partum. Most women were married, had more than 8 years of schooling and visited the clinic more than 6 times during prenatal care. The intervention group presented the lowest percentage of participants of higher economic status (5.6%), fewer single mothers and no drug users. It also presented more prenatal care visits than the control group, was less likely to present Severe Acute Maternal Mortality, and had a higher percentage of partners present at the birth (75.4%) compared to the control group. Although the study is still being carried out, preliminary data show positive results regarding women's compliance with prenatal care.

Keywords: Cellphone, health technology, prenatal care, prevention.

154 Fractal Dimension of Breast Cancer Cell Migration in a Wound Healing Assay

Authors: R. Sullivan, T. Holden, G. Tremberger, Jr, E. Cheung, C. Branch, J. Burrero, G. Surpris, S. Quintana, A. Rameau, N. Gadura, H. Yao, R. Subramaniam, P. Schneider, S. A. Rotenberg, P. Marchese, A. Flamhlolz, D. Lieberman, T. Cheung

Abstract:

Migration in a breast cancer cell wound healing assay was studied using image fractal dimension analysis. The migration of MDA-MB-231 cells (highly motile) in a wound healing assay was captured using time-lapse phase contrast video microscopy and compared to the migration of MDA-MB-468 cells (moderately motile). The Higuchi fractal method was used to compute the fractal dimension of the image intensity fluctuation along a single-pixel-wide region parallel to the wound. The near-wound region fractal dimension was found to decrease three times faster initially in the MDA-MB-231 cells than in the less aggressive MDA-MB-468 cells. The inner region fractal dimension was found to be fairly constant in time for both cell types and suggests a wound influence range of about 15 cell layers. The box-counting fractal dimension method was also used to study regions of interest (ROI). The MDA-MB-468 ROI area fractal dimension was found to decrease continuously up to 7 hours, while the MDA-MB-231 ROI area fractal dimension was found to increase, consistent with the behavior of an HGF-treated MDA-MB-231 wound healing assay posted in the public domain. A fractal-dimension-based capacity index has been formulated to quantify the invasiveness of the MDA-MB-231 cells in the direction perpendicular to the wound. Our results suggest that image intensity fluctuation fractal dimension analysis can be used as a tool to quantify cell migration in terms of cancer severity and treatment response.
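
A minimal sketch of the Higuchi fractal dimension computation for a one-dimensional intensity profile (the kind of curve taken along a single pixel row parallel to the wound) is given below; kmax and the test signal are assumed:

    # Higuchi fractal dimension of a 1-D signal.
    import numpy as np

    def higuchi_fd(x, kmax=8):
        x = np.asarray(x, dtype=float)
        n = len(x)
        log_k, log_l = [], []
        for k in range(1, kmax + 1):
            lengths = []
            for m in range(k):
                idx = np.arange(m, n, k)
                num = int(np.floor((n - m - 1) / k))   # number of increments for this start
                if num < 1:
                    continue
                dist = np.sum(np.abs(np.diff(x[idx])))
                lengths.append(dist * (n - 1) / (num * k) / k)
            log_k.append(np.log(1.0 / k))
            log_l.append(np.log(np.mean(lengths)))
        slope, _ = np.polyfit(log_k, log_l, 1)         # FD is the slope of log L vs log 1/k
        return slope

    profile = np.cumsum(np.random.randn(1024))         # stand-in intensity profile
    print(f"Higuchi fractal dimension ~ {higuchi_fd(profile):.2f}")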

Keywords: Higuchi fractal dimension, box-counting fractal dimension, cancer cell migration, wound healing.

153 Simplified Stress Gradient Method for Stress-Intensity Factor Determination

Authors: Jeries J. Abou-Hanna

Abstract:

Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that have been well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. In addition to giving overly conservative results, numerical methods that require extensive computational effort, and those requiring copious user parameters, hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables in determining stress-intensity factors through the use of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum stress value, but requires the use of a critical volume in which the crack exists. In order to understand the effectiveness of this technique, this study investigated components of different notch geometries and varying levels of stress gradient. Two forms of weighting functions were employed to determine stress-intensity factors, and the results were compared to exact analytical methods. The results indicated that the "exponential" weighting function was superior to the "absolute" weighting function. An error band of +/- 10% was met for cases ranging from the steep stress gradient of a sharp v-notch to the less severe stress transitions of a large circular notch. The incorporation of the proposed method has been shown to be a worthwhile consideration.

Keywords: Fracture mechanics, finite element method, stress intensity factor, stress gradient.

152 Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis

Authors: A.K. Tangirala, S. Babji

Abstract:

In the last few years, three multivariate spectral analysis techniques namely, Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-negative Matrix Factorization (NMF) have emerged as effective tools for oscillation detection and isolation. While the first method is used in determining the number of oscillatory sources, the latter two methods are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory/noisy measurements which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure is prescribed based on the notion of sparseness index to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.
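
To make the spectral mixing setup concrete, a small sketch using non-negative matrix factorization on stacked power spectra is shown below; the signals, sampling and component count are illustrative and not taken from the paper:

    # Stack measurement power spectra into a non-negative matrix and factor with NMF;
    # the number of components stands in for the number of oscillatory sources.
    import numpy as np
    from sklearn.decomposition import NMF

    fs, n = 1.0, 2048
    t = np.arange(n) / fs
    src1 = np.sin(2 * np.pi * 0.05 * t)                       # two oscillatory sources
    src2 = np.sin(2 * np.pi * 0.12 * t)
    measurements = [0.9 * src1,
                    0.7 * src2,
                    0.5 * src1 + 0.4 * src2 + 0.1 * np.random.randn(n)]

    spectra = np.abs(np.fft.rfft(np.vstack(measurements))) ** 2  # rows = measurement PSDs
    model = NMF(n_components=2, init="nndsvda", max_iter=500)
    W = model.fit_transform(spectra)          # mixing weights per measurement
    H = model.components_                     # spectral signatures of the sources
    print(W.round(2))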

Keywords: non-negative matrix factorization, PCA, source separation, plant-wide diagnosis

151 Speciation, Preconcentration, and Determination of Iron(II) and (III) Using 1,10-Phenanthroline Immobilized on Alumina-Coated Magnetite Nanoparticles as a Solid Phase Extraction Sorbent in Pharmaceutical Products

Authors: Hossein Tavallali, Mohammad Ali Karimi, Gohar Deilamy-Rad

Abstract:

A method for the speciation, preconcentration and determination of Fe(II) and Fe(III) in pharmaceutical products was developed using alumina-coated magnetite nanoparticles (Fe3O4/Al2O3 NPs) as a solid phase extraction (SPE) sorbent in the magnetic mixed hemimicelle solid phase extraction (MMHSPE) technique, followed by flame atomic absorption spectrometry analysis. The procedure is based on complexation of Fe(II) with 1,10-phenanthroline (OP), a complexing reagent for Fe(II), immobilized on the modified Fe3O4/Al2O3 NPs. The extraction and concentration process for the pharmaceutical sample was carried out in a single step by mixing the extraction solvent and the magnetic adsorbents under ultrasonic action. The adsorbents were then easily isolated from the complicated matrix with an external magnetic field. Fe(III) ions were determined after being readily reduced to Fe(II) by adding a suitable reducing agent to the sample solutions. Compared with traditional methods, the MMHSPE method simplifies the operating procedure and reduces the analysis time. Various parameters influencing the speciation and preconcentration of trace iron, such as pH, sample volume, amount of sorbent, and type and concentration of eluent, were studied. Under the optimized operating conditions, a preconcentration factor of 167 for Fe(II) was obtained with the modified nano-magnetite. The detection limit and linear range of this method for iron were 1.0 ng.mL−1 and 9.0 - 175 ng.mL−1, respectively. The relative standard deviation for five replicate determinations of 30.00 ng.mL−1 Fe2+ was 2.3%.

Keywords: Alumina-coated magnetite nanoparticles, magnetic mixed hemimicelle solid-phase extraction, Fe(II) and Fe(III), pharmaceutical sample.

150 Cirrhosis Mortality Prediction as Classification Using Frequent Subgraph Mining

Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride

Abstract:

In this work, we use machine learning and data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models are applied to predict one-year mortality, and a comprehensive feature space including demographic information, comorbidities, clinical procedures and laboratory tests is analyzed. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used, and the Model for End-stage Liver Disease (MELD) prediction of mortality is used as a comparator. All of our models statistically significantly outperform the MELD-score model and show an average 10% improvement in the area under the curve (AUC). The FSM technique itself does not improve the model significantly, but FSM together with a machine learning technique called an ensemble further improves model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model does not yield significant improvements. Our work applies modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
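
A sketch of the kind of AUC comparison described above is given below, using synthetic features, labels and a crude baseline score in place of the cirrhosis cohort and the MELD score:

    # Compare the AUC of a learned classifier against a baseline risk score.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))                          # stand-in clinical features
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)
    baseline_score = X[:, 0] + rng.normal(scale=1.5, size=1000)  # crude baseline

    X_tr, X_te, y_tr, y_te, _, base_te = train_test_split(
        X, y, baseline_score, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    auc_model = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    auc_base  = roc_auc_score(y_te, base_te)
    print(f"model AUC = {auc_model:.3f}, baseline AUC = {auc_base:.3f}")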

Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning

149 Signing the First Packet in Amortization Scheme for Multicast Stream Authentication

Authors: Mohammed Shatnawi, Qusai Abuein, Susumu Shibusawa

Abstract:

Signature amortization schemes have been introduced for authenticating multicast streams, in which a single signature is amortized over several packets. The hash value of each packet is computed, and some hash values are appended to other packets, forming what is known as a hash chain. These schemes divide the stream into blocks, each block being a number of packets; the signature packet in these schemes is either the first or the last packet of the block. Amortization schemes are efficient solutions in terms of computation and communication overhead, especially in real-time environments. The main factor determining the effectiveness of an amortization scheme is its hash chain construction. Some studies show that signing the first packet of each block reduces the receiver's delay and prevents DoS attacks; other studies show that signing the last packet reduces the sender's delay. To our knowledge, there are no studies that show which is better, signing the first or the last packet, in terms of authentication probability and resistance to packet loss. In this paper we introduce another scheme for authenticating multicast streams that is robust against packet loss, reduces the overhead, and at the same time prevents the DoS attacks experienced by the receiver. Our scheme, the Multiple Connected Chain signing the First packet (MCF), appends the hash values of specific packets to other packets, and then appends some hashes to the signature packet, which is sent as the first packet in the block. This scheme is especially efficient in terms of receiver's delay. We discuss and evaluate the performance of our proposed scheme against schemes that sign the last packet of the block.
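
A toy sketch of the general idea of signing the first packet of a block is shown below; it uses a simple single chain rather than the multiple connected chain (MCF) construction, and the signature is mocked with a hash:

    # Fold hashes of later packets into earlier ones; only the first packet of the
    # block carries the (mocked) signature over the accumulated hash.
    import hashlib

    def sha256(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def mock_sign(data: bytes) -> bytes:              # stand-in for a real signature
        return sha256(b"private-key" + data)

    block = [f"packet-{i}".encode() for i in range(1, 9)]   # payloads of one block

    # simple single chain: each packet carries the hash of the next one
    hashes = [b""] * len(block)
    for i in range(len(block) - 1, 0, -1):
        hashes[i - 1] = sha256(block[i] + hashes[i])

    signature = mock_sign(block[0] + hashes[0])       # first packet is the signature packet
    print("signature packet digest:", signature.hex()[:32], "...")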

Keywords: multicast stream authentication, hash chain construction, signature amortization, authentication probability.

148 An Autonomous Collaborative Forecasting System Implementation – The First Step towards Successful CPFR System

Authors: Chi-Fang Huang, Yun-Shiow Chen, Yun-Kung Chung

Abstract:

In the past decade, artificial neural networks (ANNs) have been regarded as an instrument for problem-solving and decision-making; indeed, they have already delivered substantial efficiency and effectiveness improvements in industry and business. In this paper, Back-Propagation neural Networks (BPNs) are used in a modular fashion to demonstrate the performance of the collaborative forecasting (CF) function of a Collaborative Planning, Forecasting and Replenishment (CPFR®) system. CPFR balances sufficient product supply against the necessary customer demand in a Supply and Demand Chain (SDC). Several classical standard BPNs are grouped, made to collaborate, and exploited for the easy implementation of the proposed modular ANN framework, which is based on the topology of an SDC. Each individual BPN is applied as a modular tool to perform the task of forecasting the SKU (stock-keeping unit) levels that are managed and supervised at a point of sale (POS), a wholesaler, and a manufacturer in an SDC. The proposed modular BPN-based CF system is exemplified and experimentally verified using many datasets from the simulated SDC. The experimental results showed that a complex CF problem can be divided into a group of simpler sub-problems based on the individual independent trading partners distributed over the SDC, and that its SKU forecasting accuracy was satisfactory when the system's forecast values were compared to the original simulated SDC data. The primary task of implementing autonomous CF involves the study of supervised ANN learning methodology, which aims at making "knowledgeable" decisions for the best SKU sales plan and stock management.

Keywords: CPFR, artificial neural networks, global logistics, supply and demand chain.
