Search results for: rate based model.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16975

745 Investigating the Transformer Operating Conditions for Evaluating the Dielectric Response

Authors: Jalal M. Abdallah

Abstract:

This paper presents an experimental investigation of transformer dielectric response and solid insulation water content. The dielectric response was measured using the hybrid Frequency Dielectric Spectroscopy and Polarization Current (FDS & PC) method. The calculation of the water content in paper is based on the water content in oil and the obtained equilibrium curves. Reference measurements were performed at equilibrium conditions for the water content in oil and paper at different stable temperatures (25, 50, 60 and 70°C) in order to provide references for evaluating the insulation behavior under non-equilibrium conditions. Further measurements were performed under different simulated normal working modes of transformer operation at the same temperatures as the equilibrium conditions. The obtained results show that when the transformer temperature is much higher than the ambient temperature, the transformer temperature drops immediately after the transformer is disconnected from the network, and this temperature reduction influences the transformer insulation condition during the measuring process. In addition to the oil temperature near the sensors, the temperature uniformity in the transformer, which can be altered by a large change in the transformer load shortly before the measuring time, also influences the results. The investigations showed a strong influence of the time elapsed between disconnecting the transformer and beginning the measurements on the results. Online monitoring of the water content in paper was also performed, on the basis of online monitoring of the water content in oil and the obtained equilibrium curves; these measurements were carried out continuously for about 50 days, without any disconnection, in a prepared adiabatic room.
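As an illustration of the water-content calculation described above, the following is a minimal sketch of interpolating moisture-equilibrium curves; the curve values are made up for illustration and are not the paper's measured curves.

```python
# A minimal sketch (illustrative equilibrium values, NOT the paper's measured curves) of
# estimating water content in paper from water content in oil and temperature by
# interpolating a family of moisture-equilibrium curves.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

temps_c = np.array([25.0, 50.0, 60.0, 70.0])   # curve temperatures used in the study
oil_ppm = np.array([5.0, 10.0, 20.0, 40.0])    # water content in oil, ppm (assumed grid)
# Hypothetical equilibrium water content in paper (% by weight), one row per temperature.
paper_pct = np.array([[2.8, 3.9, 5.2, 6.8],
                      [1.6, 2.4, 3.5, 4.9],
                      [1.2, 1.9, 2.9, 4.2],
                      [0.9, 1.5, 2.4, 3.6]])

curves = RegularGridInterpolator((temps_c, oil_ppm), paper_pct)
# Estimate paper moisture for an oil sample of 12 ppm water at 55 °C.
print("Estimated paper moisture (%):", float(curves([[55.0, 12.0]])[0]))
```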

Keywords: Conductivity, Moisture, Temperature, Oil-paper insulation, Online monitoring, Water content in oil.

PDF Downloads: 2616
744 Geophysical Investigation for Pre-Engineering Construction Works in Part of Ilorin, Northcentral Nigeria

Authors: O. Ologe, A. I. Augie

Abstract:

A geophysical investigation involving geoelectric depth sounding was conducted as a pre-foundation study in part of Ilorin, Nigeria. The area is underlain by Precambrian basement complex rocks. Fifteen sounding stations were established along five traverses. The Vertical Electrical Soundings (VES) conducted along each traverse (three to five per traverse) were subjected to computer iteration using IP2Win software. Three to five subsurface geologic layers were delineated in the study area: the topsoil, with resistivity and thickness values of 103-210 Ωm and 0-1 m; a lateritic layer (117-590 Ωm and 1-4.7 m); sandy clay (137-859 Ωm and 2.9-4.3 m); the weathered layer (60.5-2539 Ωm and 3.2-10 m); and the fresh basement (2253 Ωm-∞ and 7.1 m-∞), respectively. The resistivity pseudosection shows a continuous high-resistivity zone at the surface. The resistivity of this layer from 0-5 m depth varies from 300-800 Ωm along traverses 1 and 2; hence, this layer is rated competent, as it has the ability to support engineering structures. However, along traverse 1, a very low-resistivity layer occurs between VES 5 and 15, with resistivity values ranging from 30-70 Ωm; this layer was rated incompetent based on the competence rating. This study reveals the importance of a geophysical survey as a pre-construction engineering survey at any civil engineering site, since it can reliably evaluate the competence of the subsurface geomaterials.

Keywords: Competence rating, geoelectric, pseudosection, soil, vertical electrical sounding.

PDF Downloads: 509
743 A Novel Method to Manufacture Superhydrophobic and Insulating Polyester Nanofibers via a Meso-Porous Aerogel Powder

Authors: Z. Mazrouei-Sebdani, A. Khoddami, H. Hadadzadeh, M. Zarrebini

Abstract:

In this research, a waterglass-based aerogel powder was prepared by a sol-gel process and ambient pressure drying. Inspired by its limited dust release, the aerogel powder was introduced into the PET electrospinning solution in an attempt to create the bulk and surface structure required for the nanofibers to improve their hydrophobic and insulation properties. The samples were evaluated by measuring density, porosity, contact angle and heat transfer, and by FTIR, BET and SEM analyses. According to the results, a porous silica aerogel powder was fabricated with a mean pore diameter of 24 nm and a contact angle of 145.9°. The results indicated the usefulness of the aerogel powder confined in the nanofibers for controlling surface roughness to produce superhydrophobic nanowebs with a water contact angle of 147°. This can be attributed to a multi-scale surface roughness, created by the nanoweb structure itself and by the surface irregularity of the nanofibers in the presence of the aerogel, while a fluorocarbon layer provided low surface energy. The wettability of a solid substrate is an important property that is controlled by both the chemical composition and the geometry of the surface. In addition, a decreasing trend in heat transfer was observed, from 22% for the nanofibers without any aerogel powder to 8% for the nanofibers with 4% aerogel powder. The development of thermal insulating materials has become increasingly important in view of fossil energy depletion and global warming, which call for more demanding energy-saving practices.

Keywords: Superhydrophobicity, Insulation, Sol-gel, Surface energy, Roughness.

PDF Downloads: 2927
742 Received Signal Strength Indicator Based Localization of Bluetooth Devices Using Trilateration: An Improved Method for the Visually Impaired People

Authors: Muhammad Irfan Aziz, Thomas Owens, Uzair Khaleeq uz Zaman

Abstract:

The instantaneous and spatial localization of visually impaired people in dynamically changing environments, with unexpected hazards and obstacles, is the most demanding and challenging issue faced by navigation systems today. Since Bluetooth cannot utilize techniques such as Time Difference of Arrival (TDOA) and Time of Arrival (TOA), it uses the Received Signal Strength Indicator (RSSI) to measure the Received Signal Strength (RSS). Measurements using RSSI can be improved significantly by improving the existing RSSI-related methodologies. Therefore, this paper proposes an improved trilateration-based method for the localization of Bluetooth devices for visually impaired people. To validate the method, Class 2 Bluetooth devices were used along with purpose-developed software. Experiments were then conducted to obtain surface plots that showed signal interference and other environmental effects. The results show the surface plots for all Bluetooth modules used, with strong and weak points depicted by color codes in red, yellow and blue. It was concluded that the suggested improved method of measuring RSS using trilateration helped not only to measure signal strength effectively but also to highlight how the signal strength can be influenced by environmental conditions such as noise, reflections, etc.
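As an illustration of the RSSI-based trilateration idea described above, here is a minimal sketch in Python; the path-loss parameters and anchor positions are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical path-loss parameters (not from the paper): RSSI at 1 m and path-loss exponent.
TX_POWER_DBM = -59.0   # assumed RSSI measured at 1 m from the transmitter
PATH_LOSS_EXP = 2.2    # assumed indoor path-loss exponent

def rssi_to_distance(rssi_dbm):
    """Log-distance path-loss model: d = 10 ** ((P_1m - RSSI) / (10 * n))."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXP))

def trilaterate(anchors, distances):
    """Least-squares trilateration from >= 3 anchor positions and range estimates.

    Linearizes (x - xi)^2 + (y - yi)^2 = di^2 by subtracting the last equation.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x_n, y_n = anchors[-1]
    A = 2 * (anchors[:-1] - anchors[-1])
    b = (d[-1] ** 2 - d[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - (x_n ** 2 + y_n ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: three Bluetooth anchors at known positions and their measured RSSI values.
anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
rssi = [-65.0, -72.0, -70.0]
ranges = [rssi_to_distance(r) for r in rssi]
print("Estimated position:", trilaterate(anchors, ranges))
```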

Keywords: Bluetooth, indoor/outdoor localization, received signal strength indicator, visually impaired.

PDF Downloads: 736
741 Performance Assessment of Multi-Level Ensemble for Multi-Class Problems

Authors: Rodolfo Lorbieski, Silvia Modesto Nassar

Abstract:

Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition and medical diagnostics. The objective of this article is to analyze an adapted Stacking method for multi-class problems, which combines ensembles within the ensemble itself. For this purpose, a training procedure similar to Stacking was used, but with three levels, where the final decision-maker (level 2) is trained by combining the outputs of the tree-based pair of meta-classifiers (level 1) from Bayesian families, which are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles forming the level-2 meta-classifier. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time, for three factors: (a) dataset, (b) experiment and (c) level. To compare the factors, a three-way ANOVA test was executed for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only for time. Accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as with the dataset factor. It was concluded that level 2 had an average performance above the other levels and that the proposed method is especially efficient for multi-class problems when compared to binary problems.
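A minimal sketch of the three-level stacking idea (level-0 base pairs, level-1 meta-classifiers, level-2 final decision-maker) using scikit-learn is given below; the particular estimators and dataset are stand-ins, not the authors' exact configuration.

```python
# A minimal sketch (not the authors' exact setup) of a three-level stacking ensemble:
# level-0 base classifiers feed level-1 meta-classifiers, whose outputs are combined
# by a level-2 final decision-maker.
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)  # stand-in multi-class dataset

# Level 0 -> Level 1: each level-1 meta-classifier stacks a pair of base classifiers.
level1_a = StackingClassifier(
    estimators=[("dt1", DecisionTreeClassifier(max_depth=3)),
                ("dt2", DecisionTreeClassifier(max_depth=None))],
    final_estimator=GaussianNB())
level1_b = StackingClassifier(
    estimators=[("nb1", GaussianNB()),
                ("nb2", GaussianNB(var_smoothing=1e-6))],
    final_estimator=GaussianNB())

# Level 1 -> Level 2: the final decision-maker combines the two meta-classifiers.
level2 = StackingClassifier(
    estimators=[("meta_a", level1_a), ("meta_b", level1_b)],
    final_estimator=LogisticRegression(max_iter=1000))

print("CV accuracy:", cross_val_score(level2, X, y, cv=5).mean())
```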

Keywords: Stacking, multi-layers, ensemble, multi-class.

PDF Downloads: 1056
740 Fiber Braggs Grating Sensor Based Instrumentation to Evaluate Postural Balance and Stability on an Unstable Platform

Authors: Chethana K., Guru Prasad A. S., Vikranth H. N., Varun H., Omkar S. N., Asokan S.

Abstract:

This paper describes a novel application of Fiber Bragg Grating (FBG) sensors in the assessment of human postural stability and balance on an unstable platform. In this work, an FBG sensor Stability Analyzing Device (FBGSAD) is developed for the measurement of plantar strain to assess the postural stability of subjects on unstable platforms during different stances, in eyes-open and eyes-closed conditions, on a rocker board. The studies are validated by comparing the Centre of Gravity (CG) variations measured on the lumbar vertebra of the subjects using a commercial accelerometer. The results obtained from the developed FBGSAD show qualitative similarities with the data recorded by the commercial accelerometer. The advantage of the FBGSAD is that it simultaneously measures the plantar strain distribution and the postural stability of the subject, along with inherent benefits such as no requirement for an energizing voltage at the sensor, electromagnetic immunity and a simple design, which suit its applicability in biomechanical applications. The developed FBGSAD can serve as a tool/yardstick to mitigate space motion sickness, identify individuals who are susceptible to falls, and qualify subjects for balance and stability, which are important factors in the selection of certain unique professionals such as aircraft pilots, astronauts, cosmonauts, etc.

Keywords: Biomechanics, Fiber Bragg Gratings, Plantar Strain Measurement, Postural Stability Analysis.

PDF Downloads: 2801
739 A Software Framework for Predicting Oil-Palm Yield from Climate Data

Authors: Mohd. Noor Md. Sap, A. Majid Awan

Abstract:

Intelligent systems based on machine learning techniques, such as classification and clustering, are gaining widespread popularity in real-world applications. This paper presents work on developing a software system for predicting crop yield, for example oil-palm yield, from climate and plantation data. At the core of our system is a method for unsupervised partitioning of data for finding spatio-temporal patterns in climate data using kernel methods, which offer strength in dealing with complex data. This work draws inspiration from the notion that a non-linear transformation of the data into some high-dimensional feature space increases the possibility of linear separability of the patterns in the transformed space, and therefore simplifies the exploration of the associated structure in the data. Kernel methods implicitly perform a non-linear mapping of the input data into a high-dimensional feature space by replacing the inner products with an appropriate positive definite function. In this paper we present a robust weighted kernel k-means algorithm incorporating spatial constraints for clustering the data. The proposed algorithm can effectively handle noise, outliers and auto-correlation in the spatial data, allowing effective and efficient data analysis by exploring patterns and structures in the data, and thus can be used for predicting oil-palm yield by analyzing the various factors affecting the yield.
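The clustering step can be sketched as follows. This is a plain-NumPy weighted kernel k-means on a generic RBF kernel and toy data; it omits the paper's spatial-constraint term, so it illustrates the base technique rather than the authors' full algorithm.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def weighted_kernel_kmeans(K, k, weights, n_iter=50, seed=0):
    """Weighted kernel k-means on a precomputed kernel matrix K (no spatial term)."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(0, k, size=n)
    w = np.asarray(weights, dtype=float)
    for _ in range(n_iter):
        dist = np.empty((n, k))
        for c in range(k):
            mask = labels == c
            if not mask.any():               # re-seed an empty cluster
                mask = np.zeros(n, bool)
                mask[rng.integers(0, n)] = True
            wc = w[mask]
            sc = wc.sum()
            # ||phi(x_i) - m_c||^2 expressed purely through kernel entries
            second = K[:, mask] @ wc / sc
            third = wc @ K[np.ix_(mask, mask)] @ wc / sc ** 2
            dist[:, c] = np.diag(K) - 2 * second + third
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Toy usage: two blobs; weights could down-weight noisy or outlying observations.
X = np.vstack([np.random.randn(30, 2), np.random.randn(30, 2) + 4])
labels = weighted_kernel_kmeans(rbf_kernel(X), k=2, weights=np.ones(len(X)))
print(np.bincount(labels))
```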

Keywords: Pattern analysis, clustering, kernel methods, spatial data, crop yield

PDF Downloads: 1939
738 A Review of the Characteristics and Optimization of Optical Properties of Zirconia Ceramics for Aesthetic Dental Restorations

Authors: R. A. Shahmiri, O. C. Standard, J. N. Hart, C. C. Sorrell

Abstract:

The ceramic yttria-stabilized tetragonal zirconia polycrystal (Y-TZP) has been used as a dental biomaterial for several decades. The strength and toughness of this material can be accounted for by its toughening mechanisms, which include transformation toughening, crack deflection, zone shielding, contact shielding, and crack bridging. Prevention of crack propagation is of critical importance in high-fatigue situations, such as those encountered in mastication and para-function. However, the poor translucence of Y-TZP in polycrystalline form is such that it may not meet the aesthetic requirements, owing to its white/grey appearance. To improve the optical properties of Y-TZP, more detailed study of the optical properties is required; in particular, precise evaluation of the refractive index, absorption coefficient, and scattering coefficient is necessary. The measurement of the optical parameters has been based on the assumption that light scattered from biological media is isotropically distributed over all angles. In fact, the optical behavior of real biological materials depends on the angular scattering of light due to the anisotropic nature of the materials. The purpose of the present work is to evaluate the optical properties (including color, opacity/translucence, scattering, and fluorescence) of zirconia dental ceramics and their control through modification of the chemical composition, phase composition, and surface microstructure.

Keywords: Optical properties, opacity/translucence, scattering, fluorescence, chemical composition, phase composition, surface microstructure.

PDF Downloads: 1461
737 A Framework for Improving Trade Contractors’ Productivity Tracking Methods

Authors: Sophia Hayes, Kenny L. Liang, Sahil Sharma, Austin Shema, Mahmoud Bader, Mohamed Elbarkouky

Abstract:

Despite being one of the most significant economic contributors of the country, Canada’s construction industry is lagging behind other sectors when it comes to labor productivity improvements. The construction industry is highly collaborative, as a general contractor will hire trade contractors to perform most of a project’s work, meaning that low productivity from one contractor can have a domino effect on the shared success of a project. To address this issue and encourage trade contractors to improve their productivity tracking methods, an investigative study was done on the productivity views and tracking methods of various trade contractors. Additionally, an in-depth review was done of four standard tracking methods used in the construction industry: cost codes, benchmarking, the job productivity measurement (JPM) standard, and WorkFace Planning (WFP). The four tracking methods were used as a baseline for comparing the trade contractors’ responses, determining gaps within their current tracking methods, and making improvement recommendations. Fifteen interviews were conducted with different trades to analyze how contractors value productivity. The results of these analyses indicated that there are gaps within the construction industry when it comes to understanding the purpose and value of productivity tracking. The trade contractors also shared their current productivity tracking systems, which were then compared to the four standard tracking methods used in the construction industry. Gaps were identified in their various tracking methods and, using a framework, recommendations were made, based on the type of trade, on how to improve their productivity tracking.

Keywords: Trade contractors’ productivity, productivity tracking, cost codes, benchmarking, job productivity measurement (JPM), WorkFace Planning (WFP).

PDF Downloads: 840
736 Mathematical Description of Functional Motion and Application as a Feeding Mode for General Purpose Assistive Robots

Authors: Martin Leroux, Sylvain Brisebois

Abstract:

Eating a meal is among the Activities of Daily Living, but it takes a lot of time and effort for people with physical or functional limitations. Dedicated technologies are cumbersome and not portable, while general-purpose assistive robots such as wheelchair-based manipulators are too hard to control for elaborate continuous motions like eating. Eating with such devices has not previously been automated, since no description of a feeding motion for uncontrolled environments existed. In this paper, we introduce a feeding mode for assistive manipulators, including a mathematical description of trajectories for motions that are difficult to perform manually, such as gathering and scooping food at a defined/desired pace. We implement these trajectories in a sequence of movements for a semi-automated feeding mode which can be controlled with a very simple 3-button interface, allowing the user to retain control over the feeding pace. Finally, we demonstrate the feeding mode with a JACO robotic arm and compare the eating speed, measured in bites per minute, of three eating methods: a healthy person eating unaided, a person with upper limb limitations or disability using JACO with manual control, and a person with limitations using JACO with the feeding mode. We found that the feeding mode allows eating about 5 bites per minute, which should be sufficient to eat a meal in under 30 minutes.

Keywords: Assistive robotics, Automated feeding, Elderly care, Trajectory design, Human-Robot Interaction.

PDF Downloads: 1073
735 Investigating the Demand for Short-shelf Life Food Products for SME Wholesalers

Authors: Yamini Raju, Parminder S. Kang, Adam Moroz, Ross Clement, Ashley Hopwell, Alistair Duffy

Abstract:

Accurate forecasting of fresh produce demand is one of the challenges faced by Small and Medium Enterprise (SME) wholesalers. This paper is an attempt to understand the causes of the high level of variability, such as weather, holidays, etc., in the demand faced by SME wholesalers; understanding the significance of such unidentified factors may improve forecasting accuracy. The paper presents the current literature on the factors used to predict demand and the existing forecasting techniques for short-shelf-life products. It then investigates a variety of possible internal and external factors, some of which are not used by other researchers in the demand prediction process. The results presented in this paper are further analysed using a number of techniques to minimize noise in the data. For the analysis, past sales data (January 2009 to May 2014) from a UK-based SME wholesaler are used, and the results presented are limited to the product ‘Milk’, focused on cafés in Derby. Correlation analysis is performed to check the dependence of the actual demand on the variability factors. PCA analysis is then performed to understand the significance of the factors identified using correlation. The PCA results suggest that cloud cover, weather summary and temperature are the most significant factors that can be used in forecasting the demand. The correlation of the above three factors increases for monthly demand and becomes more stable compared to the weekly and daily demand.
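A minimal sketch of the correlation-then-PCA screening described above, using synthetic stand-in data rather than the wholesaler's sales records:

```python
# Synthetic stand-in data; a real analysis would load the wholesaler's daily sales records.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 365
df = pd.DataFrame({
    "temperature": rng.normal(12, 6, n),
    "cloud_cover": rng.uniform(0, 1, n),
    "holiday_flag": rng.integers(0, 2, n),
})
df["demand"] = 200 - 3 * df["temperature"] + 40 * df["cloud_cover"] + rng.normal(0, 10, n)

factors = ["temperature", "cloud_cover", "holiday_flag"]
# 1) Correlation of each candidate factor with the actual demand.
print(df[factors + ["demand"]].corr()["demand"].sort_values())

# 2) PCA on the standardized factors to judge which ones carry most of the variance.
Z = StandardScaler().fit_transform(df[factors])
pca = PCA().fit(Z)
print("Explained variance ratio:", pca.explained_variance_ratio_.round(2))
print(pd.DataFrame(pca.components_, columns=factors).round(2))
```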

Keywords: Demand Forecasting, Deteriorating Products, Food Wholesalers, Principal Component Analysis and Variability Factors.

PDF Downloads: 3334
734 X-Ray Intensity Measurement Using Frequency Output Sensor for Computed Tomography

Authors: R. M. Siddiqui, D. Z. Moghaddam, T. R. Turlapati, S. H. Khan, I. Ul Ahad

Abstract:

The quality of the 2D and 3D cross-sectional images produced by Computed Tomography depends primarily on the degree of precision of primary and secondary X-ray intensity detection. The traditional method of primary intensity detection is prone to errors. Recently, an X-ray intensity measurement system with smart X-ray sensors that can detect the primary X-ray intensity accurately was developed by our group. In this study, a new smart X-ray sensor is developed using the Light-to-Frequency converter TSL230 from Texas Instruments, which has numerous advantages in terms of noiseless data acquisition and transmission. The TSL230 is based on a silicon photodiode, which converts the incoming X-ray radiation into a proportional current signal. A current-to-frequency converter is attached to this photodiode on a single monolithic CMOS integrated circuit, which provides a frequency count proportional to the incoming current signal in the form of a pulse train. The frequency count is delivered to a PICDEM FS USB board with a PIC18F4550 microcontroller mounted on it. With highly compact electronic hardware, this demo board efficiently reads the smart sensor output data. The frequency-output approach overcomes the nonlinear behavior of sensors with analog output; thus, un-attenuated X-ray intensities can be measured precisely and better normalization can be achieved in order to attain high resolution.

Keywords: Computed tomography, detector technology, X-Ray intensity measurement

PDF Downloads: 2573
733 Resilient Manufacturing: Use of Augmented Reality to Advance Training and Operating Practices in Manual Assembly

Authors: L. C. Moreira, M. Kauffman

Abstract:

This paper outlines the results of experimental research on deploying an emerging augmented reality (AR) system for real-time task assistance (or work instructions) in highly customised and high-risk manual operations. The focus is on human operators’ training effectiveness and performance, and the aim is to test whether such technologies can support enhancing knowledge retention levels and the accuracy of task execution to improve health and safety (H&S). An AR-enhanced assembly method is proposed and experimentally tested using a real industrial process as a case study for electric vehicle (EV) battery module assembly. The experimental results revealed that the proposed method improved training practices and performance through increases in knowledge retention levels from 40% to 84%, and in accuracy of task execution from 20% to 71%, when compared to the traditional paper-based method. The results of this research validate and demonstrate how emerging technologies are advancing the choice between manual, hybrid or fully automated processes by promoting XR-assisted processes and the connected worker (a vision for Industry 4.0 and 5.0), and supporting manufacturing in becoming more resilient in times of constant market change.

Keywords: Augmented reality, extended reality, connected worker, XR-assisted operator, manual assembly 4.0, industry 5.0, smart training, battery assembly.

PDF Downloads: 333
732 Investigating Polynomial Interpolation Functions for Zooming Low Resolution Digital Medical Images

Authors: Maninder Pal

Abstract:

Medical digital images usually have low resolution because of the nature of their acquisition. Therefore, this paper focuses on zooming these images to obtain a better level of information, required for the purpose of medical diagnosis. For this purpose, a strategy for selecting pixels in the zooming operation is proposed. It is based on the principle of an analog clock and utilizes a combination of point and neighborhood image processing. In this approach, the hour hand of the clock covers the portion of the image to be processed. For alignment, the center of the clock points at the middle pixel of the selected portion of the image. The minute hand is longer and is used to gain information about the pixels of the surrounding area, called the neighborhood pixels region. This information is used to zoom the selected portion of the image. The proposed algorithm is implemented and its performance is evaluated for many medical images obtained from various sources such as X-ray, Computerized Tomography (CT) scans and Magnetic Resonance Imaging (MRI). However, for illustration and simplicity, the results obtained from a CT-scanned image of the head are presented. The performance of the algorithm is evaluated in comparison to various traditional algorithms in terms of peak signal-to-noise ratio (PSNR), maximum error, SSIM index, mutual information and processing time. From the results, the proposed algorithm is found to give better performance than the traditional algorithms.
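For comparison-style experiments like those described above, a traditional interpolation-based zoom and a PSNR score can be computed as in the following sketch (a synthetic stand-in image, not the proposed clock-based algorithm itself):

```python
# A minimal sketch of zooming an image with a standard polynomial interpolation and
# scoring it with PSNR, as used for comparison against traditional algorithms.
import numpy as np
from scipy.ndimage import zoom

def psnr(reference, test, peak=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)  # stand-in for a CT slice

# Downsample, then zoom back 2x with cubic (order=3) polynomial interpolation.
low_res = image[::2, ::2]
restored = zoom(low_res.astype(float), 2, order=3)

print("PSNR (dB):", round(psnr(image, restored), 2))
```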

Keywords: Zooming, interpolation, medical images, resolution.

PDF Downloads: 1537
731 Predicting Mortality among Acute Burn Patients Using BOBI Score vs. FLAMES Score

Authors: S. Moustafa El Shanawany, I. Labib Salem, F. Mohamed Magdy Badr El Dine, H. Tag El Deen Abd Allah

Abstract:

Thermal injuries remain a global health problem and a common issue encountered in forensic pathology. They are a devastating cause of morbidity and mortality in children and adults, especially in developing countries, causing permanent disfigurement, scarring and grievous hurt. Burns have always been a matter of legal concern in cases of suicidal burns, self-inflicted burns for false accusation, and homicidal attempts. Assessment of burn injuries, as well as rating permanent disabilities and disfigurement following thermal injuries for the benefit of compensation claims, represents a challenging problem. This necessitates the development of reliable scoring systems to yield an expected likelihood of permanent disability or fatal outcome following burn injuries. The study was designed to identify the risk factors of mortality in acute burn patients and to evaluate the applicability of the FLAMES (Fatality by Longevity, APACHE II score, Measured Extent of burn, and Sex) and BOBI (Belgian Outcome in Burn Injury) model scores in predicting the outcome. The study was conducted on 100 adult patients with acute burn injuries admitted to the Burn Unit of Alexandria Main University Hospital, Egypt, from October 2014 to October 2015. Victims were examined after obtaining informed consent, and the data were collected in specially designed sheets including demographic data, burn details and any associated inhalation injury. Each burn patient was assessed using both the BOBI and FLAMES scoring systems. The results show that the mean age of patients was 35.54±12.32 years. Males outnumbered females (55% and 45%, respectively). Most patients were accidentally burnt (95%), whereas suicidal burns accounted for the remaining 5%. Flame burn was recorded in 82% of cases. In addition, 8% of patients sustained burns over more than 60% of the total body surface area (TBSA), 19% of patients needed mechanical ventilation, and 19% of burnt patients died, either from wound sepsis, multi-organ failure or pulmonary embolism. The mean length of hospital stay was 24.91±25.08 days. The mean BOBI score was 1.07±1.27 and the mean FLAMES score was -4.76±2.92. The FLAMES score demonstrated an area under the receiver operating characteristic (ROC) curve of 0.95, which was significantly higher than that of the BOBI score (0.883). A statistically significant association was revealed between both predictive models and the outcome. The study concluded that both scoring systems were beneficial in predicting mortality in acutely burnt patients; however, the FLAMES score could be applied with a higher level of accuracy.
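A minimal sketch of comparing two scores by the area under the ROC curve, as done for FLAMES versus BOBI, using synthetic scores rather than the study's patient data:

```python
# Synthetic illustration only: higher score = higher predicted risk of death.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 100
died = rng.integers(0, 2, n)                      # 1 = death, 0 = survival (synthetic)

# Synthetic risk scores with different discriminating power.
flames = died * rng.normal(2.0, 1.0, n) + rng.normal(0, 1.0, n)
bobi = died * rng.normal(1.2, 1.0, n) + rng.normal(0, 1.0, n)

print("FLAMES AUC:", round(roc_auc_score(died, flames), 3))
print("BOBI   AUC:", round(roc_auc_score(died, bobi), 3))
```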

Keywords: BOBI, Burns, FLAMES, scoring systems, outcome.

PDF Downloads: 1112
730 Pragati Node Popularity (PNP) Approach to Identify Congestion Hot Spots in MPLS

Authors: E. Ramaraj, A. Padmapriya

Abstract:

In large Internet backbones, service providers typically have to explicitly manage the traffic flows in order to optimize the use of network resources. This process is often referred to as Traffic Engineering (TE). Common objectives of traffic engineering include balancing the traffic distribution across the network and avoiding congestion hot spots. Raj P H and SVK Raja designed a Bayesian network approach to identify congestion hot spots in MPLS. In this approach, a Conditional Probability Distribution (CPD) is specified for every node in the network, and the congestion hot spots are identified based on the CPD. The traffic can then be distributed so that no link in the network is either over-utilized or under-utilized. Although the Bayesian network approach has been implemented in operational networks, it has a number of well-known scaling issues. This paper proposes a new approach, which we call the Pragati (meaning Progress) Node Popularity (PNP) approach, to identify the congestion hot spots from the network topology alone. In the Pragati Node Popularity approach, IP routing runs natively over the physical topology rather than depending on the CPD of each node as in the Bayesian network. We first illustrate our approach with a simple network and then present a formal analysis of the Pragati Node Popularity approach. Our PNP approach identifies exactly the same hot spots as the Bayesian approach for any given network, with minimal effort. We further extend the result to the more general case of arbitrary network topologies, even when the network is loopy. A theoretical insight of our result is that the optimal routing is always shortest-path routing with respect to some consideration of hot spots in the network.
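The abstract does not spell out the PNP metric itself, so the following is only a hypothetical illustration of ranking congestion hot spots from topology alone, using betweenness centrality in networkx as a stand-in popularity measure:

```python
# Hypothetical topology-only "node popularity" screen; the actual PNP metric is not
# defined in the abstract, so betweenness centrality is used here purely as a proxy.
import networkx as nx

G = nx.Graph()  # toy backbone topology
G.add_edges_from([("A", "B"), ("B", "C"), ("C", "D"), ("B", "E"),
                  ("E", "F"), ("F", "C"), ("A", "E"), ("D", "F")])

# Betweenness counts how many shortest paths pass through a node: a rough proxy for
# how much traffic the node would carry under shortest-path routing.
popularity = nx.betweenness_centrality(G)
ranked = sorted(popularity.items(), key=lambda kv: kv[1], reverse=True)
print("Likely hot spots (most popular first):", ranked[:3])
```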

Keywords: Conditional Probability Distribution, Congestion hotspots, Operational Networks, Traffic Engineering.

PDF Downloads: 1940
729 An Experimental Study on the Effect of Operating Parameters during the Micro-Electro-Discharge Machining of Ni Based Alloy

Authors: Asma Perveen, M. P. Jahan

Abstract:

Ni alloys cover a wide range of applications, such as the automotive, oil and gas, and aerospace industries. However, these alloys pose challenges for conventional machining technologies. On the other hand, micro-electro-discharge machining (micro-EDM) is a non-conventional machining method that uses controlled spark energy to remove material irrespective of the material's hardness. There has always been great interest from industry in developing optimum methodologies and parameters in order to enhance the productivity of micro-EDM, in terms of reducing machining time and tool wear, for different alloys. Therefore, the aim of this study is to investigate the effects of the micro-EDM process parameters in order to find their optimal values. The input process parameters include voltage, capacitance, and electrode rotational speed, whereas the output parameters considered are machining time, entrance diameter of the hole, overcut, tool wear, and crater size. The surface morphology and elemental characterization are also investigated with the use of SEM and EDX analysis. The experimental results indicate a reduction of machining time with increasing discharge energy. Discharge energy also contributes to the enlargement of the entrance diameter as well as the overcut. In addition, tool wear shows a reduction with increasing discharge energy. Moreover, the crater size is found to increase with increasing discharge energy.

Keywords: Micro EDM, Ni alloy, discharge energy, micro-holes.

PDF Downloads: 1301
728 Impacts of E-Learning on Educational Policy: Policy of Sensitization and Training in E-Learning in Saudi Arabia

Authors: Layla Albdr

Abstract:

Saudi Arabia instituted a policy of sensitizing and training stakeholders for e-learning and witnessed wide adoption in many institutions. However, e-learning there is at an early stage and needs time to develop to mirror the US and UK. The majority of higher education institutions in Saudi Arabia have adopted e-learning as an alternative to traditional methods to advance education. Conversely, effective implementation of the policy of sensitization and training of stakeholders for e-learning has not been attained because of various challenges. The objectives were to determine the challenges and opportunities of the e-learning policy of sensitization and training of stakeholders in Saudi Arabia's higher education, and to examine whether this policy will help promote the implementation of e-learning in institutions. The study employed a descriptive research design based on qualitative analysis. The researcher recruited 295 students and 60 academic staff from four Saudi Arabian universities to participate in the study. An online questionnaire was used to collect the data. The data were then analyzed and reported both quantitatively and qualitatively. The analysis provided an in-depth understanding of the opportunities and challenges of e-learning policy in Saudi Arabian universities. The main internal challenge identified was the lack of educators’ interest in adopting the policy, while external challenges entailed the lack of ICT infrastructure and Internet connectivity. The study recommends encouraging, sensitizing, and training all stakeholders to address these challenges and adopt the policy.

Keywords: e-learning, educational policy, Saudi Arabian higher education, policy of sensitization and training

PDF Downloads: 551
727 Increasing Fishery Economic Added Value through Post Fishing Program: Cold Storage Program

Authors: Indrijuli Magsari Putri, Dicky R. Munaf

Abstract:

The purpose of this paper is to guide the effort to improve the economic added value of Indonesian fishery products through a post-fishing program, namely a cold storage program. Indonesia's fisheries potential has been acknowledged by the world: FAO (2009) stated that Indonesia is among the ten highest producers of fishery products in the world. Based on BPS (Statistics Indonesia) data, the national fisheries production in 2011 reached 5.714 million tons, of which 93.55% came from marine fisheries and 6.45% from open waters. Two-thirds of Indonesian territory consists of waters, which has given enormous benefits to Indonesia, especially fishermen. Improving the economic level of fishermen requires efforts to develop fishery business units. One of these efforts is improving the quality of the products marketed at the regional and international levels. This certainly needs the support of various fishery facilities (from infrastructure to superstructure), one of which is cold storage. Given the many benefits of cold storage as a means of processing fishery resources, the Indonesia Maritime Security Coordinating Board (IMSCB), as one of the maritime institutions for maritime security and safety, has a program to empower coastal communities by encouraging the development of cold storage in middle and lower fishery business units. The development of cold storage facilities able to fulfil their role fully requires the synergistic efforts of various parties.

Keywords: Cold Storage, Fish, Regulation.

PDF Downloads: 2072
726 Streamflow Modeling for a Small Watershed Using Limited Hydrological Data

Authors: S. Chuenchooklin

Abstract:

This research was conducted in the Pua Watershed, which is located in the Upper Nan River Basin in Nan province, Thailand. The Nan River Basin originates in Nan province and comprises many tributary streams that produce inflow to the Sirikit Dam, whose large reservoir has a storage capacity of 9,510 million cubic meters. The common problems of most watersheds were found here, i.e. shortage of water supply for consumption and agricultural use, deteriorating water quality, floods and landslides including debris flow, and unstable riverbanks. The Pua Watershed is one of several small river basins that flow into the Nan River Basin. The watershed covers 404 km², representing 61.5% of Pua District, 18.2% of the Upper Nan Basin, and 1.2% of the whole Nan River Basin, respectively. The Pua River is the main stream, producing year-round streamflow that supplies Pua District and provides inflow to the Upper Nan Basin; its length is approximately 56.3 km, with an average channel slope of 1.9%. A diversion weir, the Pua weir, bounds the plain and mountainous areas: the upstream watershed has a very steep riverbed slope of 2.9% and a drainage area of 149 km², while a mild riverbed slope of 0.2% is found in the 20.3 km river reach downstream of this weir, which is considered a gauged basin. However, the major branch streams of the Pua River, the Nam Kwang and Nam Koon, are ungauged catchments, with drainage areas of 86 and 35 km², respectively. These upstream watersheds produce runoff through the three streams downstream of the Pua, Jao, and Kang weirs, with an average annual runoff of 578 million cubic meters. They were analyzed using both statistical data at the Pua weir and simulated data from the hydrologic modeling system (HEC-HMS), which was applied to the remaining ungauged basins. Since the Kwang and Koon catchments lack hydrological data, including streamflow and rainfall, HEC-HMS with Snyder's synthetic and transposed hydrograph methods was applied to those areas, using hydrological parameters calibrated upstream of the Pua weir, where daily streamflow and rainfall were recorded continuously during 2008-2011. The results showed that the simulated daily streamflow, summed as annual runoff for 2008, 2010, and 2011, fitted the observed annual runoff at the Pua weir using simple linear regression, with satisfactory coefficients of determination (R²) of 0.64, 0.62, and 0.59, respectively. The sensitivity of the simulation results stems from the difficulty of using calibrated parameters, i.e. lag time, peak flow coefficient, initial losses and uniform loss rates, and from missing daily observations. These calibrated parameters were then applied to simulate the other two ungauged catchments and the downstream catchments.
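A minimal sketch of the final calibration check, fitting simulated against observed annual runoff with simple linear regression and reporting R²; the runoff values below are made up, not the study's data:

```python
# Simple linear regression of observed vs. simulated annual runoff (hypothetical values).
import numpy as np
from scipy import stats

observed = np.array([520.0, 610.0, 585.0, 640.0])   # hypothetical annual runoff, MCM
simulated = np.array([505.0, 630.0, 560.0, 655.0])  # hypothetical HEC-HMS output, MCM

slope, intercept, r, p, se = stats.linregress(simulated, observed)
print(f"observed = {slope:.2f} * simulated + {intercept:.1f},  R^2 = {r**2:.2f}")
```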

Keywords: Streamflow, hydrological model, ungauged catchments.

PDF Downloads: 1951
725 Gluten-Free Cookies Enriched with Blueberry Pomace: Optimization of Baking Process

Authors: Aleksandra Mišan, Bojana Šarić, Nataša Nedeljković, Mladenka Pestorić, Pavle Jovanov, Milica Pojić, Jelena Tomić, Bojana Filipčev, Miroslav Hadnađev, Anamarija Mandić

Abstract:

With the aim of improving the nutritional profile and antioxidant capacity of gluten-free cookies, blueberry pomace, a by-product of juice production, was processed into a new food ingredient by drying and grinding and used in a gluten-free cookie formulation. Since the quality of a baked product is highly influenced by the baking conditions, the objective of this work was to optimize the baking time and the thickness of the dough pieces by applying Response Surface Methodology (RSM), in order to obtain the best technological quality of the cookies. The experiments were carried out according to a Central Composite Design (CCD), selecting the dough thickness and baking time as independent variables, while hardness, color parameters (L*, a* and b* values), water activity, diameter and short/long ratio were the response variables. According to the results of the RSM analysis, a baking time of 13.74 min and a dough thickness of 4.08 mm were found to be optimal for a baking temperature of 170°C. As similar optimal parameters were obtained in a previously conducted experiment based on sensory analysis, RSM can be considered a suitable approach for optimizing the baking process.
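A minimal sketch of the RSM idea, fitting a quadratic response surface in dough thickness and baking time and locating the optimum of the fitted surface; the responses are synthetic, not the study's measurements:

```python
# Fit a second-order (quadratic) response surface and grid-search its optimum.
import numpy as np

rng = np.random.default_rng(3)
thickness = rng.uniform(3.0, 6.0, 20)        # mm
time = rng.uniform(10.0, 18.0, 20)           # min
# Synthetic "hardness" response with an optimum near 4 mm / 14 min.
hardness = 5 + (thickness - 4) ** 2 + 0.2 * (time - 14) ** 2 + rng.normal(0, 0.2, 20)

# Quadratic model: b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
X = np.column_stack([np.ones_like(thickness), thickness, time,
                     thickness ** 2, time ** 2, thickness * time])
beta, *_ = np.linalg.lstsq(X, hardness, rcond=None)

t_grid, b_grid = np.meshgrid(np.linspace(3, 6, 61), np.linspace(10, 18, 81))
grid = np.column_stack([np.ones(t_grid.size), t_grid.ravel(), b_grid.ravel(),
                        t_grid.ravel() ** 2, b_grid.ravel() ** 2,
                        t_grid.ravel() * b_grid.ravel()])
pred = grid @ beta
i = np.argmin(pred)   # minimize hardness as an example optimization criterion
print(f"Optimum: thickness ~ {t_grid.ravel()[i]:.2f} mm, time ~ {b_grid.ravel()[i]:.2f} min")
```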

Keywords: Baking process, blueberry pomace, gluten-free cookies, Response Surface Methodology.

PDF Downloads: 2490
724 Determinants of Never Users of Contraception – Results from Pakistan Demographic and Health Survey 2012-13

Authors: Arsalan Jabbar, Wajiha Javed, Nelofer Mehboob, Zahid Memon

Abstract:

Introduction: There are multiple social, individual and cultural factors that influence an individual’s decision to adopt family planning methods, especially among non-users in patriarchal societies like Pakistan. Non-users, if targeted efficiently, can contribute significantly to the country’s contraceptive prevalence rate (CPR). A research study showed that non-users, if convinced to adopt the lactational amenorrhea method, can shift to long-term methods in the future. Research also shows that if non-users are targeted efficiently, a 59% reduction in unintended pregnancies in sub-Saharan Africa and South-Central and South-East Asia is anticipated. Methods: We performed secondary data analysis on the Pakistan Demographic and Health Survey (2012-13) dataset. Use of contraception (never-use/ever-use) was the outcome variable. At the univariate level, the Chi-square/Fisher exact test was used to assess the relationship of baseline covariates with contraceptive use. The variables to be incorporated in the model were then checked for multicollinearity, confounding and interaction, and binary logistic regression (with an urban-rural stratification) was done to find the relationship between contraceptive use and baseline demographic and social variables. Results: The multivariate analyses showed that younger women (≤29 years) were more prone to be never users than those >30 years, and this trend was seen in urban areas (AOR 1.92, CI 1.453-2.536) as well as rural areas (AOR 1.809, CI 1.421-2.303). Looking at regional variation, women from urban Sindh (AOR 1.548, CI 1.142-2.099) and urban Balochistan (AOR 2.403, CI 1.504-3.839) had more never users compared to other urban regions. Women in the rich wealth quintile were more often never users, both in urban and rural localities (urban: AOR 1.106, CI .753-1.624; rural: AOR 1.162, CI .887-1.524), even though these results were not statistically significant. Women idealizing more children (>4) were more often never users than those idealizing fewer children, in both urban (AOR 1.854, CI 1.275-2.697) and rural areas (AOR 2.101, CI 1.514-2.916). Women who never lost a pregnancy were more inclined to be non-users in rural areas (AOR 1.394, CI 1.127-1.723). Women familiar with only traditional methods or no method had more never users in rural areas (AOR 1.717, CI 1.127-1.723), but in urban areas the effect was not significant. Women unaware of a Lady Health Worker’s presence in their area were more often never users, especially in rural areas (AOR 1.276, CI 1.014-1.607). Women who did not visit any care provider were more often never users (urban: AOR 11.738, CI 9.112-15.121; rural: AOR 7.832, CI 6.243-9.826). Discussion/Conclusion: This study concluded that government, policy makers and private sector family planning programs should focus on the untapped pool of never users (younger women from underserved provinces, in higher wealth quintiles, who desire more children). We also need to ensure coverage of catchment areas where there are fewer LHWs and fewer providers, as ignorance of modern methods and never having been visited by an LHW are important determinants of never use. This is all in line with previous literature from similar developing countries.
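A minimal sketch of the binary logistic regression step, reporting adjusted odds ratios with confidence intervals; the data are synthetic stand-ins, not the PDHS dataset:

```python
# Synthetic survey-like data; a real analysis would use the PDHS 2012-13 variables,
# apply survey weights, and stratify by urban/rural residence.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({
    "young": rng.integers(0, 2, n),              # 1 = aged <= 29 (hypothetical coding)
    "ideal_children_gt4": rng.integers(0, 2, n),
    "visited_provider": rng.integers(0, 2, n),
})
logit = -0.5 + 0.6 * df["young"] + 0.7 * df["ideal_children_gt4"] - 2.0 * df["visited_provider"]
df["never_user"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("never_user ~ young + ideal_children_gt4 + visited_provider", data=df).fit(disp=0)
odds_ratios = pd.DataFrame({"AOR": np.exp(model.params),
                            "CI_low": np.exp(model.conf_int()[0]),
                            "CI_high": np.exp(model.conf_int()[1])})
print(odds_ratios.round(2))
```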

Keywords: Contraception, Demographic and Health Survey, Family Planning, Never users.

PDF Downloads: 2124
723 Using SMS Mobile Technology to Assess the Mastery of Subject Content Knowledge of Science and Mathematics Teachers of Secondary Schools in Tanzania

Authors: Joel S. Mtebe, Aron Kondoro, Mussa M. Kissaka, Elia Kibga

Abstract:

Sub-Saharan Africa is described as having the second fastest growing mobile phone penetration in the world, growing faster than in the United States or the European Union. Mobile phones have been used to provide many opportunities to improve people’s lives in the region, such as banking, marketing, entertainment, and paying for various bills such as water, TV, and electricity. However, the potential of mobile phones to enhance teaching and learning has not been explored. This study presents the experience of developing and delivering SMS-based quiz questions used to assess the mastery of subject content knowledge of science and mathematics secondary school teachers in Tanzania. The SMS quizzes were used as a follow-up support mechanism for 500 teachers who participated in a project to upgrade the subject content knowledge of teachers in science and mathematics subjects in Tanzania. Quizzes of 10-15 questions were sent to teachers each week for 8 weeks, and the results were analyzed using SPSS. The results show that teachers who participated in the chemistry and biology subjects performed better than those who participated in the mathematics and physics subjects. Teachers reported some challenges that led to poor performance. This research has several practical implications for those who are implementing or planning to use mobile phones in teaching and learning, especially in rural secondary schools in sub-Saharan Africa.

Keywords: Mobile learning, e-learning, educational technologies, SMS, secondary education, assessment.

PDF Downloads: 2020
722 Breast Cancer Survivability Prediction via Classifier Ensemble

Authors: Mohamed Al-Badrashiny, Abdelghani Bellaachia

Abstract:

This paper presents a classifier ensemble approach for predicting the survivability of breast cancer patients using the latest database version of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute. The system consists of two main components: a feature selection component and a classifier ensemble component. The feature selection component divides the features in the SEER database into four groups and then tries to find the most important features among the four groups that maximize the weighted average F-score of a certain classification algorithm. The ensemble component uses three different classifiers, each of which models a different set of features from SEER through the feature selection module. On top of them, another classifier is used to give the final decision based on the output decisions and confidence scores from each of the underlying classifiers. Different classification algorithms have been examined; the best setup found uses the decision tree, Bayesian network, and Naïve Bayes algorithms for the underlying classifiers and Naïve Bayes for the classifier ensemble step. The system outperforms all published systems to date when evaluated against the exact same data from SEER (period 1973-2002): it gives an 87.39% weighted average F-score, compared to 85.82% and 81.34% for the other published systems. By increasing the data size to cover the whole database (period 1973-2014), the overall weighted average F-score jumps to 92.4% on the held-out unseen test set.
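A minimal sketch of the feature-selection component's idea, choosing the feature group that maximizes the cross-validated weighted-average F-score for a given classifier; the dataset and the feature groups are synthetic stand-ins, not SEER:

```python
# Pick the feature group with the best cross-validated weighted F-score.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=600, n_features=12, n_informative=6, random_state=0)
# Hypothetical column groupings standing in for the four SEER feature groups.
groups = {"demographic": [0, 1, 2], "tumor": [3, 4, 5, 6],
          "treatment": [7, 8], "followup": [9, 10, 11]}

scores = {name: cross_val_score(GaussianNB(), X[:, cols], y, cv=5,
                                scoring="f1_weighted").mean()
          for name, cols in groups.items()}
print({k: round(v, 3) for k, v in scores.items()})
print("Best feature group:", max(scores, key=scores.get))
```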

Keywords: Classifier ensemble, breast cancer survivability, data mining, SEER.

PDF Downloads: 1633
721 Nuclear Medical Image Treatment System Based On FPGA in Real Time

Authors: B. Mahmoud, M.H. Bedoui, R. Raychev, H. Essabbah

Abstract:

We present in this paper an acquisition and treatment system designed for a semi-analog gamma camera. It consists of a nuclear medical Image Acquisition, Treatment and Display (IATD) chain ensuring the acquisition and treatment of the signals (resulting from the gamma camera detection head) and the scintigraphic image construction in real time. The chain is composed of an analog treatment board and a digital treatment board, and interfaces with semi-analog cameras from Sopha Medical Vision (SMVi), taking the SOPHY DS7 as an example. We describe the designed system and the digital treatment algorithms, in which we have improved the performance and the flexibility. In the earlier version of the chain, the digital treatment board was designed around a DSP [2]; in this paper we present the architecture of a new version of the IATD chain in which the treatment algorithms are implemented in a specific reprogrammable FPGA (Field Programmable Gate Array) circuit.

Keywords: Nuclear medical image, scintigraphic image, digital treatment, linearity, spectrometry, FPGA.

PDF Downloads: 1646
720 Pipelined Control-Path Effects on Area and Performance of a Wormhole-Switched Network-on-Chip

Authors: Faizal A. Samman, Thomas Hollstein, Manfred Glesner

Abstract:

This paper presents the design trade-offs and performance impacts of the number of pipeline phases of the control path signals in a wormhole-switched network-on-chip (NoC). The number of pipeline phases of the control path varies between one and two cycles. The control paths consist of the routing request paths for output selection and the arbitration paths for input selection. Data communications between on-chip routers are implemented synchronously and, for quality of service, the inter-router data transports are controlled by using a link-level congestion control to avoid loss of data due to overflow. The trade-off between the area (logic cell area) and the performance (bandwidth gain) of the two proposed NoC router microarchitectures is presented in this paper. The performance evaluation is made by using a traffic scenario with different numbers of workloads under a 2D mesh NoC topology using a static routing algorithm. By using a 130-nm CMOS standard-cell technology, our NoC routers can be clocked at 1 GHz, resulting in a high-speed network link and a high router bandwidth capacity of about 320 Gbit/s. Based on our experiments, the number of control path pipeline stages has a more significant impact on the NoC performance than on the logic area of the NoC router.

Keywords: Network-on-Chip, Synchronous Parallel Pipeline, Router Architecture, Wormhole Switching

PDF Downloads: 1445
719 Object Negotiation Mechanism for an Intelligent Environment Using Event Agents

Authors: Chiung-Hui Chen

Abstract:

With advancements in science and technology, the concept of the Internet of Things (IoT) has gradually developed. The development of the intelligent environment adds intelligence to objects in the living space by using the IoT. In a smart environment where multiple users share the living space, different service requirements from different users can lead the context-aware system into conflicting situations when making decisions about providing services. Therefore, the purpose of establishing a communication and negotiation mechanism among objects in the intelligent environment is to resolve such service conflicts among users. This study proposes a decision-making methodology that uses “Event Agents” as its core. When the sensor system receives information, it evaluates a user’s current events and conditions; analyses object, location, time, and environmental information; calculates the priority of the object; and provides the user services based on the event. Moreover, when an event is not single but overlaps with another, conflicts arise. This study adopts a “Multiple Events Correlation Matrix” in order to calculate the degree values of incidents and the support values for each object. The matrix uses these values as the basis for making inferences about the system service and for further determining appropriate services when there is a conflict.

Keywords: Internet of things, intelligent object, event agents, negotiation mechanism, degree of similarity.

PDF Downloads: 1161
718 Communication Styles of Business Students: A Comparison of Four National Cultures

Authors: Tiina Brandt, Isaac Wanasika

Abstract:

Culturally diverse global companies need to understand cultural differences between leaders and employees from different backgrounds. Communication is culturally contingent and has a significant impact on the effective execution of leadership goals. Awareness of cultural variations related to communication and interaction will help leaders modify their own behavior and consequently improve the execution of goals and avoid unnecessary faux pas. Our focus is on young adults who have experienced cultural integration, culturally diverse surroundings in schools and universities, and cultural travel. Our central research problem is to understand the impact of different national cultures on communication. We focus on four countries with distinct national cultures and spatial distribution: Finland, Indonesia, Russia and the USA. Our sample is based on business students (n = 225) from various backgrounds in the four countries. Their responses on communication and leadership styles were analyzed using ANOVA and post-hoc tests. The results indicate that culture has an impact on communication behavior. Even young, culturally exposed adults with cultural awareness and experience demonstrate cultural differences in their behavior. Apparently, culture is a deep-seated trait that cannot be completely neutralized by environmental variables. Our study offers valuable input for leadership training programs and for expatriates in recognizing specific differences in leaders’ behavior due to culture.
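A minimal sketch of the ANOVA-plus-post-hoc analysis on synthetic scores; the study used a larger factorial design, so this one-way example is only illustrative:

```python
# One-way ANOVA across the four country groups followed by a Tukey HSD post-hoc test.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(5)
countries = ["Finland", "Indonesia", "Russia", "USA"]
# Hypothetical communication-style scores on a 1-5 scale, ~56 students per country.
scores = {c: np.clip(rng.normal(3.0 + 0.2 * i, 0.6, 56), 1, 5)
          for i, c in enumerate(countries)}

f_stat, p_value = stats.f_oneway(*scores.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate(list(scores.values()))
labels = np.repeat(countries, [len(v) for v in scores.values()])
print(pairwise_tukeyhsd(values, labels))
```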

Keywords: Culture, communication, Finland, Indonesia, Russia, USA.

PDF Downloads: 596
717 Indoor and Outdoor Concentration of Particulate Matter at Domestic Homes

Authors: B. Karakas, S. Lakestani, C. Guler, B. Guciz Dogan, S. Acar Vaizoglu, A. Taner, B. Sekerel, R. Tıpırdamaz, G. Gullu

Abstract:

Particulate matter (PM) in ambient air is responsible for adverse health effects in adults and children. Relatively little is known about the concentrations, sources and health effects of PM in indoor air. A monitoring study consisting of three campaigns was conducted in Ankara in order to measure PM levels in indoor and outdoor environments, and to identify and quantify associations between sources and concentrations. Approximately 82 homes (42 in the 1st campaign, 12 in the 2nd, and 28 in the 3rd), three rooms per home (living room, baby's room, and living room used as a baby's room) and the outdoor ambient air at each home were sampled with a Grimm Environmental Dust Monitor (EDM) 107, during different seasonal periods of 2011 and 2012. In this study, the relationship between indoor and outdoor PM levels was investigated for particulate matter smaller than 10 µm (PM10), smaller than 2.5 µm (PM2.5) and smaller than 1.0 µm (PM1.0). The mean concentrations of PM10, PM2.5, and PM1.0 in the living room used as a baby's room were higher than in the living room and the baby's room (or bedroom) for all three sampling campaigns. It is concluded that household activities and environmental conditions were very important for PM concentrations in the indoor environments during the sampling periods. The number of smokers, and being near a main street and/or construction activities, increased the PM concentration. This study is based on the assessment of the relationship between indoor and outdoor PM levels and the household activities and environmental conditions.

Keywords: Indoor air quality, particulate matter (PM), PM10, PM2.5, PM1.0.

PDF Downloads: 3194
716 A Dynamic Mechanical Thermal T-Peel Test Approach to Characterize Interfacial Behavior of Polymeric Textile Composites

Authors: J. R. Büttler, T. Pham

Abstract:

Basic understanding of interfacial mechanisms is of importance for the development of polymer composites. For this purpose, we need techniques to analyze the quality of interphases, their chemical and physical interactions, and their strength and fracture resistance. In order to investigate the interfacial phenomena in detail, advanced characterization techniques are favorable. Dynamic mechanical thermal analysis (DMTA) using a rheological system is a sensitive tool. T-peel tests were performed with this system to investigate the temperature-dependent peel behavior of woven textile composites. A model system was made of polyamide (PA) woven fabric laminated with films of polypropylene (PP) or PP modified by grafting with maleic anhydride (PP-g-MAH). Firstly, control measurements were performed with the PP matrices alone. Polymer melt investigations, as well as the extensional stress, extensional viscosity and extensional relaxation modulus at -10°C, 100°C and 170°C, demonstrate similar viscoelastic behavior for films made of PP-g-MAH and the non-modified PP control. Frequency sweeps have shown that PP-g-MAH has a zero-phase viscosity of around 1600 Pa·s and the PP control has a similar zero-phase viscosity of 1345 Pa·s. Also, the gelation points are similar, at 2.42×10⁴ Pa (118 rad/s) and 2.81×10⁴ Pa (161 rad/s) for the PP control and PP-g-MAH, respectively. Secondly, the textile composite was analyzed. The extensional stress of PA66 fabric laminated with either the PP control or PP-g-MAH was investigated at -10°C, 25°C and 170°C for strain rates of 0.001-1 s⁻¹. The laminates containing the modified PP require more stress for T-peeling. However, the strengthening effect due to the modification decreases with increasing temperature, and at 170°C, just above the melting temperature of the matrix, the difference disappears. Independent of the matrix used in the textile composite, the extensional stress decreases with increasing temperature. It appears that the more viscous the matrix, the weaker the laminar adhesion. Possibly, the measurement is influenced by the fact that the laminate becomes stiffer at lower temperatures. Adhesive lap-shear testing at room temperature supports the findings obtained with the T-peel test. Additional analysis of the textile composite at the microscopic level ensures that the fibers are well embedded in the matrix. Atomic force microscopy (AFM) imaging of a cross section of the composite shows no gaps between the fibers and the matrix. Measurements of the water contact angle show that the MAH-grafted PP is more polar than the virgin PP, which suggests a more favorable chemical interaction of PP-g-MAH with PA compared to the non-modified PP. In fact, this study indicates that T-peel testing by DMTA is a technique for achieving more insight into polymeric textile composites.

Keywords: Dynamic mechanical thermal analysis, interphase, polyamide, polypropylene, textile composite, T-peel test.

PDF Downloads: 682