Search results for: Minimum distance jet impingement

178 The Design of a Vehicle Traffic Flow Prediction Model for a Gauteng Freeway Based on an Ensemble of Multi-Layer Perceptron

Authors: Tebogo Emma Makaba, Barnabas Ndlovu Gatsheni

Abstract:

The cities of Johannesburg and Pretoria, both located in the Gauteng province, are separated by a distance of 58 km. The traffic queues on the Ben Schoeman freeway, which connects these two cities, can stretch for almost 1.5 km. Vehicle traffic congestion impacts negatively on business and on commuters' quality of life. The goal of this paper is to identify variables that influence the flow of traffic and to design a vehicle traffic prediction model that predicts the traffic flow pattern in advance. The model will enable motorists to make appropriate travel decisions ahead of time. The data used were collected by Mikro's Traffic Monitoring (MTM). A Multi-Layer Perceptron (MLP) was used on its own to construct the model, and the MLP was also combined with the Bagging ensemble method to train on the data. The cross-validation method was used for evaluating the models. The results obtained from the techniques were compared using predictive performance and prediction costs. The cost was computed using a combination of the loss matrix and the confusion matrix. The prediction models designed show that the status of the traffic flow on the freeway can be predicted using the following parameters: travel time, average speed, traffic volume and day of month. The implication of this work is that commuters will be able to spend less time travelling on the route and more time with their families. The logistics industry will save more than twice what it is currently spending.
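As a hedged illustration of the modeling approach described above (not the authors' exact pipeline; the features, labels and hyperparameters below are placeholders), a bagging ensemble of multi-layer perceptrons evaluated with cross-validation can be sketched in scikit-learn:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: travel time, average speed, traffic volume,
# day of month; labels encode a traffic-flow status class (placeholders).
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = (X[:, 1] < 0.5).astype(int)

mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000))
# 'estimator=' requires scikit-learn >= 1.2 (older releases: base_estimator=).
bagged = BaggingClassifier(estimator=mlp, n_estimators=10, random_state=0)

# Cross-validation, as used in the paper to evaluate the models.
print("single MLP:", cross_val_score(mlp, X, y, cv=5).mean())
print("bagged MLP:", cross_val_score(bagged, X, y, cv=5).mean())
```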

Keywords: Bagging ensemble methods, confusion matrix, multi-layer perceptron, vehicle traffic flow.

177 Fully Automated Methods for the Detection and Segmentation of Mitochondria in Microscopy Images

Authors: Blessing Ojeme, Frederick Quinn, Russell Karls, Shannon Quinn

Abstract:

The detection and segmentation of mitochondria from fluorescence microscopy images are crucial for understanding the complex structure of the nervous system. However, the constant fission and fusion of mitochondria and image distortion in the background make the detection and segmentation task challenging. Although a number of open-source software tools and artificial intelligence (AI) methods exist for analyzing mitochondrial images, the scarcity of the combined medical and AI expertise required to utilize these tools poses a challenge to their full adoption and use in clinical settings. Motivated by the advantages of automated methods in terms of good performance, minimal detection time, ease of implementation, and cross-platform compatibility, this study proposes a fully automated framework for the detection and segmentation of mitochondria using both image shape information and descriptive statistics. Using the low-cost, open-source Python and OpenCV library, the algorithms are implemented in three stages: pre-processing, image binarization, and coarse-to-fine segmentation. The proposed model is validated using the fluorescence mitochondrial dataset. Ground truth labels generated using Labkit were also used to evaluate the performance of our detection and segmentation model using precision, recall and Rand index. The study produces good detection and segmentation results and reports the challenges encountered during the image analysis of mitochondrial morphology from the fluorescence mitochondrial dataset. A discussion of the methods and future perspectives of fully automated frameworks concludes the paper.
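A minimal sketch of the kind of pre-processing and binarization stages the abstract names (CLAHE contrast enhancement followed by thresholding and contour extraction); the file name and parameter values are assumptions, not the study's settings:

```python
import cv2

# Stage 1: pre-processing - CLAHE contrast enhancement on a grayscale frame.
img = cv2.imread("mito_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# Stage 2: binarization - Otsu threshold separates foreground structures.
blur = cv2.GaussianBlur(enhanced, (5, 5), 0)
_, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Stage 3 (coarse): candidate detections as external contours, with an
# area filter standing in for the paper's shape/statistics criteria.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
candidates = [c for c in contours if cv2.contourArea(c) > 20]
print(f"{len(candidates)} candidate mitochondria")
```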

Keywords: 2D, Binarization, CLAHE, detection, fluorescence microscopy, mitochondria, segmentation.

176 Morphological and Electrical Characterization of Polyacrylonitrile Nanofibers Synthesized Using Electrospinning Method for Electrical Application

Authors: Divyanka Sontakke, Arpit Thakre, D. K Shinde, Sujata Parmeshwaran

Abstract:

Electrospinning is the most widely utilized method to create nanofibers because of its simple setup, the capacity to mass-produce continuous nanofibers from various polymers, and the ability to produce ultrathin fibers with controllable diameters. Smooth and well-aligned ultrafine polyacrylonitrile (PAN) nanofibers with diameters ranging from the submicron to the nanometer scale were produced using the electrospinning technique. PAN powder was used as a precursor to prepare the solution utilized in this process. When the electrostatic repulsion overcame the surface tension, a charged jet of polymer solution was ejected from the tip of the spinneret, and thus ultrathin nonwoven fibers were created. The effects of electrospinning parameters such as applied voltage, feed rate, concentration of polymer solution and tip-to-collector distance on the morphology of electrospun PAN nanofibers were investigated. The nanofibers were heat treated for carbonization to examine the changes in properties and composition relevant to electrical applications. Scanning Electron Microscopy (SEM) was performed before and after carbonization for morphological characterization. The SEM images showed uniform fiber diameters and no bead formation. The average diameter of the PAN fibers was observed to be 365 nm and 280 nm for the flat plate and rotating drum collectors, respectively. The four-probe method was utilized to inspect the electrical conductivity of the nanofibers, and the electrical conductivity improved significantly with increasing oxidation temperature.
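For reference, the standard thin-film four-point probe relation (a textbook formula, not necessarily the exact geometry correction used in this work): for a film of thickness $t$ much smaller than the probe spacing, the resistivity follows from the measured voltage-to-current ratio as

$$\rho = \frac{\pi t}{\ln 2}\,\frac{V}{I} \approx 4.532\, t\, \frac{V}{I}, \qquad \sigma = \frac{1}{\rho}.$$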

Keywords: Electrospinning, polyacrylonitrile carbon nanofibres, heat treatment, electrical conductivity.

175 Novel Adaptive Channel Equalization Algorithms by Statistical Sampling

Authors: János Levendovszky, András Oláh

Abstract:

In this paper, novel statistical-sampling-based equalization techniques and CNN-based detection are proposed to increase the spectral efficiency of multiuser communication systems over fading channels. Multiuser communication combined with selective fading can result in interference which severely deteriorates the quality of service in wireless data transmission (e.g. CDMA in mobile communication). The paper introduces new equalization methods to combat interference by minimizing the Bit Error Rate (BER) as a function of the equalizer coefficients. This provides higher performance than traditional Minimum Mean Square Error equalization. Since the calculation of BER as a function of the equalizer coefficients is of exponential complexity, statistical sampling methods are proposed to approximate the gradient, which yields fast equalization and performance superior to the traditional algorithms. Efficient estimation of the gradient is achieved by using stratified sampling and the Li-Silvester bounds. A simple mechanism is derived to identify the dominant samples in real time, for the sake of efficient estimation. The equalizer weights are adapted recursively by minimizing the estimated BER. The near-optimal performance of the new algorithms is also demonstrated by extensive simulations. The paper also develops a Cellular Neural Network (CNN) based approach to detection. In this case, fast quadratic optimization is carried out by the CNN, whereas the task of the equalizer is to ensure the required template structure (sparseness) for the CNN. The performance of the method has also been analyzed by simulations.
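To make the idea concrete, here is a heavily simplified sketch (assumed BPSK, a toy three-tap channel, and plain Monte Carlo sampling instead of the paper's stratified sampling with Li-Silvester bounds) of adapting equalizer taps by descending a sampled estimate of the BER:

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, 0.5, 0.2])   # assumed channel impulse response
N_EQ = 5                        # number of equalizer taps
SIGMA = 0.3                     # assumed noise standard deviation

def estimate_ber(w, n_samples=2000):
    """Monte Carlo estimate of BER for equalizer taps w over BPSK symbols."""
    bits = rng.integers(0, 2, n_samples)
    s = 2.0 * bits - 1.0                          # BPSK symbols +/-1
    r = np.convolve(s, h, mode="full")[:n_samples]
    r += SIGMA * rng.standard_normal(n_samples)   # AWGN at the receiver
    y = np.convolve(r, w, mode="full")[:n_samples]
    return np.mean((y > 0).astype(int) != bits)

# Finite-difference descent on the sampled BER surface (gradient is noisy,
# which is exactly what the paper's sampling bounds are designed to tame).
w = np.zeros(N_EQ); w[0] = 1.0
eps, lr = 1e-2, 0.5
for _ in range(100):
    grad = np.zeros_like(w)
    for i in range(N_EQ):
        d = np.zeros_like(w); d[i] = eps
        grad[i] = (estimate_ber(w + d) - estimate_ber(w - d)) / (2 * eps)
    w -= lr * grad
print("final sampled BER:", estimate_ber(w, 20000))
```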

Keywords: Cellular Neural Network, channel equalization, communication over fading channels, multiuser communication, spectral efficiency, statistical sampling.

174 Thermal Treatments and Characteristics Study On Unalloyed Structural (AISI 1140) Steel

Authors: S. S. Sharma, P. R. Prabhu, Rajagopal Chadaga

Abstract:

The main emphasis of metallurgists has been to process materials so as to obtain balanced mechanical properties for a given application. One of the processing routes used to alter the properties is heat treatment. Nearly 90% of structural applications are related to medium-carbon and alloyed steels, which are hence regarded as structural steels. The major requirement for a conventional steel is improved workability, toughness, hardness and grain refinement. In this view, it is proposed to study the mechanical and tribological properties of unalloyed structural (AISI 1140) steel subjected to different thermal (heat) treatments, namely annealing, normalizing, tempering and hardening, compared with the as-received (cold-worked) specimen. All heat treatments are carried out under atmospheric conditions. The hardening treatment improves the hardness of the material, while a marginal decrease in hardness with improved ductility is observed after tempering. Annealing and normalizing improve the ductility of the specimen, with the normalized specimen showing the highest ductility. The hardened specimen shows the highest wear resistance in the initial period of sliding wear, whereas above 25 km of sliding distance the as-received steel outperforms the hardened specimen. Both mild and severe wear regions are observed. Microstructural analysis shows a pearlitic structure in the normalized specimen, a lath martensitic structure in the hardened specimen, and a pearlitic-ferritic structure in the annealed specimen.

Keywords: Annealing, hardness, heat treatment, normalizing, wear.

173 Evaluating Probable Bending of Frames for Near-Field and Far-Field Records

Authors: Majid Saaly, Shahriar Tavousi Tafreshi, Mehdi Nazari Afshar

Abstract:

Most reinforced concrete structures designed only for gravity loads have large transverse reinforcement spacing values and therefore suffer severe failure after intense ground motions. The main goal of this paper is to compare the shear and axial failure of concrete bending frames typical of Tehran using Incremental Dynamic Analysis (IDA) under near- and far-field records. For this purpose, IDA of 5-, 10-, and 15-story concrete structures was carried out under seven far-fault records and five near-fault records. The results show that in two-dimensional models of short-rise, mid-rise and high-rise reinforced concrete frames located on Type-3 soil, increasing the spacing of the transverse reinforcement can increase the maximum inter-story drift ratio values by up to 37%. According to the results for the 5-, 10-, and 15-story reinforced concrete models located on Type-3 soil, records with characteristics such as fling-step and directivity create larger maximum inter-story drift values than far-fault earthquakes. The results indicate that under seismic excitation involving directivity or fling-step, the failure probabilities and the rates at which the failure probability increases are much smaller than the corresponding values for far-fault earthquakes; however, for near-fault records the probability of exceedance occurs at lower seismic intensities compared to far-fault records.
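For context, fragility curves of the kind named in the keywords are commonly fitted with a lognormal model (a standard formulation, assumed here rather than taken from the paper):

$$P(\text{failure} \mid IM = im) = \Phi\!\left(\frac{\ln(im/\theta)}{\beta}\right),$$

where $\Phi$ is the standard normal CDF, $\theta$ is the median capacity in terms of the intensity measure $IM$, and $\beta$ is the lognormal dispersion.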

Keywords: Directivity, fling-step, fragility curve, IDA, inter-story drift ratio.

172 Influence of Compactive Efforts on Cement-Bagasse Ash Treatment of Expansive Black Cotton Soil

Authors: Moses, G., Osinubi, K. J.

Abstract:

A laboratory study of the influence of compactive effort on expansive black cotton soil specimens treated with up to 8% ordinary Portland cement (OPC) admixed with up to 8% bagasse ash (BA) by dry weight of soil, compacted using the energies of the standard Proctor (SP), West African Standard (WAS) or "intermediate", and modified Proctor (MP), was undertaken. The expansive black cotton soil was classified as A-7-6 (16) or CL using the American Association of State Highway and Transportation Officials (AASHTO) and Unified Soil Classification System (USCS), respectively. The 7-day unconfined compressive strength (UCS) values of the natural soil for SP, WAS and MP compactive efforts are 286, 401 and 515 kN/m², respectively, while the peak values of 1019, 1328 and 1420 kN/m² recorded at 8% OPC/6% BA, 8% OPC/2% BA and 6% OPC/4% BA treatments, respectively, were less than the UCS value of 1710 kN/m² conventionally used as the criterion for adequate cement stabilization. The soaked California bearing ratio (CBR) values of the OPC/BA-stabilized soil increased with higher energy level, from 2, 4 and 10% for the natural soil to peak values of 55, 18 and 8% recorded at the 8% OPC/4% BA, 8% OPC/2% BA and 8% OPC/4% BA treatments when the SP, WAS and MP compactive efforts were used, respectively. The durability of specimens was determined by immersion in water. Soil treated with the 8% OPC/4% BA blend gave a resistance to loss in strength of 50%, which is considered acceptable because of the harsh test condition of the 7-day soaking period to which specimens were subjected, instead of the 4-day soaking period for which a minimum resistance to loss in strength of 80% is specified. Finally, an optimal blend of 8% OPC/4% BA is recommended for the treatment of expansive black cotton soil for use as a sub-base material.

Keywords: Bagasse ash, California bearing ratio, Compaction, Durability, Ordinary Portland cement, Unconfined compressive strength.

171 Information Retrieval in Domain Specific Search Engine with Machine Learning Approaches

Authors: Shilpy Sharma

Abstract:

As the web continues to grow exponentially, crawling the entire web on a regular basis becomes less and less feasible; hence domain-specific search engines, which index information on a specific domain, were proposed. As more information becomes available on the World Wide Web, it becomes more difficult to provide effective search tools for information access. Today, people access web information through two main kinds of search interfaces: browsers (clicking and following hyperlinks) and query engines (queries in the form of a set of keywords indicating the topic of interest) [2]. Better support is needed for expressing one's information need and returning high-quality search results by web search tools. There appears to be a need for systems that reason under uncertainty and are flexible enough to recover from the contradictions, inconsistencies, and irregularities that such reasoning involves. In a multi-view problem, the features of the domain can be partitioned into disjoint subsets (views), each of which is sufficient to learn the target concept. Semi-supervised, multi-view algorithms, which reduce the amount of labeled data required for learning, rely on the assumptions that the views are compatible and uncorrelated. This paper describes the use of a semi-supervised machine learning approach with active learning for domain-specific search engines. A domain-specific search engine is "an information access system that allows access to all the information on the web that is relevant to a particular domain". The proposed work shows that, with the help of this approach, relevant data can be extracted with the minimum number of queries fired by the user. It requires a small number of labeled examples and a pool of unlabeled data to which the learning algorithm is applied to extract the required data.
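A hedged sketch of the pool-based active-learning loop the abstract describes (uncertainty sampling with a generic scikit-learn classifier; the dataset, labels and query budget are placeholders, and the paper's multi-view specifics are not reproduced):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((1000, 10))                 # hypothetical document features
y = (X[:, 0] + X[:, 1] > 1).astype(int)    # placeholder relevance labels

labeled = list(rng.choice(len(X), 10, replace=False))  # small labeled seed
pool = [i for i in range(len(X)) if i not in labeled]  # unlabeled pool

clf = LogisticRegression(max_iter=1000)
for _ in range(20):                        # query budget
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])[:, 1]
    i = int(np.argmin(np.abs(proba - 0.5)))  # most uncertain pool item
    labeled.append(pool.pop(i))            # "oracle" labels it (simulated)
print("accuracy on remaining pool:", clf.score(X[pool], y[pool]))
```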

Keywords: Search engines, machine learning, information retrieval, active logic.

170 Exploring the Safety of Sodium Glucose Co-Transporter-2 Inhibitors at the Imperial College London Diabetes Centre, UAE

Authors: Raad Nari, Maura Moriaty, Maha T. Barakat

Abstract:

Introduction: Sodium-glucose co-transporter-2 (SGLT2) inhibitors are a new class of oral anti-diabetic drugs with a unique mechanism of action. They are used to improve glycaemic control in adults with type 2 diabetes by enhancing urinary glucose excretion. In the UAE, there has certainly been increased use of these medications. As with any new medication, there are safety considerations related to their use in patients with type 2 diabetes. A retrospective study was conducted at the three main centres of the Imperial College London Diabetes Centre. Methodology: All patients in the electronic database (Diamond) from October 2014 to October 2017 with a minimum of six months' use of the sodium glucose co-transporter inhibitors canagliflozin, dapagliflozin or empagliflozin were included. Fifteen paired-sample biochemical and clinical correlations were examined, with analyses at the start of the study, at three months and at six months. SPSS version 24 was used for this study. Conclusion: This study of sodium glucose co-transporter-2 inhibitor use showed significant reductions in weight, glycated haemoglobin A1C, and systolic and diastolic blood pressures. As is the case in systematic reviews, there were similar changes in liver enzymes, and raised total cholesterol, low-density lipoprotein and high-density lipoprotein. There was a slight improvement in estimated glomerular filtration rate too. Our analysis also showed an increase in the incidence of urinary tract symptoms and urinary tract infections.

Keywords: SGLT2 inhibitors, dapagliflozin, empagliflozin, canagliflozin, adverse effects, amputation, diabetic ketoacidosis (DKA), urinary tract infection.

169 Influence of Combined Drill Coulters on Seedbed Compaction under Conservation Tillage Technologies

Authors: E. Šarauskis, L. Masilionyte, Z. Kriaučiūniene, K. Romaneckas

Abstract:

All over the world, including the Middle and East European countries, sustainable tillage and sowing technologies are applied increasingly broadly with a view to optimising soil resources, mitigating soil degradation processes, saving energy resources, preserving biological diversity, etc. As a result, altered conditions of the tillage and sowing technological processes are inevitably faced. The purpose of this study is to determine seedbed topsoil hardness when using a combined sowing coulter in different sustainable tillage technologies. The research involved a combined coulter consisting of two dissected blade discs and a shoe coulter. A multipenetrometer was used to determine soil hardness in the seedbed area. Experimental studies found that in loosened soil the combined sowing coulter compacts the furrow bottom, walls and soil near the furrow equally; soil hardness was therefore similar at all investigated depths, and no significant differences were established. In loosened and compacted (double-rolled) soil, the impact of the combined coulter on the hardness of the seedbed soil surface was more considerable at a depth of 2 mm. Soil hardness at the furrow bottom and walls, to a distance of up to 26 mm, was 1.1 MPa. At a depth of 10 mm, the greatest hardness was established at the furrow bottom. In loosened and heavily compacted (rolled six times) soil, at depths of 2 and 10 mm the combined coulter compacted the furrow bottom most of all, to a hardness of 1.8 MPa. At a depth of 20 mm, soil hardness within the whole investigated area varied insignificantly, fluctuating around 2.0 MPa. The hardness of the furrow walls and of the soil near the furrow was approximately 1.0 MPa lower than that at the furrow bottom.

Keywords: Coulter design, seedbed, soil hardness, combined coulters, soil compaction.

168 The Effects of Shot and Grit Blasting Process Parameters on Steel Pipes Coating Adhesion

Authors: Saeed Khorasanizadeh

Abstract:

The adhesion strength of the exterior or interior coating of steel pipes is highly important. Increasing coating adhesion can increase the coating lifetime and the safety factor of a transmission pipeline while decreasing the rate of corrosion and costs. Steel pipe surfaces are prepared for coating by shot and grit blasting, a mechanical surface-preparation method. Parameters affecting this process include abrasive particle size, distance to the surface, abrasive flow rate, abrasive physical properties and shape, abrasive selection, machine type and power, the standard of surface cleanliness, roughness, blasting time and ambient humidity. This study aimed to find conditions which improve surface preparation, adhesion strength and corrosion resistance of the coating. Accordingly, this paper studied the effect of varying the abrasive flow rate, abrasive particle size and blasting time, including over-blasting, on steel surface roughness, using a centrifugal blasting machine. A number of steel samples (according to API 5L X52) were prepared and epoxy powder coating was applied, and the coating adhesion strengths were compared using the pull-off test. The results show that increasing abrasive particle size and flow rate increases steel surface roughness and coating adhesion strength, but increasing the blasting time can over-blast the surface, raising surface temperature and hardness and, in turn, decreasing steel surface roughness and coating adhesion strength.

Keywords: Surface preparation, abrasive particles, adhesion strength.

167 Tom Stoppard: The Amorality of the Artist

Authors: Majeed Mohammed Midhin, Clare Finburgh

Abstract:

Maintaining a healthy, balanced loyalty, whether to art or to society, is a debatable issue. The artist is always on the lookout for the potential tension between those two realms. One of the most painful dilemmas the artist faces is therefore how to function in a society without sacrificing the aesthetic values of his/her work. In other words, the life-long awareness of failure which derives from the concept of the artist as caught between unflattering social realities and the need to invent genuine art forms becomes fertile ground for artists to explore. Within the framework of this dilemma, the question of the responsibility of the artist and the relationship of art to politics is illuminating. To a large extent, in drama, this dilemma is represented by the fictional characters of the play. The present paper tackles the idea of the amorality of the artist in selected plays by Tom Stoppard. Stoppard's awareness of his situation as a refugee led him to keep his distance from politics. He tried hard to avoid any intervention in the realm of political debate, especially in his earliest work. On the one hand, this does not mean that he was not interested in politics as such, but rather that he preferred to question it rather than adopt a fixed ideological position. On the other hand, Stoppard's refusal to intervene in politics can be ascribed to his feeling of gratitude to Britain, where he settled. As a result, Stoppard has frequently been criticized for a lack of political engagement, and for not leaning far enough to the left when he does engage. His reaction to these public criticisms finds expression in his self-conscious statements, which defensively stressed the artifice of his work. Like Oscar Wilde, he thinks that the responsibility of the artist is devoted to the realm of his/her art. Consequently, his consciousness of the role of the artist is clearly reflected in his two plays, Artist Descending a Staircase (1972) and Travesties (1974).

Keywords: Amorality, responsibility, politics, ideology.

166 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracies and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes the measurements are conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, in order to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and related combination rules are applied to combine the mass values among all sensors. Furthermore, we apply the method to estimate the minimum number of sensors needed for combination, so that computational efficiency can be improved. A cumulative sum (CUSUM) test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
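As a simplified illustration of two building blocks named above (the KL distance between distributions and a CUSUM test on a log-ratio statistic), here is a minimal sketch for a Gaussian mean shift; the evidence-theory fusion across sensors is not reproduced, and all parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def kl_gauss(mu0, s0, mu1, s1):
    """KL divergence between two 1-D Gaussians, KL(N0 || N1)."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1) ** 2) / (2 * s1**2) - 0.5

# Pre-change N(0,1), post-change N(1,1); change injected at t = 500.
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(1, 1, 500)])

# CUSUM on the log-likelihood ratio of post- vs pre-change densities.
llr = (x - 0.0) ** 2 / 2 - (x - 1.0) ** 2 / 2     # log f1(x) - log f0(x)
g, h = 0.0, 8.0                                    # statistic and threshold
for t, z in enumerate(llr):
    g = max(0.0, g + z)
    if g > h:
        print("change declared at t =", t)         # mean delay ~ h / KL
        break
print("KL(f1||f0) =", kl_gauss(1, 1, 0, 1))
```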

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data.

165 The Portuguese Framework of the Professional Internship without Public Funds

Authors: Ana Lambelho

Abstract:

In an economic crisis such as the one that shook (and still shakes) Europe, one does not question the importance of measures that encourage the hiring and integration of young people into the labour market. In this context, enterprises tend to reduce the cost of labour and to seek flexible contracting instruments. Professional internships allow innovation and creativity at low cost because, as they are not labour contracts, enterprises do not have to respect the minimum standards related to wages, working time and so on. In Portugal, we observe the widespread existence of training contracts in which the trainee works several hours without salary or is paid below what is legally prescribed for the function and the work period. For this reason, the tripartite agreement of June 2008 between the Government and the social partners, for a new system of regulation of labour relations, employment policies and social protection, foresaw a prohibition on unpaid professional internships and the legal regulation of internships that are mandatory for access to an activity. The first Act on private internship contracts, i.e., internships without public funding, was embodied in Decree-Law No. 66/2011, of 1 June. This work is dedicated to the study of the legal regime of the internship contract in Portugal, analysing the problems brought by the new set of rules and especially those which remain unresolved. In fact, we can conclude that the number of situations covered by the Act is much lower than expected, because of the exclusion of internships that are mandatory for access to a profession when the activity is developed autonomously. Since the majority of activities can be developed either autonomously or in subordination, it is quite easy to fall outside the Act's requirements and, thus, outside the protection that it confers on the intern. To complete this study, we considered not only the mentioned legal Act but also the limited doctrine and case law on the theme.

Keywords: Intern, internship contract, labour law, Portugal.

164 Response Time Behavior Trends of Proportional, Proportional Integral and Proportional Integral Derivative Modes on Lab Scale

Authors: Syed Zohaib Javaid Zaidi, W. Iqbal

Abstract:

Industrial automation is dependent upon pneumatic control systems. Industrial units are now controlled with digital control systems to handle process variables such as temperature, pressure, flow rate and composition.

This research work evaluates the response time fluctuations of the proportional, proportional integral and proportional integral derivative modes of automated chemical process control. The controller output is measured for different values of gain with respect to time in the three modes (P, PI and PID). In the case of P-mode, for different values of gain the controller output shows negligible change. When the controller output of PI-mode is checked at constant gain, it can be seen that decreasing the integral time makes the controller output fluctuate more. The PID-mode results are more interesting in that when the rate (derivative time in minutes) is changed, the controller output also fluctuates with respect to time. The controller outputs for the integral and derivative modes are observed to have lower steady-state error, minimum offset and a larger response time in controlling the process variable. The only tuning parameter in P-mode is the steady-state gain, which produces greater error with respect to the controller output. The integral mode shows controller outputs with intermediate responses as the integral gain (ki) varies. Increasing the rate (derivative time) increases the derivative gain (kd), which yields controlled oscillations and less overshoot in the PID mode.
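A minimal discrete-time sketch of the three modes compared above (a generic textbook PID loop on a toy first-order process; the gains, sampling time and process model are all assumptions, not the lab setup):

```python
# Discrete PID: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt, applied to a toy
# first-order process y' = (-y + u) / tau. Setting Ki = Kd = 0 gives P-mode,
# Kd = 0 gives PI-mode, illustrating the offset/response-time trade-offs.
def simulate(Kp, Ki, Kd, setpoint=1.0, dt=0.05, steps=400, tau=1.0):
    y, integral, prev_e = 0.0, 0.0, setpoint
    history = []
    for _ in range(steps):
        e = setpoint - y
        integral += e * dt
        derivative = (e - prev_e) / dt
        u = Kp * e + Ki * integral + Kd * derivative
        prev_e = e
        y += dt * (-y + u) / tau          # first-order process update
        history.append(y)
    return history

for name, gains in {"P": (2, 0, 0), "PI": (2, 1, 0), "PID": (2, 1, 0.2)}.items():
    final = simulate(*gains)[-1]
    print(f"{name:>3}-mode final value: {final:.3f}")  # P-mode keeps an offset
```

Running it shows P-mode settling at about 0.667 (a steady-state offset of Kp/(1+Kp) for this toy process), while the PI and PID modes drive the output to the setpoint.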

Keywords: Controller output, P, PI and PID modes, steady-state gain.

163 Simplified Empirical Method for Predicting Liquefaction Potential and Its Application to Kaohsiung Areas in Taiwan

Authors: Darn H. Hsiao, Zhu-Yun Zheng

Abstract:

Taiwan is located between the Eurasian and Philippine Sea plates, and earthquakes therefore occur frequently. The coastal plains in western Taiwan are alluvial plains, and the soils of the alluvium come mostly from the Lao-Shan belt in the central mountainous area of southern Taiwan; they derive largely from sandstone/shale and slate. Previous investigations found that the soils in the Kaohsiung area of southern Taiwan are mainly composed of slate, shale, quartz, low-plasticity clay, silt, silty sand and so on. Past earthquakes have also shown that the soil in Kaohsiung is highly susceptible to subsidence due to liquefaction, and insufficient bearing capacity of buildings causes liquefaction disasters. In this study, borehole drilling data from nine districts within the Love River Basin in the city center were used, and factors affecting liquefaction, including the fines content (FC), the standard penetration test N value (SPT N), the thickness of the clay layer near the ground surface, the thickness of potentially liquefiable soil and the groundwater level, were further examined with respect to liquefaction potential. The results show that the liquefaction potential is higher in areas near the riverside, in backfill areas, and in the western part of the study area. This paper also compares the old paleo-geological map and soil particle distribution curves with the LPI map calculated from the analysis results. After all parameters were studied for five subzones of the Love River Basin using the maximum-minimum method, the standard penetration test N value and the thickness of the clay layer were found to be the most influential.
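For reference, the liquefaction potential index (LPI) underlying an LPI map is conventionally computed with Iwasaki's depth-weighted integral (a standard definition, assumed here rather than quoted from the paper):

$$LPI = \int_0^{20} F(z)\, w(z)\, dz, \qquad w(z) = 10 - 0.5z,$$

where $z$ is the depth in meters, $F(z) = 1 - FS(z)$ when the factor of safety $FS(z) < 1$, and $F(z) = 0$ otherwise.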

Keywords: Liquefaction, western Taiwan, liquefaction potential map, factors influencing high liquefaction potential areas, LPI analysis.

162 A Perceptually Optimized Foveation Based Wavelet Embedded Zero Tree Image Coding

Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf

Abstract:

In this paper, we propose a Perceptually Optimized Foveation-based Embedded ZeroTree Image Coder (POEFIC) that applies perceptual weighting to wavelet coefficients prior to the SPIHT encoding algorithm, in order to reach a targeted bit rate with improved perceptual quality around a fixation point which determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS), which plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception. Thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) foveation masking, to remove or reduce considerable high frequencies from peripheral regions; 2) luminance and contrast masking; and 3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. The experimental results show that our coder demonstrates very good performance in terms of quality measurement.
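A hedged sketch of per-subband perceptual weighting before zerotree coding (PyWavelets' "bior4.4" is used as the usual stand-in for the linear-phase 9/7 filter; the weight values are illustrative placeholders, not CSF-derived):

```python
import numpy as np
import pywt

def weight_subbands(img, level=3, weights=(1.0, 0.8, 0.5)):
    """Scale detail subbands level by level prior to encoding (e.g. SPIHT)."""
    coeffs = pywt.wavedec2(img.astype(float), "bior4.4", level=level)
    out = [coeffs[0]]                       # approximation band kept as-is
    for lvl, (cH, cV, cD) in enumerate(coeffs[1:]):
        w = weights[lvl]                    # placeholder perceptual weight
        out.append((cH * w, cV * w, cD * w))
    return out

img = np.random.rand(256, 256)              # stand-in for a test image
weighted = weight_subbands(img)
recon = pywt.waverec2(weighted, "bior4.4")  # a decoder would divide weights out
```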

Keywords: DWT, linear-phase 9/7 filter, foveation filtering, CSF implementation approaches, 9/7 wavelet JND thresholds and wavelet error sensitivity (WES), luminance and contrast masking, standard SPIHT, objective quality measure, probability score (PS).

161 An Intelligent Cascaded Fuzzy Logic Based Controller for Controlling the Room Temperature in Hydronic Heating System

Authors: Vikram Jeganathan, A. V. Sai Balasubramanian, N. Ravi Shankar, S. Subbaraman, R. Rengaraj

Abstract:

Heating systems are a necessity for regions which face extremely cold weather throughout the year. To maintain a comfortable temperature inside a given space, heating systems using hydronic boilers are employed, working on the principle of a single-pipe system. It is mandatory for these heating systems to control the room temperature, thus maintaining a warm environment. In this paper, regulation of the room temperature over a wide range is established using an Adaptive Fuzzy Controller (AFC). The fuzzy controller automatically detects changes in the outside temperature and correspondingly maintains the inside temperature at a comfortable value. Two separate AFCs are used to carry out this function: one to determine the quantity of heat needed to reach the target temperature and to set the desired temperature; the other to control the position of the valve, which is directly proportional to the error between the present room temperature and the user-desired temperature. The fuzzy logic controls the position of the valve according to the heat requirement. The amount by which the valve opens or closes is governed by five knob positions, varying from minimum to maximum, thereby regulating the amount of heat flowing through the valve. For the given test system data, different defuzzifier methods have been implemented and the results compared. In order to validate the effectiveness of the proposed approach, a fuzzy controller has been designed using test data obtained from a real-time system. The simulations are performed in MATLAB and are verified with standard system data. The proposed approach can be implemented for real-time applications.
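A minimal sketch of the fuzzy valve-control idea (triangular membership functions, Mamdani inference and centroid defuzzification over a valve-opening universe; the rule base and ranges are assumptions, not the paper's design):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def valve_position(error):
    """Map temperature error (deg C) to a valve opening in [0, 100] %."""
    u = np.linspace(0, 100, 501)            # valve-opening universe
    # Degree to which the error is 'small', 'medium' or 'large' (assumed).
    mu_small = tri(error, -1, 0, 2)
    mu_med = tri(error, 1, 3, 5)
    mu_large = tri(error, 4, 8, 12)
    # Clip each output set by its rule strength (Mamdani inference) ...
    agg = np.maximum.reduce([
        np.minimum(mu_small, tri(u, 0, 10, 30)),
        np.minimum(mu_med, tri(u, 20, 50, 80)),
        np.minimum(mu_large, tri(u, 70, 90, 100)),
    ])
    # ... and defuzzify by the centroid method.
    return float(np.sum(agg * u) / np.sum(agg)) if np.sum(agg) > 0 else 0.0

print(valve_position(0.5), valve_position(3.0), valve_position(9.0))
```

Swapping the centroid line for, e.g., a mean-of-maxima rule is the kind of defuzzifier comparison the abstract mentions.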

Keywords: Adaptive fuzzy controller, Hydronic heating system

160 Fault Classification of Double Circuit Transmission Line Using Artificial Neural Network

Authors: Anamika Jain, A. S. Thoke, R. N. Patel

Abstract:

This paper addresses the problems encountered by conventional distance relays when protecting double-circuit transmission lines. The problems arise principally as a result of the mutual coupling between the two circuits under different fault conditions; this mutual coupling is highly nonlinear in nature. An adaptive protection scheme based on the application of an artificial neural network (ANN) is proposed for such lines. An ANN has the ability to classify nonlinear relationships between measured signals by identifying the different patterns of the associated signals. A key point of the present work is that only current signals measured at the local end are used to detect and classify faults in a double-circuit transmission line with double-end infeed. The adaptive protection scheme is tested under specific fault types but with varying fault location, fault resistance, fault inception angle and remote-end infeed. Improved performance is achieved once the neural network is trained adequately, after which it performs precisely when faced with different system parameters and conditions. The test results clearly show that faults are detected and classified within a quarter cycle; thus, the proposed adaptive protection technique is well suited for double-circuit transmission line fault detection and classification. Results of performance studies show that the proposed neural-network-based module can improve on the performance of conventional fault-selection algorithms.

Keywords: Double circuit transmission line, Fault detection and classification, High impedance fault and Artificial Neural Network.

159 Plasma Spraying of 316 Stainless Steel on Aluminum and Investigation of Coat/Substrate Interface

Authors: P. Abachi, T. W. Coyle, P. S. Musavi Gharavi

Abstract:

By applying a coating onto a structural component, the corrosion and/or wear resistance requirements of the surface can be fulfilled. Since the layer adhesion of the coating influences the mechanical integrity of the coat/substrate interface during service, it should be examined accurately. In the present work, the tensile bonding strength of a 316 stainless steel plasma-sprayed coating on an aluminum substrate was determined using tensile adhesion test (TAT) specimens. The interfacial fracture toughness was determined using four-point bend specimens containing a saw notch and modified chevron-notched short-bar (SB) specimens. The coating microstructure and fractured specimen surfaces were examined using scanning electron and optical microscopy. Investigation of the coated surface after the tensile adhesion test indicates that the failure mechanism is mostly cohesive and rarely of the adhesive type. The calculated value of the critical strain energy release rate suggests a relatively good interface condition. It appears that the four-point bending test offers a potentially more sensitive means of evaluating the mechanical integrity of coating/substrate interfaces than is possible with the tensile test. The fracture toughness value reported for the modified chevron-notched short-bar specimen cannot be taken as an absolute value, because its calculation is based on the minimum stress intensity coefficient value suggested for the fracture toughness determination of homogeneous parts in the ASTM E1304-97 standard.

Keywords: Bonding strength, four-point bend test, interfacial fracture toughness, modified chevron-notched short-bar specimen, plasma sprayed coating.

158 Impact of Vehicle Travel Characteristics on Level of Service: A Comparative Analysis of Rural and Urban Freeways

Authors: Anwaar Ahmed, Muhammad Bilal Khurshid, Samuel Labi

Abstract:

The effect of trucks on the level of service is determined by considering passenger car equivalents (PCE) of trucks. The current version of the Highway Capacity Manual (HCM) uses a single PCE value for all trucks combined. However, the composition of truck traffic varies from location to location; therefore, a single PCE value for all trucks may not correctly represent the impact of truck traffic at specific locations. Consequently, the present study developed separate PCE values for single-unit and combination trucks to replace the single value provided in the HCM for different freeways. Site-specific PCE values were developed using the concept of spatial lagging headways (that is, the distance between the rear bumpers of two successive vehicles in a traffic stream) measured from field traffic data. The study used data from four locations on a single urban freeway and from three different rural freeways in Indiana. Three-stage least squares (3SLS) regression techniques were used to generate models that predicted lagging headways for passenger cars, single-unit trucks (SUT), and combination trucks (CT). The estimated PCE values for single-unit and combination trucks for basic urban freeways (level terrain) were 1.35 and 1.60, respectively. For rural freeways, the estimated PCE values for single-unit and combination trucks were 1.30 and 1.45, respectively. As expected, traffic variables such as vehicle flow rates and speed have significant impacts on vehicle headways. The study results revealed that the use of separate PCE values for different truck classes can have a significant influence on LOS estimation.
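For orientation, one common headway-based formulation (a generic definition given for context; the study's 3SLS-based estimator may differ in detail) expresses the PCE of truck class $i$ as the ratio of its mean lagging headway to that of passenger cars:

$$PCE_i = \frac{\bar{h}_i}{\bar{h}_{pc}}.$$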

Keywords: Level of Service, Capacity Analysis, Lagging Headway.

157 The Application of a Neural Network in the Reworking of Accu-Chek to Wrist Bands to Monitor Blood Glucose in the Human Body

Authors: J. K. Adedeji, O. H. Olowomofe, C. O. Alo, S. T. Ijatuyi

Abstract:

High blood sugar, which may develop into diabetes mellitus, is now becoming a rampant cardiovascular disorder in our community. In recent times, a lack of awareness among most people has made this disease a silent killer. The situation calls for urgency, hence the need to design a device, such as a wrist watch, that serves as a monitoring tool to alert those living with high blood glucose to danger ahead of time, as well as to introduce a mechanism for checks and balances. The neural network architecture assumed an 8-15-10 configuration: eight neurons at the input stage including a bias, 15 neurons in the hidden layer at the processing stage, and 10 neurons at the output stage indicating likely symptom cases. The inputs are formed using the exclusive OR (XOR), with the expectation of getting an XOR output as the threshold value for diabetic symptom cases. The neural algorithm is coded in the Java language and run for 1000 epochs to bring the errors to the barest minimum. The internal circuitry of the device comprises the compatible hardware matching the nature of each of the input neurons. Light-emitting diodes (LEDs) of red, green, and yellow are used as the outputs of the neural network to indicate pattern recognition for severe cases, pre-hypertensive cases, and normal cases without traces of diabetes mellitus. The research concluded that a neural network is a more efficient Accu-Chek design tool for the proper monitoring of high glucose levels than conventional blood-test methods.

Keywords: Accu-Chek, diabetes, neural network, pattern recognition.

156 FT-NIR Method to Determine Moisture in Gluten Free Rice Based Pasta during Drying

Authors: Navneet Singh Deora, Aastha Deswal, H. N. Mishra

Abstract:

Pasta is one of the most widely consumed food products around the world. Rapid determination of the moisture content in pasta will assist food processors in providing online quality control of pasta during large-scale production. A rapid Fourier transform near-infrared (FT-NIR) method was developed for determining the moisture content in pasta. A calibration set of 150 samples, a validation set of 30 samples and a prediction set of 25 samples of pasta were used. The diffuse reflection spectra of different types of pasta were measured with an FT-NIR analyzer in the 4,000-12,000 cm⁻¹ spectral range. The calibration and validation sets were designed for the conception and evaluation of the method's adequacy in the moisture content range of 10 to 15 percent (w.b.) of the pasta. Prediction models based on partial least squares (PLS) regression were developed in the near-infrared. Conventional criteria such as R², the root mean square error of cross-validation (RMSECV), the root mean square error of estimation (RMSEE), and the number of PLS factors were considered for the selection among three pre-processing methods (vector normalization, minimum-maximum normalization and multiplicative scatter correction). Spectra of pasta samples were treated with the different mathematical pre-treatments before being used to build models between the spectral information and the moisture content. The moisture content in pasta predicted by the FT-NIR method correlated very well with the values determined via traditional methods (R² = 0.983), which clearly indicates that FT-NIR methods can be used as an effective tool for the rapid determination of moisture content in pasta. The best calibration model was developed with minimum-maximum normalization (MMN) spectral pre-processing (R² = 0.9775); the MMN pre-processing method was found to be the most suitable, and a maximum coefficient of determination (R²) of 0.9875 was obtained for the calibration model developed.
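A hedged sketch of the chemometric core described above (min-max scaling of spectra followed by PLS regression and cross-validated RMSE; the spectra here are synthetic placeholders, not FT-NIR data, and the number of PLS factors is an assumption):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
# Placeholder "spectra": 150 calibration samples x 500 wavenumber channels,
# with moisture (10-15 % w.b.) linearly encoded plus noise.
moisture = rng.uniform(10, 15, 150)
X = np.outer(moisture, rng.random(500)) + rng.normal(0, 0.5, (150, 500))

X_scaled = MinMaxScaler().fit_transform(X)      # min-max normalization (MMN)
pls = PLSRegression(n_components=5)             # assumed number of PLS factors
pred = cross_val_predict(pls, X_scaled, moisture, cv=10).ravel()

rmsecv = np.sqrt(np.mean((pred - moisture) ** 2))
r2 = np.corrcoef(pred, moisture)[0, 1] ** 2
print(f"RMSECV = {rmsecv:.3f} % w.b., R^2 = {r2:.3f}")
```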

Keywords: FT-NIR, Pasta, moisture determination.

155 Evaluation and Analysis of Lean-Based Manufacturing Equipment and Technology System for Jordanian Industries

Authors: Mohammad D. AL-Tahat, Shahnaz M. Alkhalil

Abstract:

The forces driving international markets change continuously; therefore, companies need to gain a competitive edge in such markets, and improving the company's products, processes and practices is no longer optional. Lean production is a production management philosophy that consolidates work tasks with minimum waste, resulting in improved productivity. Lean production practices can be mapped onto many production areas. One of these is Manufacturing Equipment and Technology (MET). Many lean production practices can be implemented in MET, namely: specific equipment configurations, total preventive maintenance, visual control, new equipment/technologies, production process reengineering, and a shared vision of perfection. The purpose of this paper is to investigate the implementation level of these six practices in Jordanian industries. To achieve this, a questionnaire survey was designed according to a five-point Likert scale. The questionnaire was validated through a pilot study and through expert review. A sample of 350 Jordanian companies was surveyed, with a response rate of 83%. The respondents were asked to rate the extent of implementation of each practice. A conceptual relationship model is developed, hypotheses are proposed, and the essential statistical analyses are performed. An assessment tool that enables management to monitor the progress and effectiveness of lean practice implementation is designed and presented. The results show that the average implementation level of lean practices in MET is 77%, that Jordanian companies are successfully implementing the considered lean production practices, and that the presented model has a Cronbach's alpha value of 0.87, which is good evidence of model consistency and result validity.
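For reference, the Cronbach's alpha reliability coefficient reported above is conventionally computed as (the standard definition, not specific to this paper):

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_{Y_i}}{\sigma^2_X}\right),$$

where $k$ is the number of items, $\sigma^2_{Y_i}$ is the variance of item $i$, and $\sigma^2_X$ is the variance of the total score.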

Keywords: Lean production, SME applications, visual control, new equipment/technologies, specific equipment configurations, Jordan.

154 Central Finite Volume Methods Applied in Relativistic Magnetohydrodynamics: Applications in Disks and Jets

Authors: Raphael de Oliveira Garcia, Samuel Rocha de Oliveira

Abstract:

We have developed a new computer program in Fortran 90 to obtain numerical solutions of a system of relativistic magnetohydrodynamics partial differential equations with predetermined gravitation (GRMHD), capable of simulating the formation of relativistic jets from the accretion disk of matter up to its ejection. Initially we carried out a study of one-dimensional finite volume numerical methods, namely the Lax-Friedrichs, Lax-Wendroff and Nessyahu-Tadmor methods and Godunov-type methods dependent on Riemann problems, applied to the Euler equations, in order to verify their main features and make comparisons among them. We then implemented the central finite volume method of Nessyahu-Tadmor, a numerical scheme whose formulation is free of Riemann problem solvers and requires no dimensional splitting, even in two or more spatial dimensions, and applied it to the GRMHD equations. Finally, with the Nessyahu-Tadmor method it was possible to obtain stable numerical solutions - without spurious oscillations or excessive dissipation - of the magnetized accretion disk process in rotation around a central Schwarzschild black hole (BH) immersed in a magnetosphere, with ejection of matter in the form of a jet over a distance of fourteen times the radius of the BH, a record for astrophysical simulations of this kind. In our simulations we also obtained jet substructures. A great advantage is that, with our code, we can simulate the GRMHD equations on a simple personal computer.
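As a pointer to the kind of one-dimensional finite volume schemes compared in the first stage, here is a minimal Lax-Friedrichs step for the inviscid Burgers equation (Python rather than the paper's Fortran 90, and a scalar toy conservation law rather than the Euler or GRMHD systems):

```python
import numpy as np

# Lax-Friedrichs finite-volume update for u_t + (u^2/2)_x = 0 with
# periodic boundaries: u_i <- (u_{i+1}+u_{i-1})/2 - dt/(2dx)(f_{i+1}-f_{i-1}).
def lax_friedrichs(u, dx, dt):
    f = 0.5 * u**2                              # Burgers flux
    up, um = np.roll(u, -1), np.roll(u, 1)      # periodic neighbours
    fp, fm = np.roll(f, -1), np.roll(f, 1)
    return 0.5 * (up + um) - dt / (2 * dx) * (fp - fm)

nx, L = 400, 2 * np.pi
x = np.linspace(0, L, nx, endpoint=False)
u = np.sin(x) + 1.5                             # smooth initial data
dx = L / nx
for _ in range(300):
    dt = 0.4 * dx / np.max(np.abs(u))           # CFL-limited time step
    u = lax_friedrichs(u, dx, dt)
```

The Nessyahu-Tadmor scheme the paper adopts replaces this first-order step with a staggered, second-order predictor-corrector built from limited slopes, while keeping the same Riemann-solver-free character.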

Keywords: Finite Volume Methods, Central Schemes, Fortran 90, Relativistic Astrophysics, Jet.

153 Pesticides Use in Rural Settings in Romania

Authors: Anca E. Gurzau, Alexandru Coman, Eugen S. Gurzau, Marinela Penes, Daniela Dumitrescu, Dorin Marchean, Ioan Chera

Abstract:

Environmental pollution with pesticides and heavy metals is a recognized problem nowadays, extending to the global scale with a tendency toward amplification. Even with all the progress in the environmental field, particularly in demonstrating the effects of pollutants upon health, linked environment-health studies are insufficient, not only in Romania but all over the world. We aim to describe the particular situation in Romania regarding the uncontrolled use of pesticides, and to identify and evaluate the zones of risk for health and the environment in Romania, with the final goal of designing adequate programs for the reduction and control of the risk sources. An exploratory study was conducted to determine the magnitude of the pesticide use problem in a population living in Saliste, a rural setting in Transylvania, Romania. The significant stakeholders in the Saliste region were interviewed, and a sample of the population living in the Saliste area was selected to fill in a designed questionnaire. All the selected participants declared that they used pesticides in their activities for more than one purpose. They declared that they applied pesticides annually, over periods of between 11 and 30 years, on 5 to 9 days per year on average, mainly on crops situated at some distance from their houses; however, high-risk behavior was identified, as the volunteers declared the use of pesticides in backyard gardens, near their homes, where children were playing. The pesticide applicators did not have the necessary knowledge about safety and exposure. The health data must be correlated with exposure biomarkers in an attempt to identify the possible health effects of pesticide exposure. Future plans include educational campaigns to raise the awareness of the population about the danger of uncontrolled use of pesticides.

Keywords: Pesticides, health effects, Romania, Saliste.

152 PoPCoRN: A Power-Aware Periodic Surveillance Scheme in Convex Region using Wireless Mobile Sensor Networks

Authors: A. K. Prajapati

Abstract:

In this paper, a periodic surveillance scheme is proposed for any convex region using mobile wireless sensor nodes. A sensor network typically consists of a fixed number of sensor nodes which report measurements of sensed data such as temperature, pressure, humidity, etc., in their immediate proximity (the area within their sensing range). For the purpose of sensing an area of interest, an adequate number of fixed sensor nodes is required to cover the entire region of interest; the number of fixed sensor nodes required to cover a given area therefore depends on the sensing range of the sensors as well as the deployment strategy employed. It is assumed that the sensors are mobile within the region of surveillance and can be mounted on moving bodies such as robots or vehicles. Therefore, in our scheme, the surveillance time period determines the number of sensor nodes required to be deployed in the region of interest. The proposed scheme comprises three algorithms, namely hexagonalization, clustering, and scheduling. The first algorithm partitions the coverage area into fixed-size hexagons that approximate the sensing range (cell) of an individual sensor node. The clustering algorithm groups the cells into clusters, each of which will be covered by a single sensor node. The latter algorithm determines a schedule for each sensor to serve its respective cluster: each sensor node traverses all the cells belonging to the cluster assigned to it by oscillating between the first and the last cell for the duration of its lifetime. Simulation results show that our scheme provides full coverage within a given period of time using few sensors with minimum movement, low power consumption, and relatively low infrastructure cost.
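To illustrate the hexagonalization step, here is a generic flat-top hexagonal tiling of a rectangular region (the cell circumradius standing in for the sensing range; the geometry is standard hex-grid layout, not the paper's stated construction):

```python
import math

def hex_centers(width, height, r):
    """Centers of a flat-top hexagonal tiling covering a width x height region.

    r is the hexagon circumradius, chosen to approximate the sensing range.
    Columns are spaced 1.5*r apart; rows are sqrt(3)*r apart, with every
    other column offset by half a row so the cells interlock.
    """
    dx, dy = 1.5 * r, math.sqrt(3) * r
    centers, col, x = [], 0, 0.0
    while x <= width:
        y = 0.0 if col % 2 == 0 else dy / 2    # odd columns shifted down
        while y <= height:
            centers.append((x, y))
            y += dy
        x += dx
        col += 1
    return centers

cells = hex_centers(100.0, 60.0, r=5.0)        # hypothetical region and range
print(len(cells), "cells; a clustering step would then group them per sensor")
```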

Keywords: Sensor network, graph theory, MSN, communication.

151 Progressive AAM Based Robust Face Alignment

Authors: Daehwan Kim, Jaemin Kim, Seongwon Cho, Yongsuk Jang, Sun-Tae Chung, Boo-Gyoun Kim

Abstract:

AAM has been successfully applied to face alignment, but its performance is very sensitive to initial values. If the initial values are far from the global optimum, there is a considerable possibility that AAM-based face alignment will converge to a local minimum. In this paper, we propose a progressive AAM-based face alignment algorithm which first finds the feature parameter vector fitting the inner facial feature points of the face and later localizes the feature points of the whole face using this information. The proposed algorithm utilizes the fact that the feature points of the inner part of the face are less variable and less affected by the background surrounding the face than those of the outer part (such as the chin contour). The proposed algorithm consists of two stages: a modeling and relation-derivation stage and a fitting stage. The modeling and relation-derivation stage first constructs two AAM models, the inner-face AAM model and the whole-face AAM model, and then derives a relation matrix between the inner-face AAM parameter vector and the whole-face AAM parameter vector. In the fitting stage, the algorithm aligns the face progressively through two phases. In the first phase, it finds the feature parameter vector fitting the inner-face AAM model to a new input face image; in the second phase, it localizes the whole-face feature points of the new input image based on the whole-face AAM model, using an initial parameter vector estimated from the inner feature parameter vector obtained in the first phase and the relation matrix obtained in the first stage. Experiments verify that the proposed progressive AAM-based face alignment algorithm is more robust with respect to pose, illumination, and face background than the conventional basic AAM-based face alignment algorithm.
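The relation-matrix idea can be sketched as a linear least-squares fit (a plausible reading of the abstract, not the authors' stated estimator; `P_inner`, `P_whole` and `p_inner` are hypothetical arrays of AAM parameter vectors):

```python
import numpy as np

# Hypothetical training data: one row of AAM parameters per training image.
rng = np.random.default_rng(0)
P_inner = rng.standard_normal((200, 12))               # inner-face parameters
P_whole = P_inner @ rng.standard_normal((12, 20)) \
          + 0.01 * rng.standard_normal((200, 20))      # whole-face parameters

# Stage 1 (relation derivation): solve P_inner @ R ~= P_whole in the
# least-squares sense to obtain the relation matrix R.
R, *_ = np.linalg.lstsq(P_inner, P_whole, rcond=None)

# Stage 2 (fitting): an inner-face fit seeds the whole-face AAM search.
p_inner = P_inner[0]
p_whole_init = p_inner @ R          # initial whole-face parameter estimate
```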

Keywords: Face Alignment, AAM, facial feature detection, model matching.

150 Optimization of Assembly and Welding of Complex 3D Structures on the Base of Modeling with Use of Finite Elements Method

Authors: M. N. Zelenin, V. S. Mikhailov, R. P. Zhivotovsky

Abstract:

It is known that residual welding deformations negatively affect the processability and operational quality of welded structures, complicating their assembly and reducing their strength. Therefore, the selection of an optimal technology ensuring minimum welding deformations is one of the main goals in developing a technology for manufacturing welded structures. Over the years, JSC SSTC has been developing a theory for the estimation of welding deformations and practical measures for reducing and compensating such deformations during the welding process. For a long time, a methodology based on analytic dependences was used. This methodology allowed defining the volumetric changes of the metal due to welding heating and subsequent cooling. However, the dependences for determining the structural deformations arising from volumetric changes of the metal in the weld area allowed calculations only for simple structures, such as units, flat sections and sections with small curvature. For complex 3D structures, estimates based on analytic dependences gave significant errors. To eliminate this shortcoming, it was suggested to use the finite elements method for solving the deformation problem. Here, one first calculates the longitudinal and transversal shortenings of the welding joints using the method of analytic dependences, and then, from the obtained shortenings, calculates forces whose action is equivalent to the action of the active welding stresses. Next, a finite-elements model of the structure is developed and the equivalent forces are applied to this model. From the results of the calculations, an optimal sequence of assembly and welding is selected, and special measures to reduce and compensate welding deformations are developed and taken.

Keywords: Finite elements method, modeling, expected welding deformations, welding, assembling.

149 Data Hiding in Images in Discrete Wavelet Domain Using PMM

Authors: Souvik Bhattacharyya, Gautam Sanyal

Abstract:

Over the last two decades, owing to the hostile environment of the internet, concerns about the confidentiality of information have increased at a phenomenal rate. Therefore, to safeguard information from attacks, a number of data/information hiding methods have evolved, mostly in the spatial and transform domains. In spatial-domain data hiding techniques, the information is embedded directly in the image plane itself. In transform-domain data hiding techniques, the image is first transformed from the spatial domain to some other domain, and the secret information is then embedded so that it remains more secure against attack. Information hiding algorithms in the time or spatial domain have high capacity and relatively lower robustness. In contrast, algorithms in the transform domain, such as DCT and DWT, have a certain robustness against some multimedia processing. In this work, the authors propose a novel steganographic method for hiding information in the transform domain of a grayscale image. The proposed approach works by converting the gray-level image into the transform domain using a discrete integer wavelet transform through the lifting scheme. This approach performs a 2-D lifting wavelet decomposition using the lifted Haar wavelet on the cover image and computes the approximation coefficients matrix CA and the detail coefficients matrices CH, CV, and CD. The next step is to apply the PMM technique to those coefficients to form the stego image. The aim of this paper is to propose a high-capacity image steganography technique that uses the pixel mapping method in the integer wavelet domain with acceptable levels of imperceptibility and distortion in the cover image and a high level of overall security. This solution is independent of the nature of the data to be hidden and produces a stego image with minimum degradation.
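A hedged sketch of embedding in the Haar wavelet detail band (PyWavelets' floating-point `dwt2` stands in for the paper's integer lifting scheme, and simple parity embedding stands in for PMM, whose exact pixel-value mapping the abstract does not spell out):

```python
import numpy as np
import pywt

def embed_bits(cover, bits):
    """Hide bits in the diagonal detail band of a one-level Haar DWT.

    Parity (LSB-style) embedding is used here as a stand-in for PMM:
    each rounded cD coefficient is nudged so its parity matches the bit.
    """
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), "haar")
    flat = cD.flatten()
    for i, b in enumerate(bits):
        q = np.round(flat[i])
        flat[i] = q + (b - int(q) % 2)   # force coefficient parity to b
    cD = flat.reshape(cD.shape)
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")

cover = (np.arange(64 * 64).reshape(64, 64) % 251).astype(float)  # toy cover
stego = embed_bits(cover, [1, 0, 1, 1, 0, 0, 1, 0])
```

Note that with a floating-point transform, rounding during reconstruction can corrupt the hidden parities, which is precisely why the paper works with an integer lifting transform instead.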

Keywords: Cover image, Pixel Mapping Method (PMM), stego image, integer wavelet transform.
