Search results for: Standard
237 The Importance of Zakat in Struggle against Circle of Poverty and Income Redistribution
Authors: Hasan Bulent Kantarcı
Abstract:
This paper examines how zakat provides fair income redistribution and aids the struggle against poverty. Providing fair income redistribution and combating poverty are among the fundamental tasks performed by countries all over the world. Each country seeks a solution to these problems according to its political, economic and administrative style, applying various economic and financial policies. The same problems can be addressed through the institution of zakat in Islam. Nowadays, we observe different versions of zakat in developed countries. Applications such as the negative income tax are merely a different form of zakat, applied in almost the same way but under a changed name. However, the minimum thresholds for paying zakat (e.g. 85 g of gold or 40 animals) have been altered and various amounts are put into practice. It might be named negative income tax instead of zakat; nonetheless, these applications are based on the Holy Koran and the hadith revealed 1400 years ago. Besides, considering the savagery and slavery in the world at that time, we can easily recognize the true value of zakat as applied for the first time in the Islamic system. Through zakat, governments are able to transfer income to the poor as a means of enabling them to achieve the minimum required standard of living. With regard to who benefits from zakat, objective and fair criteria are used, contrary to the notion that the choice is based on people's own preferences. Since zakat is obligatory, the transfers are not forwarded directly but are collected and distributed via the government, which requires vast governmental organizations. Through the application of zakat, poverty can be reduced and fair income redistribution ensured.
Keywords: Cycle of poverty, Islamic finance, income redistribution, zakat.
236 Implementation of Sprite Animation for Multimedia Application
Authors: Ms. Yi Mon Thant
Abstract:
Animation is simply defined as the sequencing of a series of static images to generate the illusion of movement. Most people believe that the actual drawing or creation of the individual images is the animation, when in actuality it is the arrangement of those static images that conveys the motion. To become an animator, it is often assumed that one needs the ability to quickly design masterpiece after masterpiece. Although some semblance of artistic skill is a necessity for the job, the real key to becoming a great animator is the comprehension of timing. This paper uses a combination of sprite animation, frame animation, and some other techniques to cause a group of multi-colored static images to slither around in a bounded area. In addition to slithering, the images also change the color of different parts of their body, much like real-world creatures that have the ability to change the colors of their bodies. This work was implemented using Java 2 Standard Edition (J2SE). It is both time-consuming and expensive to create animations, regardless of whether they are created by hand or by using motion-capture equipment. If animators could reuse old animations and even blend different animations together, a lot of work would be saved in the process. The main objective of this paper is to examine a method for blending several animations together in real time. This paper presents and analyses a solution using Weighted Skeleton Animation (WSA), resulting in limited CPU time and memory waste as well as saving time for the animators. The idea presented is described in detail and implemented. In this paper, text animation, vertex animation, sprite part animation and whole sprite animation were tested. The resolution, smoothness and movement of the animated images are evaluated using parameters obtained from the experimental implementation of this work.
Keywords: Weighted Skeleton Animation.
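As a rough illustration of the blending idea behind weighted animation schemes such as WSA, the sketch below mixes two skeletal poses using normalized per-animation weights. This is a minimal assumption-laden example in Python (the paper's own implementation is in Java), with joint rotations simplified to single angles and hypothetical key-frame values.

```python
# Minimal sketch (not the paper's WSA implementation): blend two skeletal poses
# with normalized per-animation weights, the core idea behind weighted blending.
def blend_poses(pose_a, pose_b, w_a, w_b):
    """Blend two poses (dicts of joint -> angle in degrees) by weights."""
    total = w_a + w_b
    w_a, w_b = w_a / total, w_b / total      # normalize so weights sum to 1
    return {joint: w_a * pose_a[joint] + w_b * pose_b[joint] for joint in pose_a}

walk = {"hip": 10.0, "knee": 25.0, "ankle": -5.0}   # hypothetical key frames
run  = {"hip": 30.0, "knee": 60.0, "ankle": -15.0}

print(blend_poses(walk, run, w_a=0.7, w_b=0.3))     # mostly walking, a little running
```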
235 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation
Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar
Abstract:
The modernization of computer technology and commercial computational fluid dynamics (CFD) simulation has provided more detailed results than experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Evaluation of pipeline erosion is a complex phenomenon that is difficult to solve analytically, whereas CFD simulation is a convenient tool to resolve that type of problem. Erosion wear behaviour due to a solid–liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multi-phase Euler-Lagrange model was adopted to predict solid particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and the discrete phase model, and evaluates the erosion wear rate for velocities varying from 2 to 4 m/s. The results show that the velocity of the solid-liquid mixture is the dominant parameter compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to the low inertia and the gravitational effect on the solid particulates, which leads to high erosion at the bottom side of the pipeline.
Keywords: Computational fluid dynamics, erosion, slurry transportation, k-ε model.
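Erosion rising strongly with slurry velocity, as reported here for the 2-4 m/s range, is often summarised in post-processing by a power-law relation. The sketch below is purely illustrative: the constant and velocity exponent are assumed values, not coefficients from the paper's FLUENT model.

```python
import numpy as np

# Illustrative power-law erosion trend, E = k * V**n (constants are assumed,
# not taken from the paper), evaluated over the 2-4 m/s velocity range studied.
k, n = 1.0e-7, 2.6                          # hypothetical constant and exponent
velocity = np.linspace(2.0, 4.0, 5)         # slurry velocity, m/s
erosion_rate = k * velocity**n              # arbitrary units

for v, e in zip(velocity, erosion_rate):
    print(f"V = {v:.1f} m/s -> erosion rate ~ {e:.2e}")
```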
234 Convection through Light Weight Timber Constructions with Mineral Wool
Authors: J. Schmidt, O. Kornadt
Abstract:
The major part of light weight timber constructions consists of insulation. Mineral wool is the most commonly used insulation due to its cost efficiency and easy handling. The fiber orientation and porosity of this insulation material enable air flow through it; its air flow resistance is low. If leakage occurs in the insulated bay section, the convective flow may cause energy losses and infiltration of the exterior wall with moisture and particles. In particular, the infiltrated moisture may lead to thermal bridges and growth of health-endangering mould and mildew. In order to prevent this problem, different numerical calculation models have been developed. All models developed so far leave room for improvement, and implementing the flow-through properties of mineral wool insulation may help to improve them. Assuming that the real pressure difference between the interior and exterior surface is larger than the pressure difference prescribed in the standard test procedure for mineral wool, ISO 9053 / EN 29053, measurements were performed using the measurement setup for research on convective moisture transfer (MSRCMT). These measurements show that structural inhomogeneities of mineral wool affect the permeability only at higher pressure differences, as applied in MSRCMT. Additional microscopic investigations show that the location of a leak within the construction has a crucial influence on the air flow-through and the infiltration rate. The results clearly indicate that the empirical values for the acoustic resistance of mineral wool should not be used for the calculation of convective transfer mechanisms.
Keywords: Convection, convective transfer, infiltration, mineral wool, permeability, resistance, leakage.
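The relation between permeability, pressure difference and air flow through an insulation layer can be illustrated with a Darcy-law estimate. The sketch below is not the MSRCMT measurement procedure; all property values are assumed for illustration.

```python
# Darcy-law sketch of air flow through an insulation layer: Q = k * A * dP / (mu * L).
# Property values below are assumed for illustration, not measured MSRCMT data.
k  = 1.0e-9       # permeability of mineral wool, m^2 (assumed)
A  = 0.5          # flow cross-section, m^2
mu = 1.8e-5       # dynamic viscosity of air, Pa.s
L  = 0.2          # insulation thickness, m
dP = 10.0         # pressure difference across the layer, Pa

Q = k * A * dP / (mu * L)                   # volumetric flow rate, m^3/s
print(f"Air flow through the layer: {Q * 1000:.3f} l/s at dP = {dP} Pa")
```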
233 PM10 Chemical Characteristics in a Background Site at the Universidad Libre Bogotá
Authors: Laura X. Martinez, Andrés F. Rodríguez, Ruth A. Catacoli
Abstract:
One of the most important air pollution concerns is that PM10 concentrations maintain a constant trend, with the exception of some places where they frequently surpass the limits established by Colombian legislation. The community that surrounds the Universidad Libre Bogotá comprises a considerable number of students and workers, all of whom are possibly being exposed to PM10 for long periods of time while on campus. Thus, the chemical characterization of PM10 found in the ambient air at the Universidad Libre Bogotá was identified as the research problem. A Hi-Vol sampler and EPA Test Method 5 were used to determine whether the air quality is adequate for the human respiratory system. Additionally, quartz fiber filters were utilized during sampling. Samples were taken three days a week during a dry period throughout the months of November and December 2015. The gravimetric analysis method was used to determine PM10 concentrations. The chemical characterization includes non-conventional carcinogenic pollutants. Atomic absorption spectrophotometry (AAS) was used for the determination of metals, and VOCs were analyzed using the FTIR (Fourier transform infrared spectroscopy) method. In this way, PM10 concentrations ranging from 13 µg/m3 to 66 µg/m3 were obtained; these values were below the national standard. This evidence leads to the conclusion that the PM10 concentrations over a 24-hour exposure period are lower than the values established by Colombian law, Resolution 610 of 2010; however, when compared with the limits set by the World Health Organization (WHO), these concentrations could exceed permissible levels.
Keywords: Air quality, atomic absorption spectrophotometry, Fourier transform infrared spectroscopy, particulate matter.
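The gravimetric PM10 calculation reduces to collected filter mass divided by sampled air volume. A minimal sketch is given below; the filter masses and sampler flow rate are illustrative placeholders, not the campaign's measurements.

```python
# Gravimetric PM10 concentration: collected mass / sampled air volume.
# The masses and flow rate below are illustrative, not the study's data.
mass_initial_mg = 3120.00        # quartz filter mass before sampling, mg
mass_final_mg   = 3185.00        # filter mass after 24 h of sampling, mg
flow_m3_per_min = 1.13           # Hi-Vol sampler flow rate, m^3/min (assumed)
minutes         = 24 * 60        # 24-hour sampling period

volume_m3 = flow_m3_per_min * minutes
pm10_ug_m3 = (mass_final_mg - mass_initial_mg) * 1000.0 / volume_m3   # mg -> ug
print(f"PM10 = {pm10_ug_m3:.1f} ug/m3")
```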
232 Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach
Authors: Rajvir Kaur, Jeewani Anupama Ginige
Abstract:
With recent trends in Big Data and advancements in Information and Communication Technologies, the healthcare industry is transitioning from being clinician oriented to technology oriented. Many people around the world die of cancer because the disease was not diagnosed at an early stage. Nowadays, computational methods in the form of Machine Learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper aims to carry out a comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree) and Artificial Neural Networks (ANN). The evaluation is carried out based on the standard evaluation metrics Precision (P), Recall (R), F1-score and Accuracy. The experimental results based on these metrics show that ANN achieved the highest accuracy (99.4%) when tested with the breast cancer dataset. On the other hand, when these ML classifiers were tested with the cervical cancer dataset, the Ensemble (Bagged Tree) technique gave better accuracy (93.1%) in comparison to the other classifiers.
Keywords: Artificial neural networks, breast cancer, cancer dataset, classifiers, cervical cancer, F-score, logistic regression, machine learning, precision, recall, support vector machine.
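The four evaluation metrics used in the comparison follow directly from confusion-matrix counts. The short sketch below computes them; the counts are toy values, not the paper's experimental results.

```python
# Precision, recall, F1-score and accuracy from confusion-matrix counts.
# Counts are toy values, not results from the breast/cervical cancer datasets.
def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f1        = 2 * precision * recall / (precision + recall)
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

p, r, f1, acc = metrics(tp=88, fp=3, fn=2, tn=107)
print(f"P={p:.3f}  R={r:.3f}  F1={f1:.3f}  Acc={acc:.3f}")
```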
231 New Echocardiographic Morphofunctional Diastolic Index (MFDI) in Differentiation of Normal Left Ventricular Filling from Pseudonormal and Restrictive
Authors: N. Nelasov, D. Safonov, M. Babaev, E. Mirzojan, O. Eroshenko, M. Morgunov, A. Erofeeva
Abstract:
We have shown previously that reflected high intensity motion signals (RIMS) can be used for detection of left ventricular (LV) diastolic dysfunction (DD). It is also well known that left atrial (LA) dimension can be used as a marker of DD. In this study we analyzed the diagnostic role of a new echocardiographic morphofunctional diastolic index (MFDI) in differentiating normal LV filling from pseudonormal and restrictive filling. MFDI includes the LA dimension and the velocity of the early diastolic component ea of RIMS (MFDI = LA/ea).
A total of 343 healthy subjects and patients with various cardiac pathologies underwent Doppler echocardiographic examination. According to the criteria of the "Don" classification scheme, 155 subjects had signs of normal LV filling (N) and 55 had signs of pseudonormal or restrictive filling (PN + R). LA dimension was measured in the standard manner. RIMS were registered by conventional pulsed wave Doppler from the apical 4-chamber view, with the sample volume positioned between the tips of the mitral leaflets. The velocity of the early diastolic component of RIMS was measured. After calculation of MFDI, the mean values of this index in the two groups (N and PN + R) were compared, and the cutoff value of MFDI for differentiating patients with N and PN + R was determined.
The mean value of MFDI in subjects with normal filling was 1.38±0.33 and in patients with pseudonormal and restrictive filling 2.43±0.43; p<0.0001. A cutoff value of MFDI > 2.0 separated subjects with normal LV filling from subjects with pseudonormal and restrictive filling with a sensitivity of 89.1% and a specificity of 97.4%.
Keywords: Doppler echocardiography, diastolic dysfunction, left atrium, reflected high intensity motion signals.
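Applying the index as defined in the abstract (MFDI = LA/ea with the reported cutoff of 2.0) is a one-line calculation. The sketch below uses hypothetical measurement values, expressed in whatever units the study acquired them in, and only illustrates the thresholding step.

```python
# Sketch of the reported index: MFDI = LA dimension / ea (early diastolic RIMS
# velocity), with the abstract's cutoff of 2.0. Input values are hypothetical.
def mfdi(la_dimension, ea_velocity):
    return la_dimension / ea_velocity

def classify(index, cutoff=2.0):
    return "pseudonormal/restrictive filling likely" if index > cutoff else "normal filling likely"

la, ea = 4.6, 2.0                 # hypothetical measurements
index = mfdi(la, ea)
print(f"MFDI = {index:.2f}: {classify(index)}")
```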
230 Cumulative Learning based on Dynamic Clustering of Hierarchical Production Rules (HPRs)
Authors: Kamal K.Bharadwaj, Rekha Kandwal
Abstract:
An important structuring mechanism for knowledge bases is building clusters based on the content of their knowledge objects. The objects are clustered based on the principle of maximizing the intraclass similarity and minimizing the interclass similarity. Clustering can also facilitate taxonomy formation, that is, the organization of observations into a hierarchy of classes that groups similar events together. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. An HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision If <condition> Generality
Keywords: Cumulative learning, clustering, data mining, hierarchical production rules.
229 Injection Molding of Inconel718 Parts for Aerospace Application Using Novel Binder System Based On Palm Oil Derivatives
Authors: R. Ibrahim, M. Azmirruddin, M. Jabir, N. Johari, M. Muhamad, A. R. A. Talib
Abstract:
Inconel718 has been widely used as a super alloy in aerospace applications due to its high strength at elevated temperatures, satisfactory oxidation resistance and heat corrosion resistance. In this study, Inconel718 parts have been fabricated using the Metal Injection Molding (MIM) process, a cost-effective technique for producing small, complex and precision parts in high volume compared with the conventional machining route. In MIM, the binder system is one of the most important factors in successfully fabricating Inconel718. Even though the binder system is temporary, failure in its selection and removal will affect the final properties of the sintered parts. Therefore, a binder system based on a palm oil derivative, palm stearin, has been formulated and developed to replace the conventional binder system. The rheological behaviour of the mixture of powder and binder system was characterized to ensure successful injection in the injection molding machine. After molding, the binder holds the particles in place. The binder system then has to be removed completely through the debinding step. During debinding, solvent debinding and thermal pyrolysis were used to remove the binder system completely. The debound part is then sintered to give the required physical and mechanical properties. The results show that the properties of the final sintered parts fulfill the Metal Powder Industries Federation (MPIF) Standard 35 for MIM parts.
Keywords: Binder system, rheological study, metal injection molding, debinding and sintered parts.
228 Construction Unit Rate Factor Modelling Using Neural Networks
Authors: Balimu Mwiya, Mundia Muya, Chabota Kaliba, Peter Mukalula
Abstract:
Factors affecting construction unit cost vary depending on a country's political, economic, social and technological inclinations. Factors affecting construction costs have been studied from various perspectives, and their analysis requires an appreciation of a country's practices. Identified cost factors provide an indication of a country's construction economic strata. The purpose of this paper is to identify the essential factors that affect unit cost estimation and their breakdown using artificial neural networks. Twenty-five (25) identified cost factors in road construction were subjected to a questionnaire survey and, employing SPSS factor analysis, the factors were reduced to eight. The eight factors were analysed using a neural network (NN) to determine the proportionate breakdown of the cost factors in a given construction unit rate. The NN predicted that the political environment accounted for 44% of the unit rate, followed by contractor capacity at 22%, and financial delays, project feasibility and overhead & profit each at 11%. Project location, material availability and corruption perception index had minimal impact on the unit cost from the training data provided. Quantified cost factors can be incorporated in unit cost estimation models (UCEM) to produce more accurate estimates. This can improve the cost estimation of infrastructure projects and establish a benchmark standard to assist the alignment of work practices and the training of new staff, permitting the ongoing development of best practices in cost estimation.
Keywords: Construction cost factors, neural networks, roadworks, Zambian Construction Industry.
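To illustrate the modelling idea (a small feed-forward network mapping factor scores to a unit rate), the sketch below fits a scikit-learn regressor on synthetic placeholder data. It is not the paper's trained model, and the weight vector used to generate the toy targets merely echoes the proportions reported above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy sketch: a small feed-forward network mapping eight reduced cost factors
# to a construction unit rate. Data are synthetic, not the Zambian survey data.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(60, 8))                  # 8 factor scores per project
true_w = np.array([0.44, 0.22, 0.11, 0.11, 0.11, 0.0, 0.0, 0.01])
y = X @ true_w + rng.normal(0, 0.01, size=60)        # synthetic unit rates

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)
print("Predicted unit rate for a new project:", model.predict(X[:1])[0])
```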
227 The Effects of Shot and Grit Blasting Process Parameters on Steel Pipes Coating Adhesion
Authors: Saeed Khorasanizadeh
Abstract:
The adhesion strength of the exterior or interior coating of steel pipes is very important. Increasing coating adhesion on surfaces can increase the lifetime of the coating and the safety factor of the transmission line pipe, and decrease the rate of corrosion and the costs. Steel pipe surfaces are prepared before coating by shot and grit blasting, a mechanical method. Parameters affecting the process include the particle size of the abrasives, distance to the surface, rate of abrasive flow, abrasive physical properties, shape, abrasive selection, the kind of machine and its power, the standard of surface cleanness degree, roughness, blasting time and air humidity. This research intended to find conditions that improve the surface preparation, adhesion strength and corrosion resistance of the coating. Accordingly, this paper studied the effect of varying the abrasive flow rate, abrasive particle size and blasting time on steel surface roughness and over-blasting using a centrifugal blasting machine. A number of steel samples (according to API 5L X52) were prepared, epoxy powder coating was applied to them, and the adhesion strength of the coating was compared by means of the pull-off test. The results show that increasing the abrasive particle size and flow rate increases the steel surface roughness and coating adhesion strength, but increasing the blasting time leads to surface over-blasting and an increase in surface temperature and hardness, decreasing the steel surface roughness and coating adhesion strength.
Keywords: Surface preparation, abrasive particles, adhesion strength.
226 Physicochemical Characterization of Medium Alkyd Resins Prepared with a Mixture of Linum usitatissimum L. and Plukenetia volubilis L. Oils
Authors: Antonella Hadzich, Santiago Flores
Abstract:
Alkyds have become essential raw materials in the coating and paint industry, due to their low cost, good application properties and lower environmental impact in comparison with petroleum-based polymers. The properties of these oil-modified materials depend on the type of polyunsaturated vegetable oil used for their manufacture, since a higher degree of unsaturation provides better crosslinking of the cured paint. Linum usitatissimum L. (flax) oil is widely used to develop alkyd resins due to its high degree of unsaturation. Although there is interest in finding non-traditional sources and increasing their commercial value, to the authors' best knowledge a natural source that can replace flaxseed oil has not yet been found. However, Plukenetia volubilis L. oil, of Peruvian origin, contains a polyunsaturated fatty acid content similar to the one reported for Linum usitatissimum L. oil. In this perspective, medium alkyd resins were prepared with a mixture of 50% Linum usitatissimum L. oil and 50% Plukenetia volubilis L. oil. Pure Linum usitatissimum L. oil was also used for comparison purposes. Three different resins were obtained by varying the amount of glycerol and pentaerythritol. The synthesized alkyd resins were characterized by FT-IR, and physicochemical properties such as acid value, colour, viscosity, density and drying time were evaluated by standard methods. The pencil hardness and chemical resistance behaviour of the cured resins were also studied. Overall, it can be concluded that medium alkyd resins containing Plukenetia volubilis L. oil have an equivalent behaviour compared to those prepared purely with Linum usitatissimum L. oil. Both Plukenetia volubilis L. oil and pentaerythritol have a remarkable influence on certain physicochemical properties of medium alkyd resins.
Keywords: Alkyd resins, flaxseed oil, pentaerythritol, Plukenetia volubilis L. oil, protective coating.
225 Combined Sewer Overflow forecasting with Feed-forward Back-propagation Artificial Neural Network
Authors: Achela K. Fernando, Xiujuan Zhang, Peter F. Kinley
Abstract:
A feed-forward, back-propagation Artificial Neural Network (ANN) model has been used to forecast the occurrences of wastewater overflows in a combined sewerage reticulation system. This approach was tested to evaluate its applicability as an alternative to the common practice of developing a complete conceptual, mathematical hydrological-hydraulic model of the sewerage system to enable such forecasts. The ANN approach obviates the need for a priori understanding and representation of the underlying hydrological-hydraulic phenomena in mathematical terms, but enables learning the characteristics of a sewer overflow from the historical data. The performance of the standard feed-forward, back-propagation of error algorithm was enhanced by a modified data normalizing technique that enabled the ANN model to extrapolate into territory that was unseen by the training data. The algorithm and the data normalizing method are presented along with the ANN model output results, which indicate good accuracy in the forecasted sewer overflow rates. However, it was revealed that accurate forecasting of the overflow rates is heavily dependent on the availability of real-time flow monitoring at the overflow structure to provide antecedent flow rate data. The ability of the ANN to forecast the overflow rates without the antecedent flow rates (as is the case with traditional conceptual reticulation models) was found to be quite poor.
Keywords: Artificial neural networks, back-propagation learning, combined sewer overflows, forecasting.
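One common way to give a network head-room for extrapolation is to map the training range into a narrower interval such as [0.1, 0.9] instead of [0, 1], so that unseen larger values still fall inside the output activation's working range. The sketch below illustrates that kind of scheme; it is an assumed example, not necessarily the authors' exact normalizing method.

```python
# Normalization that leaves head-room for extrapolation: map the training range
# into [0.1, 0.9] rather than [0, 1]. Illustrative only, not the paper's scheme.
def fit_scaler(values, lo=0.1, hi=0.9):
    vmin, vmax = min(values), max(values)
    scale = (hi - lo) / (vmax - vmin)
    return lambda v: lo + (v - vmin) * scale

train_flows = [12.0, 45.0, 80.0, 150.0]     # hypothetical overflow rates (l/s)
scale = fit_scaler(train_flows)

print(scale(150.0))   # top of the training range -> 0.9
print(scale(165.0))   # unseen, larger flow still maps below 1.0
```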
224 Household Indebtedness Risks in the Czech Republic
Authors: Jindřiška Šedová
Abstract:
In the past 20 years the economy of the Czech Republic has experienced substantial changes. In the 1990s the development was affected by the transformation which sought to establish the right conditions for privatization and the creation of elementary market relations. In the last decade the characteristic elements, such as private ownership and the corresponding institutional framework, have been strengthened. This development was marked by the accession of the Czech Republic to the EU. The Czech Republic is striving to reduce the difference between its level of economic development and the quality of its institutional framework in comparison with other developed countries. The process of finding adequate solutions has been hampered by the negative impact of the world financial crisis on the Czech Republic and the standard of living of its inhabitants. This contribution addresses the question of whether, and to what extent, the economic development of the transitive Czech economy is affected by changes in the behaviour of households and their tendency to consume, i.e. by reductions or increases in the demand for goods and services. It aims to verify whether the increasing trend of household indebtedness and the decreasing trend of saving pose a significant risk in the Czech Republic. At a general level the analysis aims to contribute to answering the question of whether the debt increase of Czech households is connected to the risk of "eating through" the borrowed money and whether Czech households risk falling into a debt trap. In addition to household indebtedness risks in the Czech Republic, the analysis focuses on identification of the specifics of the transformation phase of the Czech economy in comparison with the EU countries, or selected OECD countries.
Keywords: Household indebtedness, household consumption, credits, financial literacy.
223 The Evaluation of Gravity Anomalies Based on Global Models by Land Gravity Data
Authors: M. Yilmaz, I. Yilmaz, M. Uysal
Abstract:
The Earth system generates different phenomena that are observable at the surface of the Earth, such as mass deformations and displacements leading to plate tectonics, earthquakes, and volcanism. The dynamic processes associated with the interior, surface, and atmosphere of the Earth affect the three pillars of geodesy: the shape of the Earth, its gravity field, and its rotation. Geodesy establishes a characteristic structure in order to define, monitor, and predict the behaviour of the whole Earth system. The traditional and new instruments, observables, and techniques in geodesy are related to the gravity field. Therefore, geodesy monitors the gravity field and its temporal variability in order to transform the geodetic observations made on the physical surface of the Earth into the geometrical surface in which positions are mathematically defined. In this paper, the main components of gravity field modeling, (free-air and Bouguer) gravity anomalies, are calculated via recent global models (EGM2008, EIGEN6C4, and GECO) over a selected study area. The model-based gravity anomalies are compared with the corresponding terrestrial gravity data in terms of standard deviation (SD) and root mean square error (RMSE) to determine the best-fitting global model for the study area at a regional scale in Turkey. The smallest SD (13.63 mGal) and RMSE (15.71 mGal) were obtained by EGM2008 for the free-air gravity anomaly residuals. For the Bouguer gravity anomaly residuals, EIGEN6C4 provides the smallest SD (8.05 mGal) and RMSE (8.12 mGal). The results indicate that EIGEN6C4 can be a useful tool for modeling the gravity field of the Earth over the study area.
Keywords: Free-air gravity anomaly, Bouguer gravity anomaly, global model, land gravity.
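The comparison statistics reduce to the SD and RMSE of the residuals between model-based and terrestrial anomalies. A minimal sketch follows; the residual values are synthetic placeholders, not the Turkish data set.

```python
import numpy as np

# Residuals = model anomaly - terrestrial anomaly, summarised by SD and RMSE.
# The residuals below are synthetic, for illustration only.
residuals_mgal = np.array([12.1, -8.4, 15.0, -20.3, 5.7, 18.9, -11.2])

sd   = residuals_mgal.std(ddof=1)               # spread about the mean
rmse = np.sqrt(np.mean(residuals_mgal**2))      # spread about zero
print(f"SD = {sd:.2f} mGal, RMSE = {rmse:.2f} mGal")
```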
222 Mobile Collaboration Learning Technique on Students in Developing Nations
Authors: Amah Nnachi Lofty, Oyefeso Olufemi, Ibiam Udu Ama
Abstract:
New and more powerful communications technologies continue to emerge at a rapid pace, their uses in education are widespread, and their impact on developing societies is remarkable. This study investigates the effect of the Mobile Collaboration Learning Technique (MCLT) on learners' outcomes among students in tertiary institutions of developing nations (a case of Nigerian students). It examines the significance of the retention and achievement scores of students taught using mobile collaboration versus the conventional method. The sample consisted of 120 students selected using the stratified random sampling method. Five research questions and hypotheses were formulated and tested at the 0.05 level of significance. A student achievement test (SAT) consisting of 40 multiple-choice objective items was developed and validated for data collection by professionals. The SAT was administered to students as a pre-test and post-test. The data were analyzed using the t-test statistic to test the hypotheses. The result indicated that students taught using MCLT performed significantly better than their counterparts taught using the conventional method of instruction. Also, there was no significant difference in the post-test performance scores of male and female students taught using MCLT. Based on the findings, the following recommendations were made: mobile collaboration systems should be encouraged in institutions to boost knowledge sharing among learners; workshops and training should be organized to train teachers on the use of this technique; and schools and government should consistently align curriculum standards to technological trends and formulate policies and procedures towards responsible use of MCLT.
Keywords: Education, communication, learning, mobile collaboration, technology.
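The hypothesis test described here is an independent-samples t-test on achievement scores at the 0.05 level. The sketch below shows that calculation with SciPy on invented scores, not the study's data.

```python
from scipy import stats

# Independent-samples t-test of post-test scores (MCLT group vs. conventional
# group) at the 0.05 level. Scores are invented for illustration only.
mclt_scores         = [68, 72, 75, 80, 77, 74, 69, 81, 78, 73]
conventional_scores = [61, 65, 59, 70, 66, 63, 58, 67, 64, 62]

t_stat, p_value = stats.ttest_ind(mclt_scores, conventional_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("Significant at 0.05" if p_value < 0.05 else "Not significant at 0.05")
```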
221 Modeling, Simulation and Monitoring of Nuclear Reactor Using Directed Graph and Bond Graph
Authors: A. Badoud, M. Khemliche, S. Latreche
Abstract:
The main objective of this paper is to develop a graphic technique for the modeling, simulation and diagnosis of industrial systems. This is particularly important for a complex system such as a pressurized water nuclear reactor, which presents several non-linearities and time scales. In this case the analytical approach is heavy and does not give a quick picture of the evolution of the system. The Bond Graph tool enabled us to transform the analytical model into a graphic model, and the simulation software SYMBOLS 2000, specific to Bond Graphs, made it possible to validate the model and obtain the results given by the technical specifications. We introduce the analysis of the problems involved in fault localization and identification in complex industrial processes. We propose a fault detection method applied to diagnosis and to determining the severity of a detected fault. We show the possibilities of applying the new diagnosis approaches to complex system control. Industrial systems have become increasingly complex, and fault diagnosis procedures for physical systems become very complex as soon as the systems considered are no longer elementary. Indeed, faced with this complexity, we chose to resort to the Fault Detection and Isolation (FDI) method through the analysis of its control problem and to design a reliable diagnosis system capable of handling complex, spatially distributed dynamic systems, applied to the standard pressurized water nuclear reactor.
Keywords: Bond graph, modeling, simulation, monitoring, analytical redundancy relations, pressurized water reactor, directed graph.
220 A Study on Fundamental Problems for Small and Medium Agricultural Machinery Industries in Central Region Area
Authors: P. Thepnarintra, S. Nikorn
Abstract:
The agricultural machinery industry plays an important role in industrial development, especially in the production industry of the country. There has been continuing development in response to the higher demand for production; however, problems in agricultural machinery production still exist. Thus, the purpose of this research is to investigate problems concerning the fundamental factors of the industry from the entrepreneurs' point of view. The focus was on small and medium size industries holding factory license type number 0660 from the Department of Industrial Works. The investigation compared the management of small and medium size agricultural machinery industries in 3 provinces in the central region of Thailand. The population in this study consisted of 189 company managers or managing directors, of which 101 were from small size and 88 from medium size industries. The data were analyzed to find percentages, arithmetic means, and standard deviations, with an independent sample t-test at the .05 level of statistical significance. The results showed that the small and medium size agricultural machinery manufacturers in the central region of Thailand reported a high level of problems in every aspect. When the problems concerning basic factors in running the business were compared, no statistically significant difference at the .05 level was found in the management of the small and medium size agricultural machinery manufacturers. However, there was a statistically significant difference between the small and medium size manufacturers with respect to government policy and services. The problems reported by the small and medium size agricultural machinery manufacturers concerned public tap water services and the politics and stability of the country.
Keywords: Agricultural machinery, manufacturers, problems on running the business.
219 Customer Involvement in the Development of New Sustainable Products: A Review of the Literature
Authors: Natalia Moreira, Trevor Wood-Harper
Abstract:
The acceptance of sustainable products by the final consumer is still one of the challenges of the industry, which constantly seeks alternative approaches to be successfully accepted in the global market. A large set of methods and approaches have been discussed and analysed throughout the literature. Considering the current need for sustainable development and the current pace of consumption, the need for a combined solution towards the development of new products became clear, forcing researchers in product development to propose alternatives to the previous standard product development models. This paper presents, through a systemic analysis of the literature on product development, eco-design and consumer involvement, a set of alternatives regarding consumer involvement in the development of sustainable products and how these approaches could help improve the sustainable industry's establishment in the general market. The research is still being developed in the course of the author's PhD; the initial findings show that understanding the benefits of sustainable behaviour leads to more conscious acquisition and eventually to the implementation of sustainable change by the consumer. Thus this paper is an initial approach towards the development of new sustainable products, using the fashion industry as an example of practical implementation and acceptance by consumers. By comparing the existing literature and critically analysing it, this paper concludes that consumer involvement is strategic for improving the general understanding of sustainability and its features. The use of consumers and communities has been studied since the early 90s in order to exemplify uses and to guarantee fast comprehension. The analysis also includes the importance of this approach for the increase of innovation and groundbreaking developments, thus requiring further research and practical implementation in order to better understand the implications and limitations of this methodology.
Keywords: Consumer involvement, product development, sustainability.
218 Q-Map: Clinical Concept Mining from Clinical Documents
Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala
Abstract:
Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, image analytics etc. Most of the data in the field are well-structured and available in numerical or categorical formats which can be used for experiments directly. But on the opposite end of the spectrum, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature; such data can be found in the form of discharge summaries, clinical notes and procedural notes, which are in human-written narrative format and have neither a relational model nor any standard grammatical structure. An important step in the utilization of these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map in this paper, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
Keywords: Information retrieval (IR), unified medical language system (UMLS), syntax-based analysis, natural language processing (NLP), medical informatics.
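As a rough illustration of dictionary-indexed string matching of the kind described (not the actual Q-Map system), the sketch below scans a narrative note against a tiny curated vocabulary and returns matched phrases with placeholder concept identifiers.

```python
import re

# Minimal dictionary-lookup sketch (not Q-Map itself): match clinical phrases
# against a small curated vocabulary. Concept IDs are placeholders.
vocabulary = {
    "myocardial infarction": "C0027051",
    "diabetes mellitus":     "C0011849",
    "hypertension":          "C0020538",
}

def extract_concepts(note):
    found = []
    text = note.lower()
    for phrase, concept_id in vocabulary.items():
        if re.search(r"\b" + re.escape(phrase) + r"\b", text):
            found.append((phrase, concept_id))
    return found

note = "Patient with long-standing hypertension and diabetes mellitus, ruled out myocardial infarction."
print(extract_concepts(note))
```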
217 Overloading Scheme for Cellular DS-CDMA using Quasi-Orthogonal Sequences and Iterative Interference Cancellation Receiver
Authors: Preetam Kumar, Saswat Chakrabarti
Abstract:
Overloading is a technique to accommodate more users than the spreading factor N. This is a bandwidth-efficient scheme to increase the number of users in a fixed bandwidth. One of the efficient schemes to overload a CDMA system is to use two sets of orthogonal signal waveforms (O/O). The first set is assigned to the N users and the second set is assigned to the additional M users. An iterative interference cancellation technique is used to cancel interference between the two sets of users. In this paper, the performance of an overloading scheme in which the first N users are assigned Walsh-Hadamard (WH) orthogonal codes and the extra users are assigned the same WH codes overlaid by a fixed (quasi) bent sequence [11] is evaluated. This particular scheme is called the Quasi-Orthogonal Sequence (QOS) O/O scheme, which is part of the cdma2000 standard [12] to provide overloading in the downlink using a single user detector. The QOS scheme is a balanced O/O scheme, in which the correlation between any set-1 and set-2 users is equalized. The allowable overload of this scheme is investigated in the uplink on AWGN and Rayleigh fading channels, so that the uncoded performance with an iterative multistage interference cancellation detector remains close to the single user bound. It is shown that this scheme provides 19% and 11% overloading with the SDIC technique for N = 16 and 64 respectively, with an SNR degradation of less than 0.35 dB as compared to the single user bound at a BER of 0.00001. On a Rayleigh fading channel, the channel overloading is 45% (29 extra users) at a BER of 0.0005, with an SNR degradation of about 1 dB as compared to single user performance for N = 64. This is a significant amount of channel overloading on a Rayleigh fading channel.
Keywords: DS-CDMA, iterative interference cancellation, orthogonal codes, overloading.
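The O/O construction itself is easy to sketch: set-1 users get Walsh-Hadamard codes, and set-2 users reuse the same codes overlaid (chip-wise multiplied) by a single fixed sequence. In the sketch below the overlay is a random ±1 placeholder, not the actual quasi-bent sequence of the cdma2000 QOS construction.

```python
import numpy as np

# Set-1: Walsh-Hadamard (WH) codes from the Sylvester construction.
# Set-2: the same WH codes overlaid chip-wise by one fixed sequence.
def walsh_hadamard(n):                       # n must be a power of 2
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 16
wh = walsh_hadamard(N)                       # rows are the N orthogonal set-1 codes
overlay = np.where(np.random.default_rng(1).random(N) < 0.5, 1, -1)  # placeholder
set2 = wh * overlay                          # set-2 codes

# Cross-correlation between a set-1 code and a set-2 code (non-zero in general).
print(int(wh[3] @ set2[5]))
```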
216 Investigation of Rehabilitation Effects on Fire Damaged High Strength Concrete Beams
Authors: Eun Mi Ryu, Ah Young An, Ji Yeon Kang, Yeong Soo Shin, Hee Sun Kim
Abstract:
When high strength reinforced concrete is exposed to high temperature due to a fire, deteriorations occur, such as loss of strength and elastic modulus, cracking and spalling of the concrete. Therefore, it is important to understand the risk to structural safety in building structures by studying the structural behavior and rehabilitation of fire damaged high strength concrete structures. This paper investigates the rehabilitation effects on fire damaged high strength concrete beams using experimental and analytical methods. In the experiments, flexural specimens made of high strength concrete are exposed to high temperatures according to the ISO 834 standard time-temperature curve. The results of the four-point loading test show that the maximum loads of the rehabilitated beams are similar to or higher than those of the non-fire-damaged RC beam. In addition, structural analyses are performed using ABAQUS 6.10-3 under the same conditions as the experiments to provide accurate predictions of the structural and mechanical behavior of the rehabilitated RC beams. The parameters are the fire cover thickness and the strength of the repairing mortar. The analytical results show good rehabilitation effects when the results predicted from the rehabilitated models are compared to the structural behavior of the non-damaged RC beams. In this study, fire damaged high strength concrete beams are rehabilitated using polymeric cement mortar. The predictions from the finite element (FE) models show good agreement with the experimental results, and the modeling approaches can be used to investigate the applicability of various rehabilitation methods in further study.
Keywords: Fire, high strength concrete, rehabilitation, reinforced concrete beam.
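The fire exposure follows the ISO 834 standard time-temperature curve, which has the closed form T(t) = 20 + 345·log10(8t + 1) with t in minutes. The sketch below evaluates it at a few exposure times for orientation.

```python
import math

# ISO 834 standard time-temperature curve used for the fire exposure:
# T(t) = 20 + 345 * log10(8 * t + 1), with t in minutes and T in deg C.
def iso834_temperature(t_minutes):
    return 20.0 + 345.0 * math.log10(8.0 * t_minutes + 1.0)

for t in (15, 30, 60, 90, 120):
    print(f"t = {t:3d} min -> furnace temperature ~ {iso834_temperature(t):.0f} C")
```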
215 A Perceptually Optimized Foveation Based Wavelet Embedded Zero Tree Image Coding
Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf
Abstract:
In this paper, we propose a Perceptually Optimized Foveation based Embedded ZeroTree Image Coder (POEFIC) that introduces perceptual weighting of the wavelet coefficients prior to SPIHT encoding in order to reach a targeted bit rate with improved perceptual quality around a fixation point, which determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS), which plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception. Thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) foveation masking, to remove or reduce considerable high frequencies from peripheral regions, 2) luminance and contrast masking, and 3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. However, the experimental results show that our coder demonstrates very good performance in terms of quality measurement.
Keywords: DWT, linear-phase 9/7 filter, Foveation Filtering, CSF implementation approaches, 9/7 Wavelet JND Thresholds and Wavelet Error Sensitivity WES, Luminance and Contrast masking, standard SPIHT, Objective Quality Measure, Probability Score PS.
214 Arriving at an Optimum Value of Tolerance Factor for Compressing Medical Images
Authors: Sumathi Poobal, G. Ravindran
Abstract:
Medical imaging takes advantage of digital technology in imaging and teleradiology. In teleradiology systems, large amounts of data are acquired, stored and transmitted. A major technology that may help to solve the problems associated with massive data storage and data transfer capacity is data compression and decompression. There are many methods of image compression available; they are classified as lossless and lossy compression methods. In lossy compression methods the decompressed image contains some distortion. Fractal image compression (FIC) is a lossy compression method. In fractal image compression an image is coded as a set of contractive transformations in a complete metric space. The set of contractive transformations is guaranteed to produce an approximation to the original image. In this paper FIC is achieved by PIFS using quadtree partitioning. PIFS is applied to different images, such as ultrasound, CT scan, angiogram, X-ray and mammogram images. For each modality approximately twenty images are considered, and the average values of compression ratio and PSNR are derived. In this method of fractal encoding, the tolerance factor Tmax is varied from 1 to 10, keeping the other standard parameters constant. For all modalities of images the compression ratio and Peak Signal to Noise Ratio (PSNR) are computed and studied. The quality of the decompressed image is assessed by PSNR values. From the results it is observed that the compression ratio increases with the tolerance factor and that mammograms have the highest compression ratio. The quality of the image is not degraded up to an optimum value of the tolerance factor, Tmax, equal to 8, because of the properties of fractal compression.
Keywords: Fractal image compression, IFS, PIFS, PSNR, quadtree partitioning.
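The two figures of merit used in the study are straightforward to compute for 8-bit images: PSNR = 10·log10(255² / MSE) and compression ratio = original size / compressed size. The sketch below uses a random stand-in image and an assumed compressed-code size, not the medical images or fractal codes of the paper.

```python
import numpy as np

# PSNR and compression ratio for 8-bit images. The image and the 1 kB
# "fractal code" size are stand-ins, not the paper's data.
def psnr(original, decompressed):
    mse = np.mean((original.astype(float) - decompressed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0**2 / mse)

rng = np.random.default_rng(0)
original     = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)      # stand-in image
decompressed = np.clip(original + rng.integers(-3, 4, size=(64, 64)), 0, 255)

print(f"PSNR = {psnr(original, decompressed):.2f} dB")
print(f"Compression ratio = {original.nbytes / 1024:.1f}x")   # assuming a 1 kB code
```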
213 Use of Social Media in PR: A Change of Trend
Authors: Tang Mui Joo, Chan Eang Teng
Abstract:
The use of social media has become more defined, and it is now widely used for business purposes. More marketers are using social media as a tool to enhance their businesses, while more and more people spend their time in mobile apps engaging with social media sites like YouTube, Facebook, Twitter and others. Social media has even become common in Public Relations (PR): it has become the number one platform for creating and sharing content. In view of this, social media has changed the rules in PR, bringing new challenges and opportunities to the profession. Although corporate websites, chat-rooms, email customer response facilities and electronic news release distribution are now viewed as standard aspects of PR practice, many PR practitioners are still struggling with the impact of new media even though the implementation of social media potentially reduces the cost of communication, to the point that PR practitioners are not fully embracing new media, are ill-equipped to do so, and fear the technology. Social media has become a new style of communication characterized by conversation and community. It has become a platform that allows individuals to interact with one another and build relationships with each other. In the business world, therefore, consumers are able to interact with companies that have joined social media and, based on their experiences with social networking site interactions, are also exposed to personal interaction while communicating. This paper studies the impact of social media on PR and explores the potential changes to PR practices in a developing country like Malaysia, reflecting on how PR practitioners are actually using social media in the country. The research foundation is built on two theories: Media Ecology Theory supports the impact of and changes to PR, and Social Penetration Theory reflects on how social media is used among PR practitioners. This research uses a survey of PR practitioners for its data collection. The results show that PR professionals value social media more than they actually use it and that the way organizations communicate has changed due to the transformation brought about by social media.
Keywords: New media, social media, PR.
212 A Microcontroller Implementation of Model Predictive Control
Authors: Amira Abbes Kheriji, Faouzi Bouani, Mekki Ksouri, Mohamed Ben Ahmed
Abstract:
Model Predictive Control (MPC) is increasingly being proposed for real-time applications and embedded systems. However, compared to the PID controller, the implementation of MPC in miniaturized devices like Field Programmable Gate Arrays (FPGA) and microcontrollers has historically been very limited due to its implementation complexity and its computation time requirements. At the same time, such embedded technologies have become an enabler for future manufacturing enterprises as well as a transformer of organizations and markets. Recently, advances in microelectronics and software have allowed such techniques to be implemented in embedded systems. In this work, we take advantage of these recent advances in the deployment of one of the most studied and applied control techniques in industrial engineering. In this paper, we propose an efficient framework for the implementation of Generalized Predictive Control (GPC) on an STM32 microcontroller. The STM32 Keil starter kit based on a JTAG interface and the STM32 board were used to implement the proposed GPC firmware. Besides the GPC, the PID anti-windup algorithm was also implemented using the Keil development tools designed for ARM processor-based microcontroller devices, working with the C language. A performance comparison study was carried out between the two firmwares, showing good execution speed and low computational burden. These results encourage the development of simple predictive algorithms to be programmed in industrial standard hardware. The main features of the proposed framework are illustrated through two examples and compared with the anti-windup PID controller.
Keywords: Embedded systems, model predictive control, microcontroller, Keil tool.
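For orientation, the anti-windup PID used as the comparison baseline can be sketched as a discrete controller whose integrator is frozen while the output is saturated. The sketch below is written in Python for readability (the actual firmware targets an STM32 in C), and the clamping variant shown is an assumption, not necessarily the authors' exact implementation.

```python
# Discrete PID with clamping anti-windup: the integral term is only updated
# when the control output is not saturated. Illustrative sketch only.
class PIDAntiWindup:
    def __init__(self, kp, ki, kd, dt, u_min, u_max):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        derivative = (error - self.prev_error) / self.dt
        u_unsat = (self.kp * error
                   + self.ki * (self.integral + error * self.dt)
                   + self.kd * derivative)
        u = min(max(u_unsat, self.u_min), self.u_max)   # actuator limits
        if u == u_unsat:                                # integrate only if not saturated
            self.integral += error * self.dt
        self.prev_error = error
        return u

pid = PIDAntiWindup(kp=2.0, ki=0.5, kd=0.1, dt=0.01, u_min=0.0, u_max=100.0)
print(pid.step(setpoint=50.0, measurement=20.0))
```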
211 Investigating the Viability of Small-Scale Rapid Alloy Prototyping of Interstitial Free Steels
Authors: Talal S. Abdullah, Shahin Mehraban, Geraint Lodwig, Nicholas P. Lavery
Abstract:
The defining property of Interstitial Free (IF) steels is formability, comprehensively measured using the Lankford coefficient (r-value) on uniaxial tensile test data. The contributing factors supporting this feature are grain size, orientation, and elemental additions. The processes that effectively modulate these factors are the casting procedure, hot rolling, and heat treatment. An existing methodology is well-practised in the steel industry; however, large-scale production and experimentation consume significant amounts of time, money, and material. Introducing small-scale rapid alloy prototyping (RAP) as an alternative process would considerably reduce these drawbacks relative to standard practice. The aim is to fine-tune the existing fundamental procedures implemented in the industrial plant to adapt them to the RAP route. IF material is remelted in an 80-gram coil induction melting (CIM) glovebox. To obtain small grains, maximum deformation must be induced in the cast material during the hot rolling process. The rolled strip must then satisfy the polycrystalline behaviour of the bulk material by displaying a resemblance in microstructure, hardness, and formability to the literature and actual plant steel. A successful outcome of this work is that small-scale RAP can achieve target compositions with similar microstructures and statistically consistent mechanical properties, which complements and accelerates the development of novel steel grades.
Keywords: Interstitial free, miniaturized tensile specimen, plastic anisotropy, rapid alloy prototyping.
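The Lankford coefficient quoted above is the ratio of true width strain to true thickness strain measured on a tensile specimen, r = ln(w/w0) / ln(t/t0). The sketch below computes it from illustrative specimen dimensions, not from the miniaturized IF-steel specimens of the study.

```python
import math

# Lankford coefficient (r-value) from uniaxial tensile test measurements:
# r = true width strain / true thickness strain. Dimensions are illustrative.
def lankford_r(w0, w, t0, t):
    eps_width     = math.log(w / w0)
    eps_thickness = math.log(t / t0)
    return eps_width / eps_thickness

r = lankford_r(w0=6.00, w=5.40, t0=1.00, t=0.94)
print(f"r-value = {r:.2f}")
```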
210 Study on the Addition of Solar Generating and Energy Storage Units to a Power Distribution System
Authors: T. Costa, D. Narvaez, K. Melo, M. Villalva
Abstract:
The installation of micro-generators based on renewable energy in power distribution systems has increased in recent years, with the main renewable sources being solar and wind. Due to the intermittent nature of renewable energy sources, such micro-generators produce time-varying energy which, at certain times of the day, does not correspond to the peak energy consumption of end users. For this reason, the use of energy storage units next to the grid contributes to the proper leveling of the buses' voltage level according to Brazilian energy quality standards. In this work, the effect of the addition of a photovoltaic solar generator and an energy storage unit on the busbar voltages of an electrical system is analyzed. The consumption profile is defined as the average hourly use of appliances in a common residence, and the generation profile is defined as a function of the solar irradiation available in a locality. The power summation method is validated against an analytical calculation and is used to calculate the magnitudes and angles of the voltages at the buses of an electrical system based on the IEEE standard, at each hour of the day and with defined load and generation profiles. The results show that bus 5 presents the worst voltage level at the power consumption peaks and stabilizes within the appropriate range with the inclusion of the energy storage during the night-time period. The solar generator improves the voltage level during the period in which it receives solar irradiation, with production peaking around 12 pm, without exceeding the appropriate maximum voltage levels.
Keywords: Energy storage, power distribution system, solar generator, voltage level.
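Before the load flow itself (the power summation method is not reproduced here), the hourly balance driving it can be sketched as net load = consumption − PV generation + storage charging (or − discharging). The 24-hour profiles below are illustrative placeholders, not the residential and irradiation profiles used in the study.

```python
# Hourly net load seen by the feeder = consumption - PV generation + storage
# charging (negative values mean discharging). Profiles are illustrative only.
consumption = [0.4]*6 + [0.8, 1.0, 0.9, 0.7, 0.6, 0.6, 0.7, 0.6, 0.6, 0.7, 0.9,
               1.2, 1.8, 2.0, 1.9, 1.5, 1.0, 0.6]                  # kW, 24 h
pv = [0.0]*6 + [0.1, 0.4, 0.8, 1.2, 1.5, 1.7, 1.7, 1.5, 1.2, 0.8, 0.4,
      0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]                           # kW, 24 h
storage = [0.0]*11 + [0.5, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0,
           -0.5, -0.5, -0.5, 0.0, 0.0, 0.0]                        # +charge / -discharge, kW

net = [c - g + s for c, g, s in zip(consumption, pv, storage)]
peak_hour = max(range(24), key=lambda h: net[h])
print(f"Peak net load: {net[peak_hour]:.2f} kW at hour {peak_hour}")
```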
209 Rayleigh-Bénard-Taylor Convection of Newtonian Nanoliquid
Authors: P. G. Siddheshwar, T. N. Sakshath
Abstract:
In this paper we perform linear and non-linear stability analyses of Rayleigh-Bénard convection of a Newtonian nanoliquid in a rotating medium (called Rayleigh-Bénard-Taylor convection). Rigid-rigid isothermal boundaries are considered for the investigation. The Khanafer-Vafai-Lightstone single phase model is used for studying instabilities in nanoliquids. Various thermophysical properties of the nanoliquid are obtained using phenomenological laws and mixture theory. The eigen boundary value problem is solved for the Rayleigh number using an analytical method by considering trigonometric eigenfunctions. We observe that the critical nanoliquid Rayleigh number is less than that of the base liquid. Thus the onset of convection is advanced due to the addition of nanoparticles, so an increase in volume fraction leads to an advanced onset and thereby an increase in heat transport. The amplitudes of the convective modes required for estimating the heat transport are determined analytically. The tri-modal standard Lorenz model is derived for the steady state assuming small-scale convective motions. The effect of rotation on the onset of convection and on heat transport is investigated and depicted graphically. It is observed that the onset of convection is delayed due to rotation, which leads to a decrease in heat transport; hence rotation has a stabilizing effect on the system. This is due to the fact that the energy of the system is used to create the component V. We observe that the amount of heat transport is less in the case of rigid-rigid isothermal boundaries compared to free-free isothermal boundaries.
Keywords: Nanoliquid, rigid-rigid, rotation, single-phase.
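The "tri-modal standard Lorenz model" mentioned above is of the same family as the classical three-mode Lorenz system, which the sketch below integrates with a simple Euler step. The textbook coefficients are used here purely for illustration; the paper's model carries rotation (Taylor number) and nanoparticle corrections that are not reproduced.

```python
# Classical three-mode Lorenz system integrated with an explicit Euler step,
# as a generic illustration of a minimal-mode convection model.
sigma, r, b = 10.0, 28.0, 8.0 / 3.0     # textbook coefficients, not the paper's
x, y, z = 1.0, 1.0, 1.0
dt, steps = 1e-3, 10000

for _ in range(steps):
    dx = sigma * (y - x)
    dy = x * (r - z) - y
    dz = x * y - b * z
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz

print(f"State after {steps * dt:.0f} time units: x={x:.2f}, y={y:.2f}, z={z:.2f}")
```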
208 Effect of L-Dopa on Performance and Carcass Characteristics in Broiler Chickens
Authors: B. R. O. Omidiwura, A. F. Agboola, E. A. Iyayi
Abstract:
The pure form of L-Dopa is used to enhance muscular development and fat breakdown and to suppress Parkinson's disease in humans. However, the L-Dopa in mucuna seed, when present with other antinutritional factors, causes nutritional disorders in monogastric animals. Information on the utilisation of pure L-Dopa in monogastric animals is scanty. Therefore, the effect of L-Dopa on growth performance and carcass characteristics in broiler chickens was investigated. Two hundred and forty one-day-old chicks were allotted to six treatments, which consisted of a positive control (PC) with standard energy (3100 kcal/kg) and a negative control (NC) with high energy (3500 kcal/kg). The remaining four diets were NC+0.1, NC+0.2, NC+0.3 and NC+0.4% L-Dopa, respectively. All treatments had 4 replicates in a completely randomized design. Body weight gain, final weight, feed intake, dressed weight and carcass characteristics were determined. The body weight gain and final weight of birds fed PC (1791.0 and 1830.0 g), NC+0.1% L-Dopa (1827.7 and 1866.7 g) and NC+0.2% L-Dopa (1871.9 and 1910.9 g), and the feed intake of birds fed PC (3231.5 g), were better than those of the other treatments. The dressed weights of 1375.0 g and 1357.1 g for birds fed NC+0.1% and NC+0.2% L-Dopa, respectively, were similar to each other but better than those of the other treatments. Also, the thigh (202.5 g and 194.9 g) and the breast meat (413.8 g and 410.8 g) of birds fed NC+0.1% and NC+0.2% L-Dopa, respectively, were similar but better than those of birds fed the other treatments. The drumstick of birds fed NC+0.1% L-Dopa (220.5 g) was observed to be better than that of birds on the other diets. Meat-to-bone ratio and relative organ weights were not affected across treatments. L-Dopa extract, at the levels tested, had no detrimental effect on broilers; rather, better bird performance and carcass characteristics were observed, especially at the 0.1% and 0.2% L-Dopa inclusion rates. Therefore, 0.2% inclusion is recommended in diets of broiler chickens for improved performance and carcass characteristics.
Keywords: Broilers, carcass characteristics, L-Dopa, performance.