Search results for: Large amplitudes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2293

463 A Hybrid Neural Network and Traditional Approach for Forecasting Lumpy Demand

Authors: A. Nasiri Pour, B. Rostami Tabar, A. Rahimzadeh

Abstract:

Accurate demand forecasting is one of the key issues in inventory management of spare parts. Modeling future consumption becomes especially difficult for lumpy patterns, which are characterized by intervals in which there is no demand and periods with actual demand occurrences with large variation in demand levels, and many forecasting methods perform poorly when demand for an item is lumpy. In this study, based on the characteristics of the lumpy demand patterns of spare parts, a hybrid forecasting approach has been developed which uses a multi-layered perceptron neural network and a traditional recursive method for forecasting future demands. In the described approach, the multi-layered perceptron is adapted to forecast occurrences of non-zero demands, and a conventional recursive method is then used to estimate the quantity of non-zero demands. In order to evaluate the performance of the proposed approach, its forecasts were compared to those obtained by using the Syntetos & Boylan approximation and the multi-layered perceptron, generalized regression, and Elman recurrent neural networks recently employed in this area. The models were applied to forecast future demand for spare parts of Arak Petrochemical Company in Iran, using 30 real data sets. The results indicate that the forecasts obtained by using our proposed model are superior to those obtained by using the other methods.

Keywords: Lumpy Demand, Neural Network, Forecasting, Hybrid Approach.
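The two-stage idea can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the helper name, the lagged occurrence indicators used as classifier inputs, and the Croston-style exponential smoothing of the non-zero demand sizes are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def hybrid_forecast(demand, alpha=0.2, lags=3):
    """Two-stage lumpy-demand forecast: MLP for occurrence, smoothing for size."""
    occ = (demand > 0).astype(int)
    # Lagged occurrence indicators as classifier features (an assumed choice)
    X = np.array([occ[t - lags:t] for t in range(lags, len(occ))])
    y = occ[lags:]
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X, y)
    p_occ = clf.predict_proba(occ[-lags:].reshape(1, -1))[0, 1]
    # Exponentially smoothed size of the non-zero demands (Croston-style)
    size = demand[demand > 0][0]
    for s in demand[demand > 0][1:]:
        size = alpha * s + (1 - alpha) * size
    # Expected demand = P(non-zero demand) * smoothed non-zero size
    return p_occ * size

history = np.array([0, 0, 5, 0, 0, 0, 7, 0, 3, 0, 0, 6, 0, 0, 4, 0, 8, 0, 0, 5])
print(hybrid_forecast(history))
```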

462 Standalone Docking Station with Combined Charging Methods for Agricultural Mobile Robots

Authors: Leonor Varandas, Pedro D. Gaspar, Martim L. Aguiar

Abstract:

One of the biggest concerns in the field of agriculture is the energy efficiency of the robots that will perform agricultural activities, and their charging methods. In this paper, two different charging methods for agricultural standalone docking stations are presented that take into account factors such as field size and its irregularities, the nature of the work the robot will perform, and the deadlines that have to be respected, among others. The choice also depends on the orchard, the season, and the battery type, its technical specifications and its cost. The first charging base method focuses on wireless charging, which presents more benefits for small fields. The second charging base method relies on battery replacement and is more suitable for large fields, as it avoids stopping the robot to recharge. Among the many methods of charging a battery, constant current-constant voltage (CC-CV) charging was considered the most appropriate for both its simplicity and its effectiveness. The choice of battery for agricultural purposes is of utmost importance. While the most commonly used battery is the Li-ion battery, this study also discusses the use of new graphene-based batteries with 45% more capacity than the Li-ion one. A Battery Management System (BMS) is applied for battery balancing. Combined, these approaches promise to improve much technical agricultural work, not just planting and harvesting but also techniques to prevent harmful events like pests and weeds, and even to reduce crop time and cost.

Keywords: Agricultural mobile robot, charging base methods, battery replacement method, wireless charging method.

461 Solar and Wind Energy Potential Study of Lower Sindh, Pakistan for Power Generation

Authors: M. Akhlaque Ahmed, Sidra A. Shaikh, Maliha A. Siddiqui

Abstract:

Estimates of global and diffuse solar radiation on a horizontal surface in Lower Sindh, namely Karachi, Hyderabad and Nawabshah, were carried out using sunshine-hour data for the area to assess the feasibility of solar energy utilization for power generation in Sindh province. The results obtained show a large variation in the direct and diffuse components of solar radiation between summer and winter months in Lower Sindh (50% direct and 50% diffuse for Karachi and Hyderabad). In the Nawabshah area, the contribution of diffuse solar radiation is low during the monsoon months, July and August. The KT (clearness index) value of Nawabshah indicates a clear sky throughout almost the entire year: the percentage of diffuse radiation does not exceed 20%, and the appearance of cloud is rare even during the monsoon months. The estimated values indicate that Nawabshah has high solar potential, whereas Karachi and Hyderabad have low solar potential. During the monsoon months, the lower part of Sindh can use a hybrid system with wind power. Near Karachi and Hyderabad, the wind speed ranges from 6.2 m/s to 6.9 m/s, and a wind corridor exists near Karachi, Hyderabad, Gharo, Keti Bander and Shah Bander. The shortfall of solar can be compensated by wind because, in the monsoon months of July and August, wind speeds are higher in the lower region of Sindh.

Keywords: Hybrid power system, power generation, solar and wind energy potential, Lower Sindh.
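As a quick check of the quoted wind regime, the available wind power per unit swept rotor area follows the standard cubic relation P/A = 0.5*rho*v^3. The short sketch below assumes a sea-level air density of 1.225 kg/m^3, which is not stated in the abstract.

```python
RHO = 1.225  # air density in kg/m^3 (sea-level standard value, assumed)

def wind_power_density(v):
    """Wind power per unit swept area, W/m^2: P/A = 0.5 * rho * v^3."""
    return 0.5 * RHO * v ** 3

for v in (6.2, 6.9):
    print(f"{v} m/s -> {wind_power_density(v):.0f} W/m^2")
# Because of the cubic dependence, the 6.2 -> 6.9 m/s spread raises the
# available power density by roughly 38%.
```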

460 Computational Design of Inhibitory Agents of BMP-Noggin Interaction to Promote Osteogenesis

Authors: Shaila Ahmed, Raghu Prasad Rao Metpally, Sreedhara Sangadala, Boojala Vijay B Reddy

Abstract:

Bone growth factors such as Bone Morphogenetic Protein-2 (BMP-2) have been approved by the FDA to replace grafting in some surgical interventions, but the high dose requirement limits their use in patients. Noggin, an extracellular protein, blocks the effect of BMP-2 by binding to BMP. Preventing the BMP-2/noggin interaction will help increase the free concentration of BMP-2 and therefore should enhance its efficacy in inducing bone formation. The work presented here involves the computational design of novel small-molecule inhibitory agents of the BMP-2/noggin interaction, based on our current understanding of BMP-2 and its known putative ligands (receptors and antagonists). A successful acquisition of such an inhibitory agent would allow clinicians to reduce the dose of BMP-2 protein required in clinical applications to promote osteogenesis. The available crystal structures of the BMPs, their receptors, and the binding partner noggin were analyzed to identify the critical residues involved in their interaction. The LUDI de novo design method was utilized to perform virtual screening of a large number of compounds from a commercially available library against the binding sites of noggin, to identify lead chemical compounds that could potentially block the BMP-noggin interaction with high specificity.

Keywords: Transforming growth factor-beta, Bone morphogenic proteins, Noggin, LUDI de novo design method, CAP small molecules.

459 BeamGA Median: A Hybrid Heuristic Search Approach

Authors: Ghada Badr, Manar Hosny, Nuha Bintayyash, Eman Albilali, Souad Larabi Marie-Sainte

Abstract:

The median problem is widely applied to derive the most reasonable rearrangement phylogenetic tree for many species. More specifically, the problem is concerned with finding a permutation that minimizes the sum of distances between itself and a set of three signed permutations; genomes with an equal number of genes but a different gene order can be represented as permutations. In this paper, an algorithm, namely BeamGA median, is proposed that combines a heuristic search approach (local beam search) as an initialization step to generate a number of solutions, after which a Genetic Algorithm (GA) is applied to refine the solutions, aiming to achieve a better median with the smallest possible reversal distance from the three original permutations. Any genome rearrangement distance can be applied in this approach; in this paper, we use the reversal distance. To the best of our knowledge, the proposed approach has not been applied before for solving the median problem. Our approach considers a true biological evolution scenario by applying the concept of common intervals during the GA optimization process. This allows us to imitate true biological behavior and improve the time convergence of the genetic approach. We were able to handle permutations with a large number of genes, with acceptable time performance and the same or better accuracy compared to existing algorithms.

Keywords: Median problem, phylogenetic tree, permutation, genetic algorithm, beam search, genome rearrangement distance.
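The GA refinement step can be illustrated as follows. This is a simplified sketch, not the BeamGA median itself: the population is seeded randomly rather than by local beam search, common intervals are not used, and the easy-to-compute breakpoint distance stands in for the reversal distance used in the paper.

```python
import random

def breakpoint_distance(p, q):
    """Number of adjacencies of q that are broken in p (simple proxy metric)."""
    pos = {g: i for i, g in enumerate(p)}
    return sum(1 for a, b in zip(q, q[1:]) if abs(pos[a] - pos[b]) != 1)

def median_score(candidate, genomes):
    return sum(breakpoint_distance(candidate, g) for g in genomes)

def ga_median(genomes, pop_size=60, generations=200, seed=0):
    rng = random.Random(seed)
    n = len(genomes[0])
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: median_score(c, genomes))
        survivors = pop[: pop_size // 2]          # keep the best half
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = sorted(rng.sample(range(n), 2))
            child[i:j + 1] = reversed(child[i:j + 1])  # reversal mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda c: median_score(c, genomes))

g1, g2, g3 = [0, 1, 2, 3, 4, 5], [0, 2, 1, 3, 4, 5], [0, 1, 2, 4, 3, 5]
m = ga_median([g1, g2, g3])
print(m, median_score(m, [g1, g2, g3]))
```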

458 Standard Deviation of Mean and Variance of Rows and Columns of Images for CBIR

Authors: H. B. Kekre, Kavita Patil

Abstract:

This paper describes a novel and effective approach to content-based image retrieval (CBIR) that represents each image in the database by a vector of feature values called the "standard deviation of mean vectors of the color distribution of rows and columns of images". In many areas of commerce, government, academia, and hospitals, large collections of digital images are being created, and this paper describes an approach that uses image contents as the feature vector for retrieval of similar images. Several classes of features are used to specify queries: colour, texture, shape, and spatial layout. Colour features are often easily obtained directly from the pixel intensities. In this paper, feature extraction is done for the texture descriptors 'variance' and 'variance of variances'. First, the standard deviation of the row means and of the column means is calculated for the R, G, and B planes; these six values per image act as one feature vector. Second, we calculate the variance of each row and column of the R, G and B planes of an image; the six standard deviations of these variance sequences form a second feature vector of dimension six. We applied our approach to a database of 300 BMP images and determined the capability of automatic indexing by analyzing image content, with color and texture as features and Euclidean distance as the similarity measure.

Keywords: Standard deviation, image retrieval, color distribution, variance, variance of variance, Euclidean distance.
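Both six-dimensional descriptors can be computed directly with NumPy; a minimal sketch, assuming an RGB image stored as an array of shape (H, W, 3):

```python
import numpy as np

def cbir_features(img):
    """img: uint8/float array of shape (H, W, 3) in RGB order."""
    f1, f2 = [], []
    for c in range(3):  # R, G, B planes
        plane = img[:, :, c].astype(float)
        # Feature set 1: std of the row means and of the column means
        f1 += [plane.mean(axis=1).std(), plane.mean(axis=0).std()]
        # Feature set 2: std of the row variances and of the column variances
        f2 += [plane.var(axis=1).std(), plane.var(axis=0).std()]
    return np.array(f1), np.array(f2)

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

rng = np.random.default_rng(0)
query = rng.integers(0, 256, (64, 64, 3))
candidate = rng.integers(0, 256, (64, 64, 3))
q1, _ = cbir_features(query)
c1, _ = cbir_features(candidate)
print(euclidean(q1, c1))  # smaller distance = more similar image
```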

457 Closed Greenhouse Production Systems for Smart Plant Production in Urban Areas

Authors: U. Schmidt, D. Dannehl, I. Schuch, J. Suhl, T. Rocksch, R. Salazar-Moreno, E. Fitz-Rodrigues, A. Rojano Aquilar, I. Lopez Cruz, G. Navas Gomez, R. A. Abraham, L. C. Irineo, N. G. Gilberto

Abstract:

The integration of agricultural production systems into urban areas is a challenge for the coming decades. Because of increasing greenhouse gas emissions and rising resource consumption, as well as costs in animal husbandry, the dietary habits of people in the 21st century will have to focus more on plant-based foods. Intensive plant cultivation systems in large cities and megacities require a smart coupling of information, material and energy flows with the urban infrastructure, in terms of Horticulture 4.0. In recent years, many of the puzzle pieces for these closed processes have been developed at Humboldt University. To combine them into an urban plant production system, they have to be optimized and networked with urban infrastructure systems. In the field of heat energy production, it was shown that with closed greenhouse technology and patented heat exchange and storage technology, energy can be provided for heating and domestic hot water supply in the city. Closed water circuits can drastically reduce the water requirements of plant production in urban areas. Ion-sensitive sensors and new disinfection methods can help keep circulating nutrient solutions in the system for a longer time in urban plant production greenhouses.

Keywords: Semi-closed greenhouses, urban farming, solar heat collector, closed water cycles, aquaponics.

456 Numerical Modelling of Dust Propagation in the Atmosphere of Tbilisi City in Case of Western Background Light Air

Authors: N. Gigauri, V. Kukhalashvili, A. Surmava, L. Intskirveli, L. Gverdtsiteli

Abstract:

Tbilisi, a large city of the South Caucasus, is a junction point connecting Asia and Europe, Russia and the republics of Asia Minor. Over recent years, its atmosphere has experienced an increasing anthropogenic load. A numerical modeling method is used to study Tbilisi's atmospheric air pollution. By means of a 3D non-linear, non-steady numerical model, the peculiarities of the city's atmospheric pollution are investigated during background western light air. Spatial and temporal changes in dust concentration are determined, and the zones of high, average and low pollution, dust accumulation areas, transfer directions, etc. are identified. The numerical modeling shows that the process of air pollution by dust proceeds in four stages, which depend on the intensity of motor traffic, the micro-relief of the city, and the location of the city's main roads: in the interval 06:00-09:00 dust concentrations grow intensively, from 09:00 to 15:00 they are constant or weakly decrease, from 18:00 to 21:00 they increase, and from 21:00 to 06:00 they decrease. The most polluted areas are located in the vicinity of the city center and at some peripheral territories of the city, where the maximum dust concentration at 21:00 is equal to twice the maximum allowable concentration. Similar investigations conducted for various meteorological situations will enable us to compile a map of background urban pollution and to elaborate practical measures for ambient air protection.

Keywords: Numerical modelling, source of pollution, dust propagation, western light air.

455 Urban Greenery in the Greatest Polish Cities: Analysis of Spatial Concentration

Authors: Elżbieta Antczak

Abstract:

Cities offer important opportunities for economic development and for expanding access to basic services, including health care and education, for large numbers of people. Moreover, green areas (an integral part of sustainable urban development) present a major opportunity for improving urban environments, quality of life and livelihoods. This paper examines, using spatial concentration and spatial taxonomic measures, the regional diversification of greenery in the cities of Poland. The analysis includes location quotients, the Lorenz curve, the Locational Gini Index, the synthetic index of greenery, and spatial statistics tools, used (1) to verify the occurrence of strong concentration or dispersion of the phenomenon in time and space depending on the variable category, and (2) to study whether the level of greenery depends on spatial autocorrelation. The data cover the largest Polish cities, several categories of urban greenery (parks, lawns, street greenery, green areas on housing estates, cemeteries, and forests) and the time span 2004-2015. According to the obtained estimates, most cities in Poland are already taking measures to become greener. However, there are still many barriers to well-balanced urban greenery development in the country (e.g. uncontrolled urban sprawl and poor management, as well as a lack of spatial urban planning systems).

Keywords: Greenery, urban areas, regional spatial diversification and concentration, spatial taxonomic measure.

454 Identifying Quality Islamic Content in Community Question Answering Sites

Authors: Rabia Bibi, Muhammad Shahzad Faisal, Khalid Iqbal, Atif Inayat

Abstract:

The Internet is growing rapidly, and new community-generated content is added by people every second. With this fast-growing community-based content, a user who needs the answer to a particular question requires reviews from experts or the community, but it is difficult to get quality answers. Muslim communities all over the world seek help in getting their questions and issues discussed and answered. Online web portals of religious schools and community question answering (CQA) sites are two big platforms for solving users' issues. In the case of religious schools, there are experts and qualified religious scholars (muftis) who can give expert opinions. However, the quality of community-based content cannot be guaranteed, as an answer may not satisfy the user's question, and users on CQA sites may include spammers or individuals criticizing the questioner instead of providing useful answers. In this paper, we investigate strategies to automatically distinguish quality content. As an experiment, we concentrate on Yahoo! Answers and Quora, popular online QA sites where questions are asked, answered, edited, and organized by a large community of users. We present a classification of the data into relevant and irrelevant answers. Specifically, we demonstrate that the proposed framework can isolate quality answers from the rest with an accuracy near that of people.

Keywords: Community-based question and answering, evaluation and prediction of quality answer, answer classification, Islamic content, answer ranking.
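A minimal sketch of the relevant/irrelevant answer classification, assuming a labelled set of answer texts; the TF-IDF features and the linear SVM are illustrative choices, not necessarily the authors' exact framework.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Toy labelled answers: 1 = relevant/quality, 0 = irrelevant (illustrative only)
answers = [
    "Fasting during travel is excused; the missed days are made up later.",
    "Why would you even ask such a thing?",
    "The scholars cite the Quran and hadith on this ruling in detail.",
    "lol no idea, google it",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(answers, labels)
print(model.predict(["Please consult the hadith collections for the ruling."]))
```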

453 Quality Evaluation of Compressed MRI Medical Images for Telemedicine Applications

Authors: Seddeq E. Ghrare, Salahaddin M. Shreef

Abstract:

Medical imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound (US) and X-ray are used to diagnose disease. These modalities provide flexible means of reviewing anatomical cross-sections and physiological state in different parts of the human body. Raw medical images have huge file sizes and large storage requirements, so their size must be reduced for them to be viable for telemedicine applications. Image compression is thus a key factor in reducing the bit rate for transmission or storage while maintaining an acceptable reproduction quality, but it is natural to ask how much an image can be compressed while still preserving sufficient information for a given clinical application. Many techniques for achieving data compression have been introduced. In this study, three different types of MRI images (brain, spine and knee) were compressed and reconstructed using the wavelet transform, and subjective and objective evaluations were done to investigate the clinical information quality of the compressed images. For the objective evaluation, the results show that the PSNR, which indicates the quality of the reconstructed image, ranges from 21.95 dB to 30.80 dB for brain, 27.25 dB to 35.75 dB for spine, and 26.93 dB to 34.93 dB for knee images. For the subjective evaluation test, the results show that a compression ratio of 40:1 was acceptable for the brain image, whereas for the spine and knee images 50:1 was acceptable.

Keywords: Medical Image, Magnetic Resonance Imaging, Image Compression, Discrete Wavelet Transform, Telemedicine.
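The compression-and-evaluation loop can be sketched with PyWavelets. The hard-thresholding rule and the 5% coefficient-retention level below are illustrative assumptions, and a random array stands in for an MRI slice.

```python
import numpy as np
import pywt

def compress_reconstruct(img, wavelet="db4", level=3, keep=0.05):
    """Zero all but the largest `keep` fraction of wavelet coefficients."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1 - keep)  # keep largest 5% by magnitude
    arr[np.abs(arr) < thresh] = 0.0
    rec = pywt.waverec2(
        pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)
    return rec[: img.shape[0], : img.shape[1]]

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(float) - reconstructed) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

slice_img = np.random.default_rng(0).integers(0, 256, (256, 256))  # MRI stand-in
rec = compress_reconstruct(slice_img)
print(f"PSNR: {psnr(slice_img, rec):.2f} dB")
```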

452 Finite Volume Method for Flow Prediction Using Unstructured Meshes

Authors: Juhee Lee, Yongjun Lee

Abstract:

In designing low-energy buildings, the heat transfer through large glass areas or walls becomes critical, and multiple layers of window glass and wall are employed for high insulation. The gravity-driven air flow between window glasses or wall layers is a natural convection phenomenon that is a key part of the heat transfer. As a first step toward the natural heat transfer analysis, this study presents the development and application of a finite volume method for the numerical computation of viscous incompressible flows; it will become part of a natural convection analysis with a high-order scheme, a multigrid method, and dual time stepping in the future. A finite volume method based on a fully implicit second-order scheme is used to discretize and solve the fluid flow on unstructured grids composed of arbitrary-shaped cells. The governing equations are integrated and discretized in the finite volume manner using a collocated arrangement of variables. The convergence of the SIMPLE segregated algorithm for the solution of the coupled nonlinear algebraic equations is accelerated by using a sparse matrix solver such as BiCGSTAB. The method is verified by applying it to flows for which either the numerical solution is known or the solution can be obtained using another numerical technique available in other studies. The accuracy of the method is assessed through grid refinement.

Keywords: Finite volume method, fluid flow, laminar flow, unstructured grid.
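The role of the Krylov solver can be illustrated on a small sparse system. This sketch uses SciPy's BiCGSTAB on a 1-D Poisson-like matrix as a stand-in for the linearized systems arising inside the SIMPLE iterations; it is not the paper's solver configuration.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# 1-D Poisson-like system standing in for a linearized flow equation
n = 100
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = bicgstab(A, b)  # info == 0 means the solver converged
print(info, np.linalg.norm(A @ x - b))  # residual norm of the solution
```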

451 Dynamic Features Selection for Heart Disease Classification

Authors: Walid MOUDANI

Abstract:

The healthcare environment is generally perceived as being information rich yet knowledge poor, as there is a lack of effective analysis tools to discover the hidden relationships and trends in the data. In fact, valuable knowledge can be discovered by applying data mining techniques in healthcare systems. In this study, a proficient methodology is presented for extracting significant patterns from coronary heart disease data warehouses for heart attack prediction, a condition which unfortunately continues to be a leading cause of mortality worldwide. For this purpose, we propose to dynamically enumerate the optimal subsets of reduced features of high interest by using the rough sets technique combined with dynamic programming, and to validate the classification using a Random Forest (RF) decision tree to identify the risky heart disease cases. This work is based on a large amount of data collected from several clinical institutions, based on the medical profiles of patients. Moreover, experts' knowledge in this field has been taken into consideration in order to define the disease and its risk factors, and to establish significant knowledge relationships among the medical factors. A computer-aided system is developed for this purpose based on a population of 525 adults. The performance of the proposed model is analyzed and evaluated against a set of benchmark techniques applied to this classification problem.

Keywords: Multi-classifier decision tree, feature reduction, dynamic programming, rough sets.
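The Random Forest validation step can be sketched as follows on synthetic stand-in data; the rough-sets/dynamic-programming feature reduction of the paper is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for 525 medical profiles (age, blood pressure, ...)
X = rng.normal(size=(525, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=525) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(rf, X, y, cv=5).mean())  # classification accuracy
rf.fit(X, y)
print(rf.feature_importances_)  # a cheap look at which features matter most
```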

450 Effect of Reynolds Number on Wall-normal Turbulence Intensity in a Smooth and Rough Open Channel Using both Outer and Inner Scaling

Authors: Md Abdullah Al Faruque, Ram Balachandar

Abstract:

Sudden changes of bed condition are frequent in open channel flow and affect the turbulence characteristics in both the streamwise and wall-normal directions. Understanding the turbulence intensity in open channel flow is of vital importance to the modeling of sediment transport and resuspension, bed formation, entrainment, and the exchange of energy and momentum. A comprehensive study was carried out to understand the extent of the effect of Reynolds number and bed roughness on different turbulence characteristics in an open channel flow. Four different bed conditions (impervious smooth bed, impervious continuous rough bed, pervious rough sand bed, and impervious distributed roughness) and two different Reynolds numbers were adopted for this purpose. The effect of bed roughness on the different turbulence characteristics is seen to be prevalent over most of the flow depth. The effect of Reynolds number is also evident for flow over the different beds, but its extent varies with bed condition. Although the same sand grain is used to create the different rough bed conditions, the difference in turbulence characteristics indicates that the specific geometry of the roughness influences the turbulence. Roughness increases the contribution of extreme turbulent events, which produce very large instantaneous Reynolds shear stresses and can potentially influence sediment transport, resuspend pollutants from the bed and alter the nutrient composition, eventually affecting the sustainability of benthic organisms.

Keywords: Open channel flow, Reynolds Number, roughness, turbulence.

449 Oracle JDE Enterprise One ERP Implementation: A Case Study

Authors: Abhimanyu Pati, Krishna Kumar Veluri

Abstract:

The paper presents a real-life experience encountered during the actual implementation of a large-scale Tier-1 Enterprise Resource Planning (ERP) system in a multi-location, discrete manufacturing organization in India involved in the manufacturing of auto components and aggregates. The business complexities prior to the implementation of ERP included multiple products with hierarchical product structures, geographically distributed plant locations with disparate business practices, a lack of inter-plant broadband connectivity, disparate legacy applications for different business functions, and non-standardized codifications of products, machines, employees, and accounts, among others. The manufacturing environment, meanwhile, consisted of processes such as Assemble-to-Order (ATO), Make-to-Stock (MTS), and Engineer-to-Order (ETO), with a mix of discrete and process operations. The paper highlights the various business plan areas and concerns prior to the implementation, with specific focus on strategic issues and objectives. Subsequently, it deals with the complete process of ERP implementation, from strategic planning through project planning and resource mobilization to program execution. The step-by-step process provides a very good learning opportunity about the implementation methodology. At the end, the various organizational challenges and lessons that emerged are presented, which can act as guidelines and a checklist for organizations to successfully align and implement ERP and achieve their business objectives.

Keywords: ERP, ATO, MTS, ETO, discrete manufacturing, strategic planning.

448 Effect of Cow Bone and Groundnut Shell Reinforcement in Epoxy Resin on the Mechanical Properties and Microstructure of the Composites

Authors: O. I. Rufai, G. I. Lawal, B. O. Bolasodun, S. I. Durowaye, J. O. Etoh

Abstract:

It is an established fact that polymers have several physical limitations, such as low stiffness and low resistance to impact loading; hence, polymers do not usually have the requisite mechanical strength for application in various fields. Reinforcement with high-strength fibers gives a polymer substantially enhanced mechanical properties and makes it suitable for a larger number of diverse applications. This research evaluates the effects of particulate cow bone and groundnut shell additions on the mechanical properties and microstructure of reinforced epoxy composites, in order to assess the possibility of using them as materials for engineering applications. Cow bone and groundnut shell particle-reinforced epoxy composites (CBRPC and GSRPC) were prepared by varying the cow bone and groundnut shell particle content from 0 to 25 wt% in 5 wt% intervals; a hybrid of cow bone and groundnut shell (HGSCB) reinforced epoxy was also prepared. The mechanical properties of the developed composites were investigated, and optical microscopy was used to examine their microstructure. The results revealed that the mechanical properties did not increase uniformly with filler additions but exhibited maxima at specific percentages of filler. From the microscopic evaluation, it was discovered that homogeneity decreases with increasing filler percentage, which could be due to poor interfacial bonding.

Keywords: Groundnut shell reinforced polymer composite (GSRPC), cow bone reinforced polymer composite (CBRPC), hybrid of groundnut shell and cow bone (HGSCB).

447 Fluorescence Quenching as an Efficient Tool for Sensing Application: Study on the Fluorescence Quenching of Naphthalimide Dye by Graphene Oxide

Authors: Sanaz Seraj, Shohre Rouhani

Abstract:

Recently, graphene has gained much attention because of its unique optical, mechanical, electrical, and thermal properties. Graphene has been used as a key material in technological applications in various areas such as sensors, drug delivery, supercapacitors, transparent conductors, and solar cells, and it has a superior quenching efficiency for various fluorophores. Based on these unique properties, optical sensors with graphene materials as the energy acceptors have demonstrated great success in recent years. During quenching, the emission of a fluorophore is perturbed by a quencher, which can be a substrate or biomolecule, and due to this phenomenon fluorophore-quencher pairs have been used for the selective detection of target molecules. Among fluorescent dyes, 1,8-naphthalimide is well known as a typical intramolecular charge transfer (ICT) and photo-induced electron transfer (PET) fluorophore, with strong absorption and emission in the visible region, high photostability, and a large Stokes shift. Derivatives of 1,8-naphthalimide have found applications in several areas, especially fluorescence sensors. Herein, the fluorescence quenching by graphene oxide of a naphthalimide dye serving as a fluorescent probe model has been studied by UV-VIS and fluorescence spectroscopy. This study showed that graphene oxide is an efficient quencher for fluorescent dyes and can therefore be used as a suitable candidate sensing platform. To the best of our knowledge, studies on the quenching and absorption of naphthalimide dyes by graphene oxide are rare.

Keywords: Fluorescence, graphene oxide, naphthalimide dye, quenching.
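Quenching measurements of this kind are conventionally analyzed with the Stern-Volmer relation, the standard model for such data (the abstract does not name the model actually used):

$$ \frac{F_0}{F} = 1 + K_{\mathrm{SV}}\,[Q] $$

where F0 and F are the fluorescence intensities without and with the quencher, [Q] is the quencher (here, graphene oxide) concentration, and K_SV is the Stern-Volmer constant; a linear F0/F versus [Q] plot indicates a single dominant quenching mechanism.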

446 Numerical Investigation of Non-Fourier Heat Conduction in a Semi-Infinite Body due to a Moving Concentrated Heat Source with Radiational Boundary Condition

Authors: M. Akbari, S. Sadodin

Abstract:

In this paper, the melting of a semi-infinite body as a result of a moving laser beam has been studied. Because the Fourier heat transfer equation does not have sufficient accuracy at short times and large dimensions, a non-Fourier form of the heat transfer equation has been used. Because the beam is moving in the x direction, the temperature distribution and the melt pool shape are not symmetric, and the problem is therefore a transient three-dimensional one. Thermophysical properties such as the heat conductivity coefficient, density and heat capacity are treated as functions of temperature and material state. The enthalpy technique, used for the solution of phase change problems, has been applied in an explicit finite volume form to the hyperbolic heat transfer equation, and used to calculate the transient temperature distribution in the semi-infinite body and the growth rate of the melt pool. In order to validate the numerical results, comparisons were made with experimental data. Finally, the results were compared with those for a similar problem solved using the Fourier theory; the comparison shows the influence of the infinite speed of heat propagation assumed in the Fourier theory on the temperature distribution and the melt pool size.

Keywords: Non-Fourier, enthalpy technique, melt pool, radiational boundary condition.
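For constant properties and no phase change, the non-Fourier conduction model referred to here is commonly written in the hyperbolic Cattaneo-Vernotte form (the temperature-dependent properties and enthalpy formulation of the paper make the actual equations more involved):

$$ \tau \frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t} = \alpha \nabla^2 T $$

where tau is the thermal relaxation time and alpha the thermal diffusivity. Fourier conduction, with its infinite propagation speed, is recovered as tau tends to zero, and the finite heat-wave speed is sqrt(alpha/tau).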

445 Identifying Autism Spectrum Disorder Using Optimization-Based Clustering

Authors: Sharifah Mousli, Sona Taheri, Jiayuan He

Abstract:

Autism spectrum disorder (ASD) is a complex developmental condition involving persistent difficulties with social communication, restricted interests, and repetitive behavior. The challenges associated with ASD can interfere with an affected individual's ability to function in social, academic, and employment settings. Although no medication is known to treat ASD effectively, to the best of our knowledge, early intervention can significantly improve an affected individual's overall development; hence, an accurate diagnosis of ASD at an early phase is essential, and the use of machine learning approaches can improve and speed up that diagnosis. In this paper, we focus on the application of unsupervised clustering methods to ASD, as the large volume of ASD data generated through hospitals, therapy centers, and mobile applications has no pre-existing labels. We conduct a comparative analysis of seven clustering approaches (K-means, agglomerative hierarchical, model-based, fuzzy C-means, affinity propagation, self-organizing maps, and learning vector quantisation) as well as the recently developed optimization-based clustering (COMSEP-Clust) approach. We evaluate the performances of the clustering methods extensively on real-world ASD datasets encompassing different age groups: toddlers, children, adolescents, and adults. Our experimental results suggest that the COMSEP-Clust approach outperforms the other seven methods in recognizing ASD with well-separated clusters.

Keywords: Autism spectrum disorder, clustering, optimization, unsupervised machine learning.
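The comparative protocol can be sketched with scikit-learn on unlabeled data. Three of the seven baselines are shown, with the silhouette score as one way to compare cluster separation; the real ASD datasets and COMSEP-Clust itself are not reproduced.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

# Synthetic stand-in for ASD screening records (no labels assumed)
X, _ = make_blobs(n_samples=300, centers=2, n_features=10, random_state=0)

candidates = {
    "k-means": KMeans(n_clusters=2, n_init=10, random_state=0),
    "agglomerative": AgglomerativeClustering(n_clusters=2),
    "model-based (GMM)": GaussianMixture(n_components=2, random_state=0),
}
for name, model in candidates.items():
    labels = model.fit_predict(X)
    print(f"{name}: silhouette = {silhouette_score(X, labels):.3f}")
```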

444 A Review of the Antecedents and Consequences of Employee Engagement

Authors: Ibrahim Hamidu Magem

Abstract:

Employee engagement has continued to gain popularity among practitioners, consultants and academics in recent years, because engaged employees are central to organizational success in today's highly competitive and rapidly changing business environment. Employee engagement depicts a situation whereby employees harness themselves to their work roles, and its importance to organizations cannot be overemphasized. Organizations both large and small are constantly striving to improve their performance, retain employees, reduce absenteeism, and create loyal customers, among other goals, and to achieve these they need a team of highly engaged employees. In line with this, the study attempts to provide a valuable framework for understanding the antecedents and consequences of employee engagement in organizations. The paper categorizes the antecedents of employee engagement into individual and organizational factors, on the assumption that the existence of such factors results in engaged employees who benefit the organization. It is therefore recommended that organizations revisit and redesign their employee engagement systems to enable them to attain their goals and objectives. In addition, organizations should note that while engagement is personal, organizational engagement programmes should be about everyone in the organization. The findings of this paper add to existing studies of employee engagement and also raise awareness among academics and practitioners of the importance of employee engagement in improving organizational efficiency and effectiveness, as well as overall firm performance.

Keywords: Antecedent, employee engagement, job involvement, organization.

443 Manipulating Urban Layouts to Enhance Ventilation and Thermal Comfort in Street Canyons

Authors: Su Ying-Ming

Abstract:

The high density of high-rise buildings in urban areas gradually worsens the urban heat island effect. This study focuses on the relationship between urban layout and ventilation comfort in street canyons, taking the Songjiang Nanjing Rd. area of Taipei, Taiwan as an example, and evaluates the wind environment comfort index by field measurement and Computational Fluid Dynamics (CFD) in order to improve both the quality and quantity of the environment. Different factors, including street block size, building width, street width ratio and wind direction, were used to assess the potential for ventilation. The environmental wind field was measured with environmental testing equipment (Testo 480). The evaluation of block sizes, building widths, street width ratios and wind directions was made under the condition of constant floor area, with the help of CFD simulation, to optimize the regional wind environment. The results of this study showed that building width influences the efficiency of outdoor ventilation, and that ventilation efficiency improves with large street widths. The study also found that block width, H/D value and PR value are closely related. Furthermore, a significant relationship was shown between changes in street block geometry and outdoor comfort.

Keywords: Urban ventilation path, ventilation efficiency indices, CFD, building layout.

442 Empirical Roughness Progression Models of Heavy Duty Rural Pavements

Authors: Nahla H. Alaswadko, Rayya A. Hassan, Bayar N. Mohammed

Abstract:

Empirical deterministic models have been developed to predict the roughness progression of heavy duty spray sealed pavements for a dataset representing rural arterial roads. The dataset provides a good representation of the relevant network and covers a wide range of operating and environmental conditions. A large sample of historical time series data for many pavement sections was collected and prepared for use in multilevel regression analysis. The modelling parameters include road roughness as the performance parameter, and traffic loading, time, initial pavement strength, reactivity level of the subgrade soil, climate condition, and condition of the drainage system as predictor parameters. The purpose of this paper is to report the approaches adopted for model development and validation. The study presents multilevel models that can account for the correlation among time series data from the same section and capture the effect of unobserved variables. The results show that the models fit the data very well, and the contribution and significance of the relevant influencing factors in predicting roughness progression are presented and explained. The paper concludes that the analysis approach used for developing the models confirmed their accuracy and reliability, as they fit the validation data well.

Keywords: Roughness progression, empirical model, pavement performance, heavy duty pavement.
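A multilevel (mixed-effects) roughness model of this kind can be sketched with statsmodels. The variable names and the synthetic panel below are illustrative assumptions; a random intercept per pavement section captures the within-section correlation of the time series.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Synthetic panel: 10 repeated roughness surveys on 40 pavement sections
sections = np.repeat(np.arange(40), 10)
age = np.tile(np.arange(10), 40)
traffic = rng.uniform(0.5, 2.0, size=40)[sections]  # loading per section
iri = (1.5 + 0.08 * age * traffic
       + rng.normal(0, 0.1, 400)
       + rng.normal(0, 0.2, 40)[sections])          # section-level effect

df = pd.DataFrame({"iri": iri, "age": age,
                   "traffic": traffic, "section": sections})
# Random intercept per section accounts for correlation within each series
model = smf.mixedlm("iri ~ age + age:traffic", df, groups=df["section"]).fit()
print(model.summary())
```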

441 Modeling Spatial Distributions of Point and Nonpoint Source Pollution Loadings in the Great Lakes Watersheds

Authors: Chansheng He, Carlo DeMarchi

Abstract:

A physically based, spatially distributed water quality model is being developed to simulate the spatial and temporal distributions of material transport in the Great Lakes Watersheds of the U.S. Multiple databases of meteorology, land use, topography, hydrography, soils, agricultural statistics, and water quality were used to estimate the nonpoint source loading potential in the study watersheds. Animal manure production was computed from tabulations of animals by zip code area for the census years 1987, 1992, 1997, and 2002, and relative chemical loadings for agricultural land use were calculated from fertilizer and pesticide estimates by crop for the same periods. Comparison of these estimates to the monitored total phosphorus load indicates that both point and nonpoint sources are major contributors to the total nutrient loads in the study watersheds, with nonpoint sources being the largest contributor, particularly in the rural watersheds. These estimates are used as input to the distributed water quality model for simulating pollutant transport through surface and subsurface processes to Great Lakes waters. Visualization and GIS interfaces were developed to display the spatial and temporal distribution of the pollutant transport in support of water management programs.

Keywords: Distributed Large Basin Runoff Model, Great Lakes Watersheds, nonpoint source pollution, point sources.

440 Pragati Node Popularity (PNP) Approach to Identify Congestion Hot Spots in MPLS

Authors: E. Ramaraj, A. Padmapriya

Abstract:

In large Internet backbones, service providers typically have to explicitly manage the traffic flows in order to optimize the use of network resources. This process is often referred to as Traffic Engineering (TE), and its common objectives include balancing the traffic distribution across the network and avoiding congestion hot spots. Raj P H and S V K Raja designed the Bayesian network approach to identify congestion hot spots in MPLS: a Conditional Probability Distribution (CPD) is specified for every node in the network, the congestion hot spots are identified based on the CPDs, and the traffic can then be distributed so that no link in the network is either over-utilized or under-utilized. Although the Bayesian network approach has been implemented in operational networks, it has a number of well-known scaling issues. This paper proposes a new approach, which we call the Pragati (meaning 'progress') Node Popularity (PNP) approach, to identify congestion hot spots from the network topology alone: IP routing runs natively over the physical topology rather than depending on the CPD of each node as in the Bayesian network. We first illustrate our approach with a simple network and then present a formal analysis of it. We show that for any given network our PNP approach identifies exactly the same hot spots as the Bayesian approach with minimal effort, and we further extend the result to any network topology, even when the network contains loops. A theoretical insight of our result is that the optimal routing is always shortest path routing with respect to some consideration of hot spots in the network.

Keywords: Conditional Probability Distribution, Congestion hotspots, Operational Networks, Traffic Engineering.

439 Prediction of Road Accidents in Qatar by 2022

Authors: M. Abou-Amouna, A. Radwan, L. Al-kuwari, A. Hammuda, K. Al-Khalifa

Abstract:

There is growing concern over the increasing incidence of road accidents and the consequent loss of human life in Qatar. In light of the planned future event in Qatar, the 2022 World Cup, Qatar should take future deaths caused by road accidents into consideration, and past trends should be used to give a reasonable picture of what may happen. Qatar's roads should be arranged and paved in a way that accommodates the high population at that time, since there will be a huge number of visitors from around the world, and Qatar should also consider the road accident risks raised in that period and plan to maintain high-level safety strategies. Given the increase in the number of road accidents in Qatar from 1995 to 2012, the elements affecting and causing road accidents are analyzed. This paper aims to identify and critically examine the factors that have a strong effect on causing road accidents in the State of Qatar, and to predict the total number of road accidents in Qatar in 2022. Alternative methods are discussed, and the ones most applicable according to previous research are selected for further study. The methods that suit the existing case in Qatar are the multiple linear regression model (MLR) and the artificial neural network (ANN). These methods are analyzed and their findings compared: using MLR, the number of accidents in 2022 is predicted to be 355,226, and using ANN, 216,264. We conclude that MLR gave better results than ANN, because the artificial neural network does not fit data with a large range of variation well.

Keywords: Road Safety, Prediction, Accident, Model, Qatar.
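The MLR extrapolation step amounts to fitting a trend to the historical series and evaluating it at 2022. The sketch below uses illustrative, not actual, accident counts.

```python
import numpy as np

# Illustrative (not actual) yearly accident counts for 1995-2012
years = np.arange(1995, 2013)
accidents = (60_000 + 9_000 * (years - 1995)
             + np.random.default_rng(0).normal(0, 3_000, years.size))

coeffs = np.polyfit(years, accidents, deg=1)  # simple linear trend fit
trend = np.poly1d(coeffs)
print(f"Predicted accidents in 2022: {trend(2022):,.0f}")
```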

438 A Rule-based Approach for Anomaly Detection in Subscriber Usage Pattern

Authors: Rupesh K. Gopal, Saroj K. Meher

Abstract:

In this report, we present a rule-based approach to detect anomalous telephone calls. The method described here uses subscriber usage CDR (call detail record) data sampled over two observation periods: a study period and a test period. The study period contains call records of customers' non-anomalous behaviour. Customers are first grouped according to their similar usage behaviour (e.g., average number of local calls per week). For the customers in each group, we develop a probabilistic model to describe their usage and use maximum likelihood estimation (MLE) to estimate the parameters of the calling behaviour; we then determine thresholds by calculating the acceptable change within a group. MLE is applied to the data in the test period to estimate the parameters of the calling behaviour, these parameters are compared against the thresholds, and any deviation beyond a threshold is used to raise an alarm. This method has the advantage of identifying local anomalies, as compared to techniques which identify global anomalies. The method was tested on 90 days of study data and 10 days of test data from telecom customers. For medium to large deviations in the data in the test window, the method is able to identify 90% of anomalous usage with less than a 1% false alarm rate.

Keywords: Subscription fraud, fraud detection, anomaly detection, maximum likelihood estimation, rule-based systems.
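The MLE-and-threshold logic can be sketched for one customer group, assuming (for illustration) a Gaussian model of a usage statistic and a three-sigma definition of acceptable change; the paper's per-group probabilistic model is not specified in the abstract.

```python
import numpy as np

def fit_gaussian_mle(x):
    """MLE of mean and std for a normally modelled usage statistic."""
    return x.mean(), x.std()

rng = np.random.default_rng(0)
study = rng.normal(20, 4, size=90)  # 90 days of study-period usage
test = np.append(rng.normal(20, 4, size=8), [45.0, 52.0])  # last 2 days odd

mu, sigma = fit_gaussian_mle(study)
threshold = 3 * sigma               # "acceptable change" within the group
alarms = np.abs(test - mu) > threshold
print(np.where(alarms)[0])          # indices of days that raise an alarm
```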

437 Least Square-SVM Detector for Wireless BPSK in Multi-Environmental Noise

Authors: J. P. Dubois, Omar M. Abdul-Latif

Abstract:

The Support Vector Machine (SVM) is a statistical learning tool built on the concept of structural risk minimization (SRM). In this paper, SVM is applied to signal detection in communication systems in the presence of channel noise in various environments, in the form of Rayleigh fading, additive white Gaussian background noise (AWGN), and interference noise generalized as additive colour Gaussian noise (ACGN). The structure and performance of the SVM in terms of the bit error rate (BER) metric are derived and simulated for these advanced stochastic noise models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of the SVM is then compared to a conventional optimal model-based detector for binary signaling driven by binary phase shift keying (BPSK) modulation. We show that the SVM performance is superior to that of conventional matched filter-, innovation filter-, and Wiener filter-driven detectors, even in the presence of random Doppler carrier deviation, especially for low SNR (signal-to-noise ratio) ranges; for large SNR, the performance of the SVM was similar to that of the classical detectors. However, the convergence between SVM and maximum likelihood detection occurred at a higher SNR as the noise environment became more hostile.

Keywords: Colour noise, Doppler shift, innovation filter, least square-support vector machine, matched filter, Rayleigh fading, Wiener filter.
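The detection setup can be sketched for the simplest case, AWGN only, without the Rayleigh fading, coloured noise, or Doppler deviation studied in the paper. In this reduced setting the SVM should roughly match the optimal sign (matched filter) detector.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def bpsk_awgn(n, snr_db):
    bits = rng.integers(0, 2, n)
    s = 2.0 * bits - 1.0                        # BPSK mapping: 0 -> -1, 1 -> +1
    sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))  # noise std for Eb/N0 at Eb = 1
    r = s + sigma * rng.normal(size=n)
    return r.reshape(-1, 1), bits

X_train, y_train = bpsk_awgn(2000, snr_db=3)
X_test, y_test = bpsk_awgn(20000, snr_db=3)

svm = SVC(kernel="rbf").fit(X_train, y_train)
ber_svm = np.mean(svm.predict(X_test) != y_test)
ber_sign = np.mean((X_test.ravel() > 0).astype(int) != y_test)  # sign detector
print(ber_svm, ber_sign)
```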

436 Multi-Level Air Quality Classification in China Using Information Gain and Support Vector Machine

Authors: Bingchun Liu, Pei-Chann Chang, Natasha Huang, Dun Li

Abstract:

Machine learning and data mining are two important tools for extracting useful information and knowledge from large datasets. In machine learning, classification is a widely used technique to predict qualitative variables and is generally preferred over regression from an operational point of view. Due to the enormous increase in air pollution in various countries, especially China, air quality classification has become one of the most important topics in air quality research and modelling. This study introduces a hybrid classification model based on information theory and the Support Vector Machine (SVM), using the air quality data of four cities in China (Beijing, Guangzhou, Shanghai and Tianjin) from Jan 1, 2014 to April 30, 2016. China's Ministry of Environmental Protection classifies daily air quality into six levels (Serious Pollution, Severe Pollution, Moderate Pollution, Light Pollution, Good and Excellent) based on the respective Air Quality Index (AQI) values. Using information theory, the information gain (IG) is calculated and feature selection is done for both categorical features and continuous numeric features. The SVM machine learning algorithm is then applied to the selected features with cross-validation. The final evaluation reveals that the IG and SVM hybrid model performs better than SVM (alone), Artificial Neural Network (ANN) and K-Nearest Neighbours (KNN) models in terms of accuracy as well as complexity.

Keywords: Machine learning, air quality classification, air quality index, information gain, support vector machine, cross-validation.
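The IG-then-SVM pipeline can be sketched with scikit-learn, using mutual information as the information gain measure on a synthetic stand-in for the pollutant features; the feature count and retention level are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for daily pollutant readings (PM2.5, SO2, NO2, ...)
X, y = make_classification(n_samples=600, n_features=12, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

ig = mutual_info_classif(X, y, random_state=0)  # information gain per feature
top = np.argsort(ig)[-5:]                       # keep the 5 most informative
print(cross_val_score(SVC(), X[:, top], y, cv=5).mean())
```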

435 Information Retrieval: A Comparative Study of Textual Indexing Using an Object-Oriented Database (db4o) and the Inverted File

Authors: Mohammed Erritali

Abstract:

The centuries-long growth in the volume of text data, such as books and articles in libraries, has made it necessary to establish effective mechanisms to locate documents. Early techniques such as abstraction, indexing and the use of classification categories marked the birth of a new field of research called Information Retrieval. Information Retrieval (IR) can be defined as the task of defining models and systems whose purpose is to facilitate access to a set of documents in electronic form (a corpus), allowing a user to find those relevant to him, that is to say, the content that matches the user's information needs. Most information retrieval models use a specific data structure to index a corpus, called an "inverted file" or "reverse index". The inverted file collects information on all terms over the corpus documents, specifying the identifiers of the documents that contain each term, the frequency of each term in the documents of the corpus, the positions of the occurrences of the term, and so on. In this paper, we use an object-oriented database (db4o) instead of the inverted file; that is to say, instead of searching for a term in the inverted file, we search for it in the db4o database. The purpose of this work is a comparative study to see whether object-oriented databases can compete with the inverted index in terms of access speed and resource consumption, using a large volume of data.

Keywords: Information retrieval, indexation, object-oriented database (db4o), inverted file.
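The data structure itself is compact. A minimal positional inverted index in Python, recording for each term the documents and positions where it occurs (term frequency is recoverable as the length of each position list):

```python
from collections import defaultdict

def build_inverted_index(corpus):
    """term -> {doc_id: [positions]}; len(positions) gives the term frequency."""
    index = defaultdict(dict)
    for doc_id, text in corpus.items():
        for pos, term in enumerate(text.lower().split()):
            index[term].setdefault(doc_id, []).append(pos)
    return index

corpus = {1: "information retrieval with inverted files",
          2: "object databases versus inverted files for retrieval"}
index = build_inverted_index(corpus)
print(index["retrieval"])  # {1: [1], 2: [6]}
```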

434 JEWEL: A Cosmological Model Due to the Geometrical Displacement of Galactic Object Like Black, White and Worm Holes

Authors: Francesco Pia

Abstract:

Stellar objects such as black, white and worm holes can be the subject of speculative reasoning if represented in a simplified, geometric form that allows them to be moved, and the cosmological model is one of the most important subjects for such speculation, which can then open the way to aspects that are practical rather than strictly speculative in the Universe as we represent it. In this work, thanks to the hypothesis of a very large number of black, white and worm holes present in our Universe, we imagine that they can be moved: they are aligned on a plane and redistributed, and the boundaries of this plane are then ideally joined, giving rise to a sphere along which the stellar objects under examination are radially distributed. Through geometrical displacements of these stellar objects, chosen so that none of them loses its functionality in the region in which it is located, the speculative process ends by highlighting a spherical layer that allows a flow from the outside to the inside of this spherical shell, relating it to other external and internal spherical layers. This aspect seems useful for describing the universe we live in, for example inside one of the spherical shells just described. The name "Jewel" was chosen because, at the end of the steps of the speculative process presented in this work, the cosmological model tends to be "luminous". This cosmological model includes, for each internal part of a generic layer, numerous different moments of our universe, thanks to an eternal inward flow. There are many aspects to explore; one of these is the connection between the outermost and innermost parts of the spherical layers.

Keywords: Black hole, cosmological model, cosmology, white hole.
