Search results for: poisson random measures
Paper Count: 1525


1165 Effect of Neighborhood Size on Negative Weights in Punctual Kriging Based Image Restoration

Authors: Asmatullah Chaudhry, Anwar M. Mirza

Abstract:

We present a general comparison of punctual kriging based image restoration for different neighbourhood sizes. The formulation of the technique under consideration is based on punctual kriging and fuzzy concepts for image restoration in the spatial domain. Three different neighbourhood windows are considered to estimate the semivariance at different lags, in order to study their effect on the reduction of the negative weights that arise in punctual kriging and, consequently, on the restoration of degraded images. Our results show that the effect of neighbourhood sizes larger than 5x5 on the reduction of negative weights is insignificant. In addition, image quality measures, such as structure similarity indices, peak signal-to-noise ratios and the new variogram-based quality measures, show that a 3x3 window size gives better performance compared with larger window sizes.

Keywords: Image restoration, punctual kriging, semi-variance, structure similarity index, negative weights in punctual kriging.
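
As a rough illustration of the semivariance estimation step described above, the classical (Matheron) empirical semivariogram over a square pixel neighbourhood can be sketched as follows; the window contents, window size and lag handling are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def empirical_semivariance(window: np.ndarray, max_lag: int):
    """Classical (Matheron) semivariance estimate for integer lags
    along rows and columns of a square pixel neighbourhood."""
    gamma = {}
    for h in range(1, max_lag + 1):
        # pair pixels separated by lag h horizontally and vertically
        dh = window[:, h:] - window[:, :-h]
        dv = window[h:, :] - window[:-h, :]
        diffs = np.concatenate([dh.ravel(), dv.ravel()])
        gamma[h] = 0.5 * np.mean(diffs ** 2)
    return gamma

# Example: semivariance at lags 1..2 inside a 5x5 neighbourhood
rng = np.random.default_rng(0)
patch = rng.normal(128, 10, size=(5, 5))   # synthetic grey-level patch
print(empirical_semivariance(patch, max_lag=2))
```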

1164 DQ Analysis of 3D Natural Convection in an Inclined Cavity Using a Velocity-Vorticity Formulation

Authors: D. C. Lo, S. S. Leu

Abstract:

In this paper, the differential quadrature method is applied to simulate natural convection in an inclined cubic cavity using the velocity-vorticity formulation. The numerical capability of the present algorithm is demonstrated by application to natural convection in an inclined cubic cavity. The velocity Poisson equations, the vorticity transport equations and the energy equation are all solved as a coupled system of equations for the seven field variables consisting of three velocities, three vorticities and temperature. The coupled equations are solved simultaneously by imposing the vorticity definition at the boundary, without requiring the explicit specification of vorticity boundary conditions. Test results obtained for an inclined cubic cavity at different angles of inclination for Rayleigh numbers of 10^3, 10^4, 10^5 and 10^6 indicate that the present coupled solution algorithm can reproduce the benchmark results for the temperature and flow fields. This confirms that the present formulation is capable of solving the coupled Navier-Stokes equations effectively and accurately.

Keywords: Natural convection, velocity-vorticity formulation, differential quadrature (DQ).
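
For readers unfamiliar with the differential quadrature method, the following minimal sketch computes first-derivative DQ weighting coefficients with Shu's general formula on an arbitrary grid; the grid and test function are assumptions for illustration, not the cavity solver used in the paper.

```python
import numpy as np

def dq_weights_1st(x: np.ndarray) -> np.ndarray:
    """First-derivative DQ weighting coefficients on grid x (Shu's formula)."""
    n = len(x)
    # M[i] = prod_{k != i} (x[i] - x[k])
    M = np.array([np.prod(np.delete(x[i] - x, i)) for i in range(n)])
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i, j] = M[i] / ((x[i] - x[j]) * M[j])
        a[i, i] = -a[i].sum()   # each row of the weight matrix sums to zero
    return a

# Check: d/dx sin(x) ~ cos(x) on a Chebyshev-Gauss-Lobatto grid in [0, 1]
x = 0.5 * (1 - np.cos(np.pi * np.arange(11) / 10))
A = dq_weights_1st(x)
print(np.max(np.abs(A @ np.sin(x) - np.cos(x))))   # near machine precision
```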

1163 A Simulation Study of E-Glass Reinforced Polyurethane Footbed and Investigation of Parameters Affecting the Elastic Behaviour of Footbed Material

Authors: Berkay Ergene, Çağın Bolat

Abstract:

In this study, we mainly focus on a simulation study of a composite footbed as a contribution to the shoe industry. E-glass fiber reinforced polyurethane was selected as the footbed material, since polyurethane-based materials are already frequently used for footbeds in shoe manufacturing. Flat, elliptical and rectangular grooved shoe soles were modeled and analyzed separately as TPU, 10% glass fiber reinforced, 30% glass fiber reinforced and 50% glass fiber reinforced materials, according to their properties under three-point bending and compression, to determine the relationship between the model, the material type and the mechanical behaviour of the composite. The ANSYS 14.0 APDL mechanical structural module is utilized in all simulations, and stress and strain distributions are analyzed for the different footbed models and materials. Furthermore, material constants such as Young's modulus, shear modulus, Poisson's ratio and density of the composites were calculated theoretically using the composite rule of mixtures and interpreted from a mechanical point of view.

Keywords: Composite, elastic behaviour, footbed, simulation.
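
The rule-of-mixtures step mentioned above can be illustrated with a short sketch; the constituent properties below are typical handbook values for E-glass and polyurethane (assumptions, not the authors' data), and the fiber fractions are treated as volume fractions for simplicity.

```python
# Hedged sketch of the composite rule of mixtures used to estimate
# material constants; constituent properties are assumed handbook values.
def rule_of_mixtures(vf, Ef, Em, nu_f, nu_m, rho_f, rho_m):
    vm = 1.0 - vf
    E = vf * Ef + vm * Em          # Voigt (upper-bound) longitudinal modulus
    nu = vf * nu_f + vm * nu_m     # Poisson's ratio, linear mixture
    rho = vf * rho_f + vm * rho_m  # density
    G = E / (2.0 * (1.0 + nu))     # shear modulus, isotropic estimate
    return E, G, nu, rho

# E-glass fiber vs. thermoplastic polyurethane matrix (illustrative values)
for vf in (0.10, 0.30, 0.50):
    E, G, nu, rho = rule_of_mixtures(vf, Ef=72e9, Em=0.03e9,
                                     nu_f=0.22, nu_m=0.48,
                                     rho_f=2580.0, rho_m=1200.0)
    print(f"vf={vf:.2f}: E={E/1e9:.2f} GPa, G={G/1e9:.2f} GPa, "
          f"nu={nu:.3f}, rho={rho:.0f} kg/m^3")
```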

1162 Two-Dimensional Analytical Drain Current Model for Multilayered-Gate Material Engineered Trapezoidal Recessed Channel (MLGME-TRC) MOSFET: A Novel Design

Authors: Priyanka Malik, Rishu Chaujar, Mridula Gupta, R. S. Gupta

Abstract:

In this paper, for the first time, a two-dimensional (2D) analytical drain current model for a sub-100 nm multi-layered gate material engineered trapezoidal recessed channel (MLGME-TRC) MOSFET, a novel design, is presented and investigated using the ATLAS and DEVEDIT device simulators, to mitigate the large gate leakage and increased standby power consumption that arise from the continued scaling of SiO2-based gate dielectrics. The two-dimensional (2D) analytical model, based on the solution of Poisson's equation in cylindrical coordinates and utilizing the cylindrical approximation, has been developed to evaluate the surface potential, electric field, drain current, the switching metric I_ON/I_OFF ratio and the transconductance of the proposed design. A good agreement between the model predictions and device simulation results is obtained, verifying the accuracy of the proposed analytical model.

Keywords: ATLAS, DEVEDIT, NJD, MLGME-TRC MOSFET.
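
The paper's model is an analytical solution of Poisson's equation in cylindrical coordinates; as a generic, hedged illustration of the underlying potential problem only, a one-dimensional finite-difference Poisson solve might look like this (all physical values are assumed):

```python
import numpy as np

# Generic 1D Poisson solve d^2(phi)/dx^2 = -rho/eps with Dirichlet ends.
# Purely illustrative; the paper's model is an analytical 2D solution in
# cylindrical coordinates, not this numerical scheme.
n, L = 101, 100e-9                    # grid points, 100 nm domain (assumed)
x = np.linspace(0.0, L, n)
h = x[1] - x[0]
eps = 11.7 * 8.854e-12                # silicon permittivity (assumed)
rho = np.full(n, 1.6e-19 * 1e23)      # uniform charge density (assumed)

A = np.zeros((n, n))
b = -rho * h**2 / eps
A[0, 0] = A[-1, -1] = 1.0
b[0], b[-1] = 0.0, 1.0                # boundary potentials in volts (assumed)
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0

phi = np.linalg.solve(A, b)           # potential profile along the channel
print(phi.max())
```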

1161 A Multiclass BCMP Queueing Modeling and Simulation-Based Road Traffic Flow Analysis

Authors: Jouhra Dad, Mohammed Ouali, Yahia Lebbah

Abstract:

Urban road network traffic has become one of the most studied research topics in the last decades. This is mainly due to the enlargement of cities and the growing number of motor vehicles traveling in road networks. One of the most sensitive problems is to verify whether the network is congestion-free. Another related problem is the automatic reconfiguration of the network, without building new roads, to alleviate congestion. These problems require an accurate model of the traffic to determine the steady state of the system. An alternative is to simulate the traffic to see whether congestions occur, and when and where they occur. One key issue is to find an adequate model for road intersections. Once the model is established, either a large-scale model is built, or the intersection is represented by its performance measures and simulated for analysis. In both cases, it is important to choose a queueing model that adequately represents the road intersection. In this paper, we propose to model the road intersection as a BCMP queueing network, and we compare this analytical model against a simulation model for validation.

Keywords: Queueing theory, transportation systems, BCMP queueing network, performance measures, modeling, simulation.
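
A full multiclass BCMP product-form solution is beyond a short sketch, but the kind of steady-state performance measures extracted for an intersection node can be illustrated with the degenerate single-class M/M/1 special case; the arrival and service rates below are illustrative assumptions.

```python
# Hedged single-class M/M/1 sketch of intersection performance measures;
# the paper's model is a multiclass BCMP network, of which this is only
# a degenerate special case.
def mm1_measures(lam: float, mu: float):
    """Steady-state measures of an M/M/1 queue (requires lam < mu)."""
    rho = lam / mu                    # utilisation
    L = rho / (1.0 - rho)             # mean number of vehicles in system
    W = 1.0 / (mu - lam)              # mean sojourn time
    Lq = rho**2 / (1.0 - rho)         # mean queue length
    Wq = rho / (mu - lam)             # mean waiting time
    return {"rho": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

# Vehicles arriving at 0.4/s, served (crossing) at 0.5/s -- illustrative rates
print(mm1_measures(lam=0.4, mu=0.5))
```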

1160 Semi-Automatic Method to Assist Expert for Association Rules Validation

Authors: Amdouni Hamida, Gammoudi Mohamed Mohsen

Abstract:

In order to help the expert validate association rules extracted from data, several quality measures have been proposed in the literature. We distinguish two categories: objective and subjective measures. The first one depends on a fixed threshold and on the quality of the data from which the rules are extracted. The second one consists in providing the expert with tools to explore and visualize rules during the evaluation step. However, the number of extracted rules to validate remains high, so manually validating the rules is a very hard task. To solve this problem, we propose, in this paper, a semi-automatic method to assist the expert during association rule validation. Our method uses rule-based classification as follows: (i) we transform association rules into classification rules (classifiers); (ii) we use the generated classifiers for data classification; (iii) we visualize association rules with their classification quality to give the expert an overview and to assist him during the validation process.

Keywords: Association rules, Rule-based classification, Classification quality, Validation.
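
A minimal sketch of steps (i)-(ii), assuming toy rules whose consequents are class labels and a highest-confidence matching strategy (the paper does not specify its exact matching rule):

```python
# Association rules whose consequent is a class label are used as
# classifiers; each record is labelled by the highest-confidence rule it
# satisfies. The rules and records below are illustrative assumptions.
rules = [
    (frozenset({"milk", "bread"}), "buyer", 0.92),
    (frozenset({"bread"}), "buyer", 0.70),
    (frozenset({"candles"}), "occasional", 0.65),
]

def classify(record: set, rules, default="unknown"):
    # a rule fires when its antecedent is contained in the record
    matching = [(conf, label) for ante, label, conf in rules if ante <= record]
    return max(matching)[1] if matching else default

print(classify({"milk", "bread", "eggs"}, rules))  # -> 'buyer'
print(classify({"candles"}, rules))                # -> 'occasional'
```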

1159 Selection of Intensity Measure in Probabilistic Seismic Risk Assessment of a Turkish Railway Bridge

Authors: M. F. Yilmaz, B. Ö. Çağlayan

Abstract:

The fragility curve is an effective and commonly used tool to determine the earthquake performance of structural and nonstructural components, and it is also used to characterize the nonlinear behavior of bridges. There are many historical bridges in the Turkish railway network whose earthquake performance needs to be investigated. To derive a fragility curve, intensity measures (IMs) and engineering demand parameters (EDPs) need to be determined, and the relation between IMs and EDPs needs to be derived. In this study, a typical simply supported steel girder riveted railway bridge is studied. Fragility curves of this bridge are derived using a two-parameter lognormal distribution. Time history analyses are performed for 60 selected real earthquake records to determine the relation between IMs and EDPs. Moreover, the efficiency, practicality, and sufficiency of three different IMs are discussed. PGA, Sa(0.2s) and Sa(1s), the IMs most commonly used for fragility curves in the literature, are taken into consideration in terms of efficiency, practicality and sufficiency.

Keywords: Railway bridges, earthquake performance, fragility analyses, selection of intensity measures.
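
A hedged sketch of how a two-parameter lognormal fragility curve is commonly derived from IM-EDP pairs by cloud regression; the 60 synthetic records and the capacity value below are stand-ins, not the bridge data used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
im = rng.uniform(0.05, 1.0, 60)                       # e.g. PGA of 60 records (g)
edp = 0.02 * im**0.9 * rng.lognormal(0.0, 0.35, 60)   # synthetic demand

# Power-law demand model: ln(EDP) = ln a + b ln(IM) + eps
b, ln_a = np.polyfit(np.log(im), np.log(edp), 1)
resid = np.log(edp) - (ln_a + b * np.log(im))
beta = resid.std(ddof=2)                   # record-to-record dispersion

def fragility(im_grid, capacity):
    """P(EDP >= capacity | IM) under the lognormal demand model."""
    ln_demand_median = ln_a + b * np.log(im_grid)
    return stats.norm.cdf((ln_demand_median - np.log(capacity)) / beta)

print(fragility(np.array([0.2, 0.5, 1.0]), capacity=0.02))
```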

1158 Spreading Dynamics of a Viral Infection in a Complex Network

Authors: Khemanand Moheeput, Smita S. D. Goorah, Satish K. Ramchurn

Abstract:

We report a computational study of the spreading dynamics of a viral infection in a complex (scale-free) network. The final epidemic size distribution (FESD) was found to be unimodal or bimodal depending on the value of the basic reproductive number R0. The FESDs occurred on time-scales long enough for intermediate-time epidemic size distributions (IESDs) to be important for control measures. The usefulness of R0 for deciding on the timeliness and intensity of control measures was found to be limited by the multimodal nature of the IESDs and by its inability to inform on the speed at which the infection spreads through the population. A reduction of the transmission probability at the hubs of the scale-free network decreased the occurrence of the larger-sized epidemic events of the multimodal distributions. For effective epidemic control, an early reduction in transmission at the index cell and its neighbors was essential.

Keywords: Basic reproductive number, epidemic control, scale-free network, viral infection.
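
A minimal simulation sketch in the spirit of the study, assuming a Barabási-Albert scale-free graph, a discrete-time SIR-type process, and an optional reduction of the transmission probability at high-degree hubs; all parameters are illustrative, not those of the paper.

```python
import random
import networkx as nx

def run_sir(g, p_base=0.2, hub_factor=1.0, hub_deg=20, seed=0):
    """Final epidemic size of a discrete-time SIR spread started at the
    largest hub; hub_factor < 1 models reduced transmission at hubs."""
    random.seed(seed)
    status = {n: "S" for n in g}
    index_case = max(g.degree, key=lambda kv: kv[1])[0]
    status[index_case] = "I"
    infected, total = {index_case}, 1
    while infected:
        new = set()
        for u in infected:
            p = p_base * (hub_factor if g.degree[u] >= hub_deg else 1.0)
            for v in g[u]:
                if status[v] == "S" and random.random() < p:
                    status[v] = "I"
                    new.add(v)
            status[u] = "R"                 # recover after one time step
        total += len(new)
        infected = new
    return total

g = nx.barabasi_albert_graph(2000, 2, seed=1)
print(run_sir(g))                           # uncontrolled spread
print(run_sir(g, hub_factor=0.25))          # transmission reduced at hubs
```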

1157 Using Axiomatic Design for Developing a Framework of Manufacturing Cloud Service Composition in the Equilibrium State

Authors: Ehsan Vaziri Goodarzi, Mahmood Houshmand, Omid Fatahi Valilai, Vahidreza Ghezavati, Shahrooz Bamdad

Abstract:

One important paradigm of Industry 4.0 is Cloud Manufacturing (CM). In CM everything is considered as a service; therefore, the CM platform should consider all service providers' capabilities and try to integrate services in an equilibrium state. This research develops a framework for implementing manufacturing cloud service composition in the equilibrium state. The developed framework uses the well-known tools of axiomatic design (AD) and game theory. The research investigates the factors that form an equilibrium for the measures of manufacturing cloud service composition. Functional requirements (FRs) represent the measures of manufacturing cloud service composition in the equilibrium state. These FRs are satisfied by related design parameters (DPs). The FRs and DPs are defined by considering game theory, QoS, consumer needs, and parallel and cooperative services. Ultimately, four FRs and DPs make up the framework. To ensure the validity of the framework, the authors have used AD's first axiom, the independence axiom.

Keywords: Axiomatic design, manufacturing cloud service composition, cloud manufacturing, Industry 4.0.

1156 Mining Image Features in an Automatic Two-Dimensional Shape Recognition System

Authors: R. A. Salam, M. A. Rodrigues

Abstract:

The number of features required to represent an image can be very large, and using all available features to recognize objects can suffer from the curse of dimensionality. Feature selection and extraction is the pre-processing step of image mining. The main issues in analyzing images are the effective identification of features and their extraction. The mining problem addressed here is the grouping of features for different shapes. Experiments have been conducted using the shape outline as the feature. Shape outline readings are put through a normalization and dimensionality reduction process using an eigenvector-based method to produce a new set of readings. After this pre-processing step, the data are grouped by shape. Through statistical analysis of these readings together with peak measures, a robust classification and recognition process is achieved. Tests showed that the suggested methods are able to automatically recognize objects through their shapes. Finally, experiments also demonstrate the system's invariance to rotation, translation, scale, reflection and a small degree of distortion.

Keywords: Image mining, feature selection, shape recognition, peak measures.
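
The normalization and eigenvector-based dimensionality reduction step can be sketched as principal component analysis via the SVD; the outline readings below are synthetic stand-ins, and the number of retained components is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
outlines = rng.normal(size=(40, 128))     # 40 shapes x 128 outline readings

# normalize each outline to zero mean and unit scale
X = outlines - outlines.mean(axis=1, keepdims=True)
X /= np.linalg.norm(X, axis=1, keepdims=True)

# eigenvector-based reduction via SVD of the centred data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 8                                     # retained components (assumed)
reduced = Xc @ Vt[:k].T                   # new set of readings, 40 x 8
print(reduced.shape)
```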

1155 Exploring the Spatial Characteristics of Mortality Map: A Statistical Area Perspective

Authors: Jung-Hong Hong, Jing-Cen Yang, Cai-Yu Ou

Abstract:

The analysis of geographic inequality relies heavily on the use of location-enabled statistical data and quantitative measures to present the spatial patterns of the selected phenomena and analyze their differences. To protect the privacy of individual instances and link them to administrative units, point-based datasets are spatially aggregated to area-based statistical datasets, where only the overall status for the selected level of spatial units is used for decision making. The partition of the spatial units thus has a dominant influence on the analyzed results, a problem well known as the Modifiable Areal Unit Problem (MAUP). A new spatial reference framework, the Taiwan Geographical Statistical Classification (TGSC), was recently introduced in Taiwan, based on spatial partition principles that aim at homogeneous population and household counts. Compared to the traditional township units, TGSC provides additional levels of spatial units with finer granularity for presenting spatial phenomena and enables domain experts to select the appropriate dissemination level for publishing statistical data. This paper compares the results of using TGSC and township units, respectively, on mortality data and examines the spatial characteristics of the outcomes. For the mortality data of Taitung County between January 1st, 2008 and December 31st, 2010, the all-cause age-standardized death rate (ASDR) ranges from 571 to 1,757 per 100,000 persons at the township level, whereas the 2nd dissemination area level (TGSC) shows greater variation, ranging from 0 to 2,222 per 100,000. The finer granularity of TGSC spatial units clearly provides better outcomes for identifying and evaluating geographic inequality, and these can be further analyzed with statistical measures from other perspectives (e.g., population, area, environment). The management and analysis of the statistical data referring to the TGSC in this research is strongly supported by the use of Geographic Information System (GIS) technology. An integrated workflow is developed that consists of processing death certificates, geocoding street addresses, quality assurance of the geocoded results, automatic calculation of statistical measures, standardized encoding of measures and geo-visualization of statistical outcomes. This paper also introduces a set of auxiliary measures from a geographic distribution perspective to further examine the hidden spatial characteristics of mortality data and justify the analyzed results. With a common statistical area framework like TGSC, the preliminary results demonstrate promising potential for developing a web-based statistical service that can effectively access domain statistical data and present the analyzed outcomes in meaningful ways, helping to avoid wrong decision making.

Keywords: Mortality map, spatial patterns, statistical area, variation.
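
As background, the directly age-standardized death rate mapped in studies like this is computed as a weighted sum of age-specific rates; the counts and standard-population weights below are illustrative assumptions.

```python
# Direct age standardization: ASDR = 100,000 * sum_i w_i * (deaths_i / pop_i),
# where w_i are standard population weights. All numbers are illustrative.
deaths = [2, 5, 12, 40]                   # deaths per age group in the area
pop = [1200, 1500, 1100, 600]             # area population per age group
std_weights = [0.30, 0.30, 0.25, 0.15]    # standard population weights (sum=1)

asdr = 100_000 * sum(w * d / n for d, n, w in zip(deaths, pop, std_weights))
print(f"ASDR = {asdr:.0f} per 100,000")
```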

1154 Land Use/Land Cover Mapping Using Landsat 8 and Sentinel-2 in a Mediterranean Landscape

Authors: M. Vogiatzis, K. Perakis

Abstract:

Spatially explicit and up-to-date land use/land cover information is fundamental for spatial planning, land management, sustainable development, and sound decision-making. In the last decade, many satellite-derived land cover products at different spatial, spectral, and temporal resolutions have been developed, such as the European Copernicus Land Cover product. However, more efficient and detailed land use/land cover information is required at the regional or local scale. A typical Mediterranean basin with a complex landscape comprising various forest types, crops, artificial surfaces, and wetlands was selected to test and develop our approach. In this study, we investigate the improvement of the Copernicus Land Cover product (CLC2018) using Landsat 8 and Sentinel-2 pixel-based classification based on all available existing geospatial data (forest maps, LPIS, Natura 2000 habitats, cadastral parcels, etc.). We examined and compared the performance of the Random Forest classifier for land use/land cover mapping. In total, 10 land use/land cover categories were recognized in Landsat 8 and 11 in Sentinel-2A. A comparison of the overall classification accuracies for 2018 shows that the Landsat 8 classification accuracy was slightly higher than that of Sentinel-2A (82.99% vs. 80.30%). We conclude that the main land use/land cover types of CLC2018, even within a heterogeneous area, can be successfully mapped and updated according to the CLC nomenclature. Future research should be oriented toward integrating spatiotemporal information from seasonal bands and spectral indices in the classification process.

Keywords: Land use/land cover, Random Forest, Landsat 8 OLI, Sentinel-2A MSI, CORINE Land Cover.
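
A hedged sketch of pixel-based Random Forest classification of multispectral bands; the band values, labels and hyperparameters are synthetic stand-ins for the Landsat 8 / Sentinel-2 training data assembled from the ancillary layers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_pixels, n_bands = 5000, 10
X = rng.normal(size=(n_pixels, n_bands))     # band reflectances (synthetic)
# synthetic LULC labels loosely driven by two "bands" -> 10 classes
y = np.digitize(X[:, 0] + 0.5 * X[:, 1], np.linspace(-2, 2, 9))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"overall accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2%}")
```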

1153 Numerical Applications of Tikhonov Regularization for the Fourier Multiplier Operators

Authors: Fethi Soltani, Adel Almarashi, Idir Mechai

Abstract:

Tikhonov regularization and reproducing kernels are among the most popular approaches to solving ill-posed problems in computational mathematics and its applications. Fourier multiplier operators are an essential tool for extending known linear transforms in Euclidean Fourier analysis, such as the Weierstrass transform, the Poisson integral, the Hilbert transform, the Riesz transforms, the Bochner-Riesz mean operators, the partial Fourier integral, the Riesz potential and the Bessel potential. Using the theory of reproducing kernels, we construct a simple and efficient representation for a class of Fourier multiplier operators Tm on the Paley-Wiener space Hh. In addition, we give an error estimate formula for the approximation and obtain convergence results as the parameters and the independent variables approach zero. Furthermore, using numerical quadrature rules to compute single and multiple integrals, we give numerical examples and write explicitly the extremal function and the corresponding Fourier multiplier operators.

Keywords: Fourier multiplier operators, Gauss-Kronrod method of integration, Paley-Wiener space, Tikhonov regularization.
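
The paper treats Tikhonov regularization for Fourier multiplier operators on Paley-Wiener spaces; the discrete least-squares sketch below conveys only the generic regularization idea, x_lambda = argmin ||Ax - b||^2 + lambda ||x||^2, on an assumed ill-conditioned system.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.vander(np.linspace(0, 1, n), n)      # severely ill-conditioned matrix
x_true = np.sin(np.linspace(0, 3, n))
b = A @ x_true + 1e-6 * rng.normal(size=n)  # noisy data

# Tikhonov-regularized normal equations: (A^T A + lambda I) x = A^T b
lam = 1e-6
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
print(np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))
```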

1152 Complexity Analysis of Some Known Graph Coloring Instances

Authors: Jeffrey L. Duffany

Abstract:

Graph coloring is an important problem in computer science, and many algorithms are known for obtaining reasonably good solutions in polynomial time. One method of comparing different algorithms is to test them on a set of standard graphs where the optimal solution is already known. This investigation analyzes a set of 50 well-known graph coloring instances according to a set of complexity measures. These instances come from a variety of sources, some representing actual applications of graph coloring (register allocation) and others (Mycielski and Leighton graphs) theoretically designed to be difficult to solve. The size of the graphs ranged from a low of 11 variables to a high of 864 variables. The method used to solve the coloring problem was the square of the adjacency (i.e., correlation) matrix. The results show that the most difficult graphs to solve were the Leighton and the queen graphs. Complexity measures such as density, mobility, deviation from uniform color class size and number of block diagonal zeros are calculated for each graph. The results showed that the most difficult problems have low mobility (in the range of 0.2-0.5) and relatively little deviation from uniform color class size.

Keywords: Graph coloring, complexity, algorithm.
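
Two of the complexity measures discussed (density and deviation from uniform color class size) can be sketched as follows, using a greedy coloring as a stand-in; the paper's own solver, based on squaring the adjacency matrix, is not reproduced here.

```python
import networkx as nx
import numpy as np

g = nx.mycielski_graph(5)                 # a Mycielski graph instance
density = nx.density(g)                   # edge density in [0, 1]

# color class sizes from a greedy coloring (stand-in solver)
coloring = nx.greedy_color(g, strategy="largest_first")
classes = np.bincount(list(coloring.values()))
deviation = classes.std() / classes.mean()   # 0 means perfectly uniform

print(f"density={density:.3f}, classes={classes}, deviation={deviation:.3f}")
```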

1151 Assessing Efficiency Trends in the Indian Sugar Industry

Authors: S. P. Singh

Abstract:

This paper measures the technical and scale efficiencies of 40 Indian sugar companies for the period from 2004-05 to 2013-14. The efficiencies are estimated through input-oriented DEA models using one output variable, value of output (VOP), and five input variables: capital cost (CA), employee cost (EMP), raw material (RW), energy & fuel (E&F) and other manufacturing expenses (OME). The sugar companies are classified into integrated and non-integrated categories to establish which achieves the higher level of efficiency. Sources of inefficiency in the industry are identified by decomposing the overall technical efficiency (TE) into pure technical efficiency (PTE) and scale efficiency (SE). The paper also estimates input-reduction targets for relatively inefficient companies and suggests measures to improve their efficiency level. The findings reveal that the TE does not evince any trend; rather, it fluctuates across years, largely due to the erratic and cyclical pattern of sugar production. Further, technical inefficiency in the industry seems to be driven more by managerial inefficiency than by scale inefficiency, which implies that TE can be improved through better conversion of inputs into output.

Keywords: Sugar industry, companies, technical efficiency, data envelopment analysis, targets.
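
A hedged sketch of the input-oriented, constant-returns-to-scale (CCR) DEA model with one output and five inputs, mirroring the paper's variable structure; the company data below are synthetic stand-ins.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR technical efficiency of unit j0.
    X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]           # minimise theta; lambdas free cost
    # inputs:  sum_j lam_j x_ij <= theta * x_i,j0
    A_in = np.c_[-X[j0], X.T]
    # outputs: sum_j lam_j y_rj >= y_r,j0
    A_out = np.c_[np.zeros(s), -Y.T]
    b = np.r_[np.zeros(m), -Y[j0]]
    res = linprog(c, A_ub=np.r_[A_in, A_out], b_ub=b,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                        # theta in (0, 1]; 1 = efficient

rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(8, 5))       # 8 firms, 5 inputs (CA..OME)
Y = rng.uniform(1, 10, size=(8, 1))       # 1 output (VOP)
print([round(ccr_efficiency(X, Y, j), 3) for j in range(8)])
```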

1150 A Black-box Approach for Response Quality Evaluation of Conversational Agent Systems

Authors: Ong Sing Goh, C. Ardil, Wilson Wong, Chun Che Fung

Abstract:

The evaluation of conversational agent or chatterbot question answering systems is a major research area that needs much attention. Before the rise of domain-oriented conversational agents based on natural language understanding and reasoning, evaluation was never a problem, as information retrieval-based metrics were readily available for use. However, when chatterbots began to become more domain-specific, evaluation became a real issue. This is especially true when understanding and reasoning are required to cater for a wider variety of questions and, at the same time, to achieve high-quality responses. This paper discusses the inappropriateness of the existing measures for response quality evaluation, and the call for new standard measures and related considerations is brought forward. As a short-term solution for evaluating the response quality of conversational agents, and to demonstrate the challenges in evaluating systems of different nature, this research proposes a black-box approach using observation, a classification scheme and a scoring mechanism to assess and rank three example systems: AnswerBus, START and AINI.

Keywords: Evaluation, conversational agents, response quality, chatterbots.

1149 Introducing a Platform for Encryption Algorithms

Authors: Ahmad Habibizad Navin, Yasaman Hashemi, Omid Mirmotahari

Abstract:

In this paper, we introduce a novel platform encryption method, which modifies its keys and random number generators step by step during the encryption algorithm. Owing to the complexity of the proposed algorithm, it is safer than any other method.

Keywords: Decryption, encryption, algorithm, security.

1148 Main Cause of Children's Deaths in Indigenous Wayuu Community from Department of La Guajira: A Research Developed through Data Mining Use

Authors: Isaura Esther Solano Núñez, David Suarez

Abstract:

The main purpose of this research is to discover what causes death in children of the Wayuu community and to analyze those results in depth in order to take corrective measures to properly control infant mortality. We consider it important to determine the reasons producing early death in this specific population, since they are the most vulnerable to high-risk environmental conditions. In this way, the government, through the competent authorities, may develop prevention policies and the right measures to avoid an increase in these deaths. The methodology used to develop this investigation is data mining, which consists of examining large amounts of data to produce new and valuable information. Through this technique it has been possible to determine that this child population is dying mostly from malnutrition. In short, this technique has been very useful for developing this study; it has allowed us to transform large amounts of information into a conclusive and important statement, which has made it easier to take appropriate steps to resolve a particular situation.

Keywords: Malnutrition, data mining, analytical, descriptive, population, Wayuu, indigenous.

1147 Individual Differences and Paired Learning in Virtual Environments

Authors: Patricia M. Boechler, Heather M. Gautreau

Abstract:

In this research study, postsecondary students completed an information learning task in an avatar-based 3D virtual learning environment. Three factors were of interest in relation to learning: 1) the influence of collaborative vs. independent conditions, 2) the influence of the spatial arrangement of the virtual environment (linear, random and clustered), and 3) the relationship of individual differences such as spatial skill, general computer experience and video game experience to learning. Students completed pretest measures of prior computer experience and prior spatial skill. Following the pretest administration, students were instructed to move through the virtual environment and study all the material within 10 information stations. In the collaborative condition, students proceeded in randomly assigned pairs, while in the independent condition they proceeded alone. After this learning phase, all students individually completed a multiple-choice test to determine information retention. The overall results indicated that students in pairs did not perform any better or worse than independent students. As far as individual differences are concerned, only spatial ability predicted the performance of students; general computer experience and video game experience did not. Taking a closer look at the pairs and spatial ability, comparisons were made between pairs matched on high spatial ability, pairs matched on low spatial ability, and pairs mismatched on spatial ability. The results showed that both high-matched pairs and mismatched pairs outperformed low-matched pairs. That is, a pair with even one individual of strong spatial ability would perform better than a pair with only low spatial ability individuals. This suggests that, in virtual environments, the specific individuals that are paired together matter for performance outcomes. The paper also includes a discussion of trends within the data that have implications for virtual environment education.

Keywords: Avatar-based, virtual environment, paired learning, individual differences.

1146 Large Deviations for Lacunary Systems

Authors: Bainian Li, Kongsheng Zhang

Abstract:

Let {Xi} be a lacunary system. We establish a large deviations inequality for lacunary systems. Furthermore, we obtain the Marcinkiewicz law of large numbers for sequences of dependent random variables.

Keywords: Lacunary system, large deviations, locally generalized Gaussian, strong law of large numbers.

1145 Proposing Enterprise Wide Information Systems Business Performance Model

Authors: Vineet Kansal

Abstract:

Enterprise Wide Information Systems (EWIS) implementation involves the entire business and requires changes throughout the firm. Because of the scope, complexity and continuous nature of ERP, the project-based approach to managing the implementation process has resulted in failure rates of between 60% and 80%. In recent years ERP systems have received much attention. The organizational relevance and risk of ERP projects make it important for organizations to focus on ways to make ERP implementation successful. Once these systems are in place, however, their performance depends on the identified macro variables, viz. 'Business Process', 'Decision Making' and 'Individual/Group Working'. A questionnaire was designed and administered, and the responses from 92 organizations were compiled. The relationship of these variables with EWIS performance is analyzed using inferential statistical measurements. The study helps to understand the performance of the model presented and suggests ways of keeping away from calamities, thereby giving the necessary competitive edge. Whenever a discrepancy is identified during performance appraisal, care has to be taken to draft the necessary preventive measures. If all these measures are taken care of, then EWIS performance will definitely deliver the results.

Keywords: Enterprise systems, performance, technology.

1144 A Methodology for the Synthesis of Multi-Processors

Authors: Hamid Yasinian

Abstract:

Random epistemologies and hash tables have garnered minimal interest from both security experts and experts in the last several years. In fact, few information theorists would disagree with the evaluation of expert systems. In our research, we discover how flip-flop gates can be applied to the study of superpages. Though such a hypothesis at first glance seems perverse, it is derived from known results.

Keywords: Synthesis, multi-processors, interactive model, Moore's Law.

1143 Personal Factors and Career Adaptability in a Call Centre Work Environment: The Mediating Effects of Professional Efficacy

Authors: Nisha Harry

Abstract:

The study discussed in this article sought to assess whether a sense of professional efficacy mediates the relationship between personal factors and career adaptability. A quantitative cross-sectional survey approach was followed. A non-probability sample (N = 409), consisting predominantly of early-career, permanently employed black females working in call centres in Africa, participated in this study. In order to assess personal factors, the participants completed sense of meaningfulness and emotional intelligence measures. Measures of professional efficacy and career adaptability were also completed. The results of the mediational analysis revealed that professional efficacy significantly mediates the meaningfulness (sense of coherence)-career adaptability relationship, but not the emotional intelligence-career adaptability relationship. Call centre agents with professional efficacy are likely to be more work-engaged as a result of their sense of meaningfulness and emotional intelligence.

Keywords: Call centre, professional efficacy, career adaptability, emotional intelligence.

1142 Talent Management through Integration of Talent Value Chain and Human Capital Analytics Approaches

Authors: Wuttigrai Ngamsirijit

Abstract:

Talent management in today's modern organizations has become data-driven due to the demand for objective human resource decision making and the development of analytics technologies. HR managers have faced several obstacles in exploiting data and information to reach effective talent management decisions. These include process-based data and records; insufficient human capital-related measures and metrics; a lack of capability for strategic data modeling; and the time consumed in aggregating numbers before decisions can be made. This paper proposes a framework for talent management through the integration of talent value chain and human capital analytics approaches. It encompasses key data, measures, and metrics for strategic talent management decisions along the organizational and talent value chain. Moreover, specific predictive and prescriptive models incorporating these data and information are recommended to help managers understand the state of talent, the gaps in managing talent and the organization, and the ways to develop optimized talent strategies.

Keywords: Decision making, human capital analytics, talent management, talent value chain.

1141 The Forensic Swing of Things: The Current Legal and Technical Challenges of IoT Forensics

Authors: Pantaleon Lutta, Mohamed Sedky, Mohamed Hassan

Abstract:

The inability of organizations to put in place management control measures for Internet of Things (IoT) complexities remains a risk concern, and policy makers have been left to scramble to find measures to combat these security and privacy concerns. IoT forensics is a cumbersome process, as there is no standardization of IoT products and little or no historical data are stored on the devices. This paper highlights why IoT forensics is a unique adventure and brings out the legal challenges encountered in the investigation process. A quadrant model is presented to study the conflicting aspects of IoT forensics. The model analyses the effectiveness of the forensic investigation process against the admissibility and integrity of the evidence, taking into account user privacy and the providers' compliance with laws and regulations. Our analysis concludes that a semi-automated forensic process using machine learning could eliminate the human factor from the profiling and surveillance processes and hence resolve the issues of data protection (privacy and confidentiality).

Keywords: Cloud forensics, data protection laws, GDPR, IoT forensics, machine learning.

1140 The Impact of Government Expenditure on Economic Growth: A Study of Asian Countries

Authors: K. P. K. S. Lahirushan, W. G. V. Gunasekara

Abstract:

The main purpose of this study is to identify the impact of government expenditure on economic growth in Asian countries. Consequently, the main objective is to analyze whether government expenditure causes economic growth in Asian countries, and vice versa, and then to scrutinize whether a long-run equilibrium relationship exists between them. The study is based entirely on secondary data. The methodology is quantitative and includes the econometric techniques of cointegration, panel fixed effects modeling and Granger causality, applied to panel data for the Asian countries Singapore, Malaysia, Thailand, South Korea, Japan, China, Sri Lanka, India and Bhutan, with 44 observations per country, totalling 396 observations from 1970 to 2013. The model used is the random effects panel OLS model. With this methodology, the study produced a fascinating outcome. First, the empirical findings exhibit a significant positive impact of government expenditure on Gross Domestic Product in the Asian region. Second, government expenditure and economic growth show a long-run relationship in Asian countries. In conclusion, there is unidirectional causality both from economic growth to government expenditure and from government expenditure to economic growth in Asian countries. Hence the study validates that the findings are in line with Keynesian theory as well as Wagner's law. Consequently, it can be concluded that government plays a vital role in the economic growth of Asian countries. However, if government expenditure is not aligned with the economy's needs, it might affect the economy considerably in a negative way, so that society bears the costs.

Keywords: Asian countries, government expenditure, Keynesian theory, Wagner's law, random effects panel OLS model.
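
A hedged single-country sketch of the Granger causality step using statsmodels; the simulated series and lag order are assumptions, and the paper's actual analysis is run on panel data with cointegration tests and a random effects model.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 44                                    # 44 annual observations per country
gdp = np.cumsum(rng.normal(0.03, 0.02, n))            # simulated log GDP
gov = 0.5 * np.r_[0.0, gdp[:-1]] + rng.normal(0.0, 0.02, n)  # lags GDP

# Does GDP growth Granger-cause expenditure growth? (column 2 -> column 1)
data = np.column_stack([np.diff(gov), np.diff(gdp)])
res = grangercausalitytests(data, maxlag=2)           # prints F tests per lag
f_stat, p_value, _, _ = res[1][0]["ssr_ftest"]
print(f"lag-1 F = {f_stat:.2f}, p = {p_value:.4f}")
```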

1139 Towards Innovation Performance among University Staff

Authors: C. S. Quah, S. P. L. Sim

Abstract:

This study examined how individuals in their respective teams contributed to innovation performance, and how they defined the term innovation from their own respective viewpoints. It also identified the factors that motivated university staff to contribute to innovation products. In addition, it examined whether there is a significant relationship between professional training level and length of service among university staff with respect to innovation, and to what extent the two variables contribute to innovative products. The significance of this study is that it reveals the strengths and weaknesses of university staff in contributing to innovation performance. Stratified random sampling was employed to determine the samples representing the population of lecturers in the study, involving 123 lecturers at one of the local universities in Malaysia. The data were analyzed by categorizing the open-ended responses into themes and by applying descriptive and inferential statistics to the quantitative data. This study revealed that two definitions of the term "innovation" exist among the university staff: the creation of a new product or a new approach to doing things, and the value-added, creative upgrading or improvement of an existing process or service to make it more efficient. The most prominent factor that propels staff towards innovation is improving the product in order to benefit users, followed by self-satisfaction and recognition. This implies that the staff viewed the creation of innovative products as a process of growth to fulfill the needs of others and also to realize their personal potential. The study also found a significant relationship between professional training level and length of service only for the group with 4-6 years of service; the other length-of-service groups showed no significant relationship with professional training level towards innovation. Moreover, the directional measures depicted that the relationship between 4-6 years of service and professional training level among the university staff is quite weak. This implies that good organizational management lies on the shoulders of the key leaders who enlighten the path to be followed by the staff.

Keywords: Innovation, length of service, performance, professional training level, motivation.

1138 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals

Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty

Abstract:

A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs, and the steep computational cost of evaluating these integrals poses a major numerical challenge in the efficient implementation of quantum chemical software. This work presents a moment-based machine learning approach for the efficient evaluation of electron-repulsion integrals. These integrals were approximated using linear combinations of a small number of moments. Machine learning algorithms were applied to estimate the coefficients in the linear combination. A random forest approach was used to identify promising features using recursive feature elimination, which performed best for learning the sign of each coefficient, but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, along with an iterative feature masking approach to perform input vector compression, identifying a small subset of orbitals whose coefficients are sufficient for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results when compared to a single network.

Keywords: Quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction.
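
A structural sketch of the pipeline described above, assuming synthetic stand-ins for the moment features and coefficient targets: random-forest-based recursive feature elimination followed by a small ensemble of two-hidden-layer networks fused with a median rule.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 30))                 # moment-based features (synthetic)
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=400)  # coefficient target

# recursive feature elimination with a random forest ranker
rfe = RFE(RandomForestRegressor(n_estimators=100, random_state=0),
          n_features_to_select=5)
X_sel = rfe.fit_transform(X, y)

# small ensemble of two-hidden-layer networks, fused with a median rule
nets = [MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=s).fit(X_sel, y) for s in range(5)]
pred = np.median([net.predict(X_sel) for net in nets], axis=0)
print(f"train RMSE: {np.sqrt(np.mean((pred - y) ** 2)):.3f}")
```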

1137 Evaluation of the Impact of Dataset Characteristics for Classification Problems in Biological Applications

Authors: Kanthida Kusonmano, Michael Netzer, Bernhard Pfeifer, Christian Baumgartner, Klaus R. Liedl, Armin Graber

Abstract:

The availability of high-dimensional biological datasets, such as those from gene expression, proteomic, and metabolic experiments, can be leveraged for the diagnosis and prognosis of diseases. Many classification methods in this area have been studied to predict disease states and separate between predefined classes, such as patients with a particular disease versus healthy controls. However, most of the existing research only focuses on a specific dataset, and there is a lack of generic comparison between classifiers that might provide a guideline for biologists or bioinformaticians to select the proper algorithm for new datasets. In this study, we compare the performance of popular classifiers, namely Support Vector Machine (SVM), Logistic Regression, k-Nearest Neighbor (k-NN), Naive Bayes, Decision Tree, and Random Forest, on mock datasets. We mimic common biological scenarios by simulating various proportions of real discriminating biomarkers and different effect sizes thereof. The results show that SVM performs quite stably and reaches a higher AUC than the other methods, which may be explained by the ability of SVM to minimize the probability of error. Moreover, Decision Tree, with its good applicability for diagnosis and prognosis, shows good performance in our experimental setup. Logistic Regression and Random Forest, however, strongly depend on the ratio of discriminators and perform better with a higher number of discriminators.

Keywords: Classification, high-dimensional data, machine learning.
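
A hedged sketch of the benchmarking setup: mock high-dimensional data with a small proportion of truly discriminating features, and the six classifiers compared by cross-validated AUC; the dimensions and effect size are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p, p_informative, effect = 100, 500, 10, 1.0
X = rng.normal(size=(n, p))                 # mock expression matrix
y = rng.integers(0, 2, n)                   # disease vs. healthy labels
X[y == 1, :p_informative] += effect         # shift the true biomarkers

models = {
    "SVM": SVC(), "LogReg": LogisticRegression(max_iter=5000),
    "kNN": KNeighborsClassifier(), "NB": GaussianNB(),
    "Tree": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
}
for name, m in models.items():
    auc = cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:7s} AUC = {auc:.3f}")
```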

1136 New Highly-Scalable Carbon Nanotube-Reinforced Glasses and Ceramics

Authors: Konstantinos G. Dassios, Guillaume Bonnefont, Gilbert Fantozzi, Theodore E. Matikas, Costas Galiotis

Abstract:

We report herein the development and preliminary mechanical characterization of fully dense multi-wall carbon nanotube (MWCNT)-reinforced ceramics and glasses based on a completely new methodology termed High Shear Compaction (HSC). The tubes are introduced and bound to the matrix grains with the aid of polymeric binders to form flexible green bodies, which are sintered and densified by spark plasma sintering to unprecedentedly high densities of 100% of the pure-matrix value. The strategy was validated on a Pyrex™ glass/MWCNT composite, while no identifiable factors limit its application to other types of matrices. Nondestructive evaluation, based on ultrasonics, of the dynamic mechanical properties of the materials, including the elastic, shear and bulk moduli as well as Poisson's ratio, showed optimum property improvement at 0.5 wt% tube loading, while evidence of nanoscale-specific energy-dissipative characteristics acting complementary to nanotube bridging and pull-out indicates a high potential in a wide range of reinforcing and multifunctional applications.

Keywords: Carbon nanotubes, ceramic matrix composites, toughening, ultrasonics.
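
For context, the dynamic elastic constants obtained from ultrasonic nondestructive evaluation of an isotropic solid follow from the measured longitudinal and shear wave speeds and the density; the numerical values below are illustrative, not the measured composite data.

```python
# Standard isotropic relations between ultrasonic wave speeds and the
# dynamic elastic constants; input values are illustrative assumptions.
def dynamic_moduli(rho, v_l, v_s):
    """rho in kg/m^3, longitudinal/shear wave velocities in m/s."""
    G = rho * v_s**2                                      # shear modulus
    nu = (v_l**2 - 2 * v_s**2) / (2 * (v_l**2 - v_s**2))  # Poisson's ratio
    E = 2 * G * (1 + nu)                                  # Young's modulus
    K = E / (3 * (1 - 2 * nu))                            # bulk modulus
    return E, G, K, nu

E, G, K, nu = dynamic_moduli(rho=2230.0, v_l=5640.0, v_s=3280.0)
print(f"E={E/1e9:.1f} GPa, G={G/1e9:.1f} GPa, K={K/1e9:.1f} GPa, nu={nu:.3f}")
```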
