Search results for: Multi class Classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3508

538 Modeling and Simulation of Ship Structures Using Finite Element Method

Authors: Javid Iqbal, Zhu Shifan

Abstract:

The development of unconventional ship constructions and the implementation of lightweight materials have given a strong impulse to the finite element (FE) method, making it a general tool for ship design. This paper briefly presents the modeling and analysis techniques of ship structures using the FE method for complex boundary conditions that are difficult to analyze using existing Ship Classification Society rules. During operation, all ships experience complex loading conditions. These loads are generally categorized into thermal, linear static, dynamic, and non-linear loads. The general strength of the ship structure is analyzed using static FE analysis. The FE method is also suitable for considering the local loads generated by ballast tanks and cargo in addition to hydrostatic and hydrodynamic loads. Vibration analysis of a ship structure and its components can be performed using the FE method, which helps in assessing the dynamic stability of the ship. The FE method has enabled better techniques for calculating the natural frequencies and mode shapes of a ship structure so as to avoid resonance both globally and locally. Over the past few years, the ship industry has made considerable progress towards ideal designs by employing the data stored in the FE model to solve complex engineering problems. This paper provides an overview of ship modeling methodology for FE analysis and its general application. The historical background, the basic concept of FE, and the advantages and disadvantages of FE analysis are also reported, along with examples related to hull strength and structural components.

Keywords: Dynamic analysis, finite element methods, ship structure, vibration analysis.

537 A Genetic Algorithm Based Permutation and Non-Permutation Scheduling Heuristics for Finite Capacity Material Requirement Planning Problem

Authors: Watchara Songserm, Teeradej Wuttipornpun

Abstract:

This paper presents a genetic algorithm based permutation and non-permutation scheduling heuristic (GAPNP) to solve a multi-stage finite capacity material requirement planning (FCMRP) problem in an automotive assembly flow shop with unrelated parallel machines. In the algorithm, the sequences of orders are iteratively improved by the GA, whereas the required operations are scheduled based on the presented permutation and non-permutation heuristics. Finally, linear programming is applied to minimize the total cost. The presented GAPNP algorithm is evaluated using real datasets from automotive companies. The required parameters for GAPNP are carefully tuned to obtain a common parameter setting for all case studies. The results show that GAPNP significantly outperforms the benchmark algorithm by about 30% on average.

Keywords: Finite capacity MRP, genetic algorithm, linear programming, flow shop, unrelated parallel machines, application in industries.

536 Applying Case-Based Reasoning in Supporting Strategy Decisions

Authors: S. M. Seyedhosseini, A. Makui, M. Ghadami

Abstract:

Globalization, and the increasingly tight competition among companies that comes with it, has raised the importance of making well-timed decisions. Strategies that are flexible and adaptive to a changing market stand a greater chance of being effective in the long term. At the same time, a clear focus on managing the entire product lifecycle has emerged as a critical area for investment. Applying well-organized tools that employ past experience in new cases therefore helps in making sound managerial decisions. Case-based reasoning (CBR) is a means of solving a new problem by using or adapting solutions to old problems. In this paper, an adapted CBR model with k-nearest neighbor (k-NN) is employed to provide suggestions for better decision making for a given product in its middle-of-life phase. The set of solutions is weighted by CBR following the principle of group decision making. A genetic algorithm wrapper approach is employed to generate optimal feature subsets. A department-store dataset covering various products collected over two years has been used. A k-fold approach is used to evaluate the classification accuracy rate. Empirical results are compared with a classical case-based reasoning algorithm that has no special process for feature selection, a CBR-PCA algorithm based on filter-approach feature selection, and an artificial neural network. The results indicate that the predictive performance of the proposed model is more effective than that of the two CBR algorithms in the specific case studied.
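
To make the retrieval step concrete, the sketch below shows a weighted k-nearest-neighbour lookup over a case base, with a binary feature mask standing in for a GA-selected feature subset. The case data, mask, and distance measure are hypothetical illustrations and not the paper's actual model.

```python
import numpy as np

def retrieve_similar_cases(case_base, query, feature_weights, k=3):
    """Return indices of the k most similar past cases (hypothetical CBR retrieval step).

    case_base: (n_cases, n_features) array of past product cases
    query: (n_features,) vector describing the new product situation
    feature_weights: per-feature weights, e.g. a GA-selected 0/1 mask
    """
    diffs = (case_base - query) * feature_weights   # weight each feature
    dists = np.sqrt((diffs ** 2).sum(axis=1))       # weighted Euclidean distance
    return np.argsort(dists)[:k]

# Toy example: 5 past cases with 4 features; the GA "kept" features 0, 2 and 3
cases = np.array([[0.2, 5.0, 1.0, 0.7],
                  [0.9, 3.0, 0.2, 0.1],
                  [0.3, 4.5, 0.9, 0.8],
                  [0.8, 2.0, 0.1, 0.2],
                  [0.5, 6.0, 0.5, 0.5]])
mask = np.array([1.0, 0.0, 1.0, 1.0])
new_case = np.array([0.25, 9.9, 0.95, 0.75])

print(retrieve_similar_cases(cases, new_case, mask, k=2))   # e.g. [0 2]
```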

Keywords: Case based reasoning, Genetic algorithm, Group decision making, Product management.

535 Interdisciplinary Principles of Field-Like Coordination in the Case of Self-Organized Social Systems

Authors: D. Plikynas, S. Masteika, A. Budrionis

Abstract:

This interdisciplinary research aims to distinguish universal scale-free and field-like fundamental principles of self-organization observable across many disciplines such as computer science, neuroscience, microbiology, and social science. Based on these universal principles we provide basic premises and postulates for designing holistic social simulation models. We also introduce the pervasive information field (PIF) concept, which serves as a simulation medium for contextual information storage, dynamic distribution, and organization in complex social networks. The PIF concept specifically targets field-like, uncoupled, and indirect interactions among social agents capable of affecting and perceiving broadcast contextual information. The proposed approach is expressive enough to represent broadcast contextual information in a form locally accessible and immediately usable by network agents. This paper gives a prospective vision of how a system's resources (tangible and intangible) could be simulated as oscillating processes immersed in the all-pervasive information field.

Keywords: field-based coordination, multi-agent systems, information-rich social networks, pervasive information field

534 Architectural Stratification and Woody Species Diversity of a Subtropical Forest Grown in a Limestone Habitat in Okinawa Island, Japan

Authors: S. M. Feroz, K. Yoshimura, A. Hagihara

Abstract:

The forest stand consisted of four layers. The species composition of the third and bottom layers was very similar, whereas the top layer was almost exclusive of the lower three layers. The values of Shannon's index H' and Pielou's index J' tended to increase from the bottom layer upward, except for the H' value of the top layer. The values of H' and J' were 4.21 bits and 0.73, respectively, for the total stand. The high woody species diversity of the forest depended on large trees in the upper layers, a trend different from that of a subtropical evergreen broadleaf forest grown in a silicate habitat in the northern part of Okinawa Island. The spatial distributions of trees in the third and bottom layers overlapped, whereas the top layer was independent of, or slightly exclusive of, the lower three layers. The mean tree weight of each layer decreased from the top toward the bottom layer, whereas the corresponding tree density increased from the top downward. This relationship is analogous to the self-thinning process in plant populations.

Keywords: Canopy multi-layering, limestone habitat, mean tree weight-density relationship, species diversity, subtropical forest.

533 A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics

Authors: Hui Zhang, Ye Tian, Fang Ye, Ziming Guo

Abstract:

Communication signal modulation recognition is one of the key technologies in the field of modern information warfare. At present, automatic modulation recognition methods for communication signals fall into two major categories: maximum likelihood hypothesis testing methods based on decision theory, and statistical pattern recognition methods based on feature extraction. The statistical pattern recognition method, which includes feature extraction and classifier design, is now the most commonly used. With the increasingly complex electromagnetic environment of communications, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for researchers in many countries. To solve this problem, this paper proposes a feature extraction algorithm for communication signals based on an improved Holder cloud feature, and an extreme learning machine (ELM) is used to classify the extracted features in order to meet the real-time requirements of modern warfare. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low SNR environment and uses the improved cloud model to obtain more stable Holder cloud features, which improves the performance of the algorithm. The algorithm addresses the difficulty that a simple feature extraction algorithm based on the Holder coefficient has in recognition at low SNR, and it also achieves better recognition accuracy. The simulation results show that the approach still gives a good classification result at low SNR; even when the SNR is -15 dB, the recognition accuracy still reaches 76%.
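
As a rough illustration of the classification stage, the following sketch implements a basic extreme learning machine: random hidden-layer weights, a single nonlinear projection, and output weights solved by least squares. The Holder-cloud feature extraction itself is not reproduced; the feature vectors below are random stand-ins, and the network size is an arbitrary assumption.

```python
import numpy as np

class SimpleELM:
    """Minimal extreme learning machine classifier (a sketch, not the paper's exact model)."""

    def __init__(self, n_hidden=64, rng=None):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(0) if rng is None else rng

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                      # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)              # random nonlinear projection
        self.beta = np.linalg.pinv(H) @ T             # output weights by pseudoinverse
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

# Toy usage with stand-in feature vectors (3 features, 2 modulation classes)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 3)), rng.normal(2.5, 1.0, (100, 3))])
y = np.array([0] * 100 + [1] * 100)
print((SimpleELM().fit(X, y).predict(X) == y).mean())   # training accuracy, close to 1.0
```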

Keywords: Communication signal, feature extraction, holder coefficient, improved cloud model.

532 Tree Based Data Aggregation to Resolve Funneling Effect in Wireless Sensor Network

Authors: G. Rajesh, B. Vinayaga Sundaram, C. Aarthi

Abstract:

In a wireless sensor network, sensor nodes periodically transmit the sensed data to the sink node over multi-hop communication. This heavy traffic induces congestion at the nodes located one hop from the sink node. The packet transmission and reception rates of these nodes are very high compared to those of the other sensor nodes in the network, so their energy consumption is very high; this effect is known as the “funneling effect”. The tree-based data aggregation technique (TBDA) is used to reduce the energy consumption of these nodes. The overall performance shows a considerable decrease in the number of packet transmissions to the sink node. The proposed scheme, TBDA, avoids the funneling effect and extends the lifetime of the wireless sensor network. The average-case time complexity for inserting a node in the tree is O(n log n), and the worst-case time complexity is O(n²).
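
To make the idea concrete, here is a small hypothetical sketch of tree-based aggregation: child readings are combined at each parent, so the node one hop from the sink forwards a single aggregated packet instead of relaying every raw reading. The tree layout, readings, and the sum aggregate are illustrative only and are not taken from the paper.

```python
# Hypothetical aggregation tree rooted at the sink; values are raw sensor readings.
tree = {"sink": ["n1"], "n1": ["n2", "n3"], "n2": ["n4", "n5"], "n3": [], "n4": [], "n5": []}
readings = {"n1": 3, "n2": 5, "n3": 2, "n4": 7, "n5": 1}

def aggregate(node):
    """Recursively combine child readings so each node sends one packet upward."""
    total = readings.get(node, 0)
    for child in tree.get(node, []):
        total += aggregate(child)
    return total

# Without aggregation, n1 would relay five separate readings to the sink;
# with TBDA-style aggregation it forwards a single combined value.
print(aggregate("sink"))   # 18
```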

Keywords: Data Aggregation, Funneling Effect, Traffic Congestion, Wireless Sensor Network.

531 Data Mining Applied to the Predictive Model of Triage System in Emergency Department

Authors: Wen-Tsann Lin, Yung-Tsan Jou, Yih-Chuan Wu, Yuan-Du Hsiao

Abstract:

The Emergency Department of a medical center in Taiwan cooperated in this research. A predictive model of the triage system is constructed, covering the procedure, the selection of parameters, and sample screening. A total of 2,000 patient records were chosen randomly by computer. After applying three data mining classification techniques (multi-group discriminant analysis, multinomial logistic regression, and back-propagation neural networks), it is found that back-propagation neural networks best distinguish the patients' degree of emergency, with an accuracy rate as high as 95.1%. The back-propagation neural network with the highest accuracy rate is then built into the triage acuity expert system. The predictive model of the triage acuity expert system can be updated regularly, both to improve the system and for education and training, and it is not affected by subjective factors.

Keywords: Back-propagation Neural Networks, Data Mining, Emergency Department, Triage System.

530 Knowledge Flows and Innovative Performances of NTBFs in Gauteng, South Africa: An Attempt to Explain Mixed Findings in Science Park Research

Authors: Kai-Ying A. Chan, Leon A.G. Oerlemans, Marthinus W. Pretorius

Abstract:

Science parks are often established to drive regional economic growth, especially in countries with emerging economies. However, mixed findings regarding the performance of science park firms are reported in the literature. This study tries to explain these mixed findings by taking a relational approach and exploring (un)intended knowledge transfers between new technology-based firms (NTBFs) in the emerging South African economy. Moreover, the innovation outcomes of these NTBFs are examined using a multi-dimensional construct. Results show that science park location plays a significant role in explaining innovative sales, but is insignificant when a different indicator of innovation outcomes is used. Furthermore, only for innovations that are new to the firm do both science park location and intended knowledge transfer via informal business relationships have a positive impact, whereas social relationships have a negative impact.

Keywords: knowledge flows, innovative performances, science parks, new technology-based firms

529 A Two-Phase Mechanism for Agent's Action Selection in Soccer Simulation

Authors: Vahid Salmani, Mahmoud Naghibzadeh, Farid Seifi, Amirhossein Taherinia

Abstract:

Soccer simulation is an effort to motivate researchers and practitioners to do artificial intelligence and robotics research and, at the same time, to put the results into practice and test them. Many researchers and practitioners throughout the world are continuously working to polish their ideas and improve their implemented systems, and new groups keep forming and bringing fresh thoughts to the field. The research involves designing and executing robotic soccer simulation algorithms. In our work, a soccer simulation player is considered an intelligent agent that is capable of receiving information from the environment, analyzing it, and choosing the best action from a set of possible ones for its next move. We concentrate on developing a two-phase method for the soccer player agent to choose its best next move. The method is then implemented in our software system, the Nexus simulation team of Ferdowsi University. This system is based on the TsinghuAeolus [1] team, which was the champion of the world RoboCup soccer simulation contest in 2001 and 2002.

Keywords: RoboCup, Soccer simulation, multi-agent environment, intelligent soccer agent, ball controller agent.

528 Convex Restrictions for Outage Constrained MU-MISO Downlink under Imperfect Channel State Information

Authors: A. Preetha Priyadharshini, S. B. M. Priya

Abstract:

In this paper, we consider the MU-MISO downlink scenario under imperfect channel state information (CSI). The main issue under imperfect CSI is keeping the outage probability of each user's achievable rate below a given threshold level. Such rate outage constraints present significant analytical challenges. Many probabilistic methods have been used to solve the transmit optimization problem under imperfect CSI. Here, decomposition-based large deviation inequality and Bernstein-type inequality convex restriction methods are used to solve the optimization problem under imperfect CSI. These methods achieve improved output quality and lower complexity, and they provide a safe tractable approximation of the original rate outage constraints. Based on these implementations, performance has been evaluated in terms of feasible rate and average transmission power. The simulation results show that both methods offer significantly improved outage quality and lower computational complexity.

Keywords: Imperfect channel state information, outage probability, multiuser- multi input single output.

527 Optimal Planning of Waste-to-Energy through Mixed Integer Linear Programming

Authors: S. T. Tan, H. Hashim, W. S. Ho, C. T. Lee

Abstract:

Rapid economic development and population growth in Malaysia have accelerated the generation of solid waste. This puts pressure on Malaysia for effective management of municipal solid waste (MSW), given the increasing cost of landfill. This paper discusses the optimal planning of waste-to-energy (WTE) using a combinatorial simulation and optimization model based on a mixed integer linear programming (MILP) approach. The proposed multi-period model is tested in Iskandar Malaysia (IM) as a case study for a period of 12 years (2011-2025) to illustrate the economic potential and tradeoffs involved. Three scenarios are used to demonstrate the applicability of the model: (1) an incineration scenario, (2) a landfill scenario, and (3) an optimal scenario. The model reveals that the minimum cost of electricity generation from 9,995,855 tonnes of MSW is estimated at USD 387 million, with a total electricity generation of 50 MW/yr in the optimal scenario.
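
The abstract does not give the model formulation, but a toy mixed integer linear program of the same flavour (choosing how much MSW to send to incineration-with-energy-recovery versus landfill in each period, minimising total cost) can be sketched with PuLP. All coefficients and the two-period horizon are made-up illustrative numbers, not values from the study.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

periods = [1, 2]
waste = {1: 1000.0, 2: 1200.0}           # tonnes of MSW per period (hypothetical)
cost = {"wte": 40.0, "landfill": 55.0}   # treatment cost per tonne (hypothetical)
wte_capex = 15000.0                      # one-off cost of building the WTE plant
wte_capacity = 900.0                     # tonnes per period once built

prob = LpProblem("toy_waste_to_energy_plan", LpMinimize)
build = LpVariable("build_wte", cat=LpBinary)
x_wte = {t: LpVariable(f"wte_{t}", lowBound=0) for t in periods}
x_lf = {t: LpVariable(f"landfill_{t}", lowBound=0) for t in periods}

# Objective: capital cost plus per-tonne treatment costs over all periods
prob += wte_capex * build + lpSum(cost["wte"] * x_wte[t] + cost["landfill"] * x_lf[t]
                                  for t in periods)

for t in periods:
    prob += x_wte[t] + x_lf[t] == waste[t]      # all waste must be treated
    prob += x_wte[t] <= wte_capacity * build    # WTE usable only if the plant is built

prob.solve()
print(value(prob.objective), {t: value(x_wte[t]) for t in periods})
```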

Keywords: Mixed Integer Linear Programming (MILP), optimization, solid waste management (SWM), Waste-to-energy (WTE).

526 Forest Risk and Vulnerability Assessment: A Case Study from East Bokaro Coal Mining Area in India

Authors: Sujata Upgupta, Prasoon Kumar Singh

Abstract:

The expansion of large-scale coal mining into forest areas is a potential hazard for local biodiversity and wildlife. The objective of this study is to provide a picture of the threat that coal mining poses to the forests of the East Bokaro landscape. The vulnerable forest areas at risk have been assessed and the priority areas for conservation are presented. The forested areas at risk in the current scenario have been assessed and compared with past conditions using a classification and buffer-based overlay approach. Forest vulnerability has been assessed using an analytical framework based on systematic indicators and composite vulnerability index values. The results indicate that more than 4 km2 of forest has been lost from 1973 to 2016. Large patches of forest have been diverted to coal mining projects. Forests in the northern part of the coalfield, within a 1-3 km radius of the coal mines, are at immediate risk. The originally contiguous forests have been converted into fragmented and degraded forest patches. Most of the collieries are located within or very close to the forests, thus threatening the biodiversity and hydrology of the surrounding regions. Based on the vulnerability values estimated, it is concluded that more than 90% of the forested grids in East Bokaro are highly vulnerable to mining. The forests in the sub-districts of Bermo and Chandrapura have been identified as the most vulnerable to coal mining activities. This case study would add to the capacity of forest managers and mine managers to address the risk and vulnerability of forests at a small landscape level in order to achieve sustainable development.

Keywords: Coal mining, forest, indicators, vulnerability.

525 Globalisation, ICTs and National Identity: The Consequences of ICT Policy in Malaysia

Authors: Abd Rasid Abd Rahman

Abstract:

For the past thirty years the Malaysian economy has been said to contribute well to the progress of the nation. However, the intensification of global economic activity and the extensive use of Information and Communication Technologies (ICTs) in recent years are challenging the government's effort to further develop Malaysian society. The competition posed by low-wage economies such as China and Vietnam has made the government realise the importance of engaging in high-skill and high-technology industries. It is hoped this will be the basis for attracting more foreign direct investment (FDI) in order to help the country compete in a globalised world. Using Vision 2020 as its targeted vision, the government has decided to engage with ICTs and has introduced many policies pertaining to them. Based mainly on a secondary analysis approach, the findings show that ICT policy in Malaysia contributes to economic growth, but its consequences have resulted in greater division within society. Although some of the divisions, such as gender and ethnicity, are narrowing, the gap in important areas such as regions and class differences is becoming wider. The widespread use of ICTs might contribute to the further establishment of democracy in Malaysia, but the increasing number of foreign entities such as FDI and foreign workers, cultural hybridisation and, to some extent, cultural domination are contributing to neocolonialism in Malaysia. This has obvious consequences for the government's effort to create a Malaysian national identity. An important finding of this work is that there are contradictions within ICT policy between the effort to develop the economy and the effort to develop society.

Keywords: Globalisation, ICTs, ICT Policy, Malaysia, National Identity, Vision 2020

524 Effect of the Internet on Social Capital

Authors: Safaee Safiollah, Javadi Alimohammad, Javadi Maryam

Abstract:

Internet access is a vital part of the modern world and an important tool in the education of our children. It is present in schools, homes and even shopping malls. Mastering the use of the Internet is likely to be an important skill for those entering the job markets of the future. An Internet user can be anyone he or she wants to be in an online chat room, or can play thrilling and challenging games against other players from all corners of the globe. It seems that at present (or in the near future), for many people, relationships in the real world may be neglected as those in the virtual world increase in importance. The Internet has provided a fast mode of communication, bringing freedom from family bonds and mixing with different cultures and new communities. This research is an attempt to study the effect of the Internet on social capital. For this purpose, a survey of a sample of 168 students of Payame Noor University in Kermanshah, Iran, was conducted. The degree of social capital was found to be moderate. Multi-variable regression shows that the variables of attractiveness of Iranian content and interest in the Internet have significant positive effects, while the variable of creating a cordial atmosphere has a significant negative effect.

Keywords: Internet, Social Capital, social participation, social trust

523 The Effects of TiO2 Nanoparticles on Tumor Cell Colonies: Fractal Dimension and Morphological Properties

Authors: T. Sungkaworn, W. Triampo, P. Nalakarn, D. Triampo, I. M. Tang, Y. Lenbury, P. Picha

Abstract:

Semiconductor nanomaterials such as TiO2 nanoparticles (TiO2-NPs), roughly less than 100 nm in diameter, have become a new generation of advanced materials due to their novel and interesting optical, dielectric, and photo-catalytic properties. Despite the increasing use of NPs in commerce, to date few studies have investigated their toxicological and environmental effects. Motivated by the potential contribution of TiO2-NPs to the cancer research field, especially from the treatment perspective, together with the fractal analysis technique, we have investigated the effect of TiO2-NPs on colony morphology in the dark condition using fractal dimension as a key morphological characterization parameter. The aim of this work is mainly to investigate the cytotoxic effects of TiO2-NPs in the dark on the growth of human cervical carcinoma (HeLa) cell colonies from the morphological aspect. The in vitro studies were carried out together with image processing and fractal analysis. It was found that these colonies were abnormal in shape and size. Moreover, the control colonies appeared to be larger than those of the treated group. The mean Df ± SEM of the colonies in untreated cultures was 1.085 ± 0.019 (N = 25), while that of the cultures treated with TiO2-NPs was 1.287 ± 0.045. The circularity of the control group (0.401 ± 0.071) was higher than that of the treated group (0.103 ± 0.042). The same tendency was found for the diameter, which was 1161.30 ± 219.56 μm and 852.28 ± 206.50 μm for the control and treated groups, respectively. Possible explanations of the results are discussed, though more work needs to be done on the mechanistic aspects. Finally, our results indicate that fractal dimension can serve as a useful feature, by itself or in conjunction with other shape features, in the classification of cancer colonies.
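
The abstract does not state how Df was estimated; box counting is one common estimator for binary colony images and is sketched below on a synthetic mask. The fractal dimension is taken as the slope of log(box count) against log(1/box size); the synthetic image and the box sizes are arbitrary choices for illustration.

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image by box counting (sketch)."""
    counts = []
    for s in box_sizes:
        n = 0
        for i in range(0, mask.shape[0], s):
            for j in range(0, mask.shape[1], s):
                if mask[i:i + s, j:j + s].any():   # box contains part of the colony
                    n += 1
        counts.append(n)
    # Slope of log(count) vs log(1/size) gives Df
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Synthetic "colony" outline: a thin circular ring on a 128x128 grid
y, x = np.ogrid[:128, :128]
r = np.sqrt((x - 64) ** 2 + (y - 64) ** 2)
mask = np.abs(r - 40) < 1.5

print(round(box_counting_dimension(mask), 2))   # close to 1.0 for a smooth curve
```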

Keywords: Tumor growth, Cell colonies, TiO2, Nanoparticles, Fractal, Morphology, Aggregation.

522 Predicting Global Solar Radiation Using Recurrent Neural Networks and Climatological Parameters

Authors: Rami El-Hajj Mohamad, Mahmoud Skafi, Ali Massoud Haidar

Abstract:

Several meteorological parameters were used for the prediction of monthly average daily global solar radiation on a horizontal surface using recurrent neural networks (RNNs). Climatological data and measurements, mainly air temperature, humidity, sunshine duration, and wind speed between 1995 and 2007, were used to design and validate feed-forward and recurrent neural network based prediction systems. In this paper we present our reference system based on a feed-forward multilayer perceptron (MLP) as well as the proposed approach based on an RNN model. The obtained results were promising and comparable to those obtained by other existing empirical and neural models. The experimental results showed the advantage of RNNs over simple MLPs when dealing with time series solar radiation predictions based on daily climatological data.

Keywords: Recurrent Neural Networks, Global Solar Radiation, Multi-layer perceptron, gradient, Root Mean Square Error.

521 Designing a Football Team of Robots from Beginning to End

Authors: Maziar A. Sharbafi, Caro Lucas, Aida Mohammadinejad, Mostafa Yaghobi

Abstract:

The combination of path planning and path following is the main purpose of this paper. The paper describes a practical approach developed for motion control of the MRL small-size robots. An intelligent controller is applied to control omni-directional robot motion in simulation and in the real environment. The Brain Emotional Learning Based Intelligent Controller (BELBIC), built on LQR control, is adopted for the omni-directional robots. The contribution of BELBIC to improving control system performance is shown as an application of emotional learning to a real-world problem. The method also allows the control effort to be optimized. Next, an implicit communication method is used to determine the high-level strategies and coordination of the robots. The robots' decision-making system is built from simple rules together with the use of the environment as a memory to improve coordination between agents. With this simple algorithm, our team exhibits the desired cooperation.

Keywords: multi-agent systems (MAS), Emotional learning, MIMO system, BELBIC, LQR, Communication via environment

520 Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach

Authors: Rajvir Kaur, Jeewani Anupama Ginige

Abstract:

With recent trends in Big Data and advancements in Information and Communication Technologies, the healthcare industry is transitioning from being clinician oriented to technology oriented. Many people around the world die of cancer because the disease was not diagnosed at an early stage. Nowadays, computational methods in the form of Machine Learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper carries out a comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree) and Artificial Neural Networks (ANN). The evaluation is carried out based on the standard evaluation metrics Precision (P), Recall (R), F1-score and Accuracy. The experimental results show that ANN achieved the highest accuracy (99.4%) when tested with the breast cancer dataset. On the other hand, when the ML classifiers are tested with the cervical cancer dataset, the Ensemble (Bagged Tree) technique gives better accuracy (93.1%) than the other classifiers.
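
A compact sketch of this kind of comparison, using scikit-learn and its bundled Wisconsin breast cancer data as a stand-in for the datasets used in the paper (which are not reproduced here), reporting accuracy, precision, recall, and F1 for classifiers of the same families.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "LogReg": LogisticRegression(max_iter=5000),
    "Bagged Tree": BaggingClassifier(random_state=0),
    "ANN": MLPClassifier(max_iter=2000, random_state=0),
}

for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)   # train and score on the held-out split
    p, r, f1, _ = precision_recall_fscore_support(y_te, y_pred, average="binary")
    print(f"{name:12s} acc={accuracy_score(y_te, y_pred):.3f} P={p:.3f} R={r:.3f} F1={f1:.3f}")
```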

Keywords: Artificial neural networks, breast cancer, cancer dataset, classifiers, cervical cancer, F-score, logistic regression, machine learning, precision, recall, support vector machine.

519 New Echocardiographic Morphofunctional Diastolic Index (MFDI) in Differentiation of Normal Left Ventricular Filling from Pseudonormal and Restrictive

Authors: N. Nelasov, D. Safonov, M. Babaev, E. Mirzojan, O. Eroshenko, M. Morgunov, A. Erofeeva

Abstract:

We have shown previously that reflected high intensity motion signals (RIMS) can be used for the detection of left ventricular (LV) diastolic dysfunction (DD). It is also well known that left atrial (LA) dimension can be used as a marker of DD. In this study we analyzed the diagnostic role of a new echocardiographic morphofunctional diastolic index (MFDI) in differentiating normal LV filling from pseudonormal and restrictive filling. MFDI combines LA dimension and the velocity of the early diastolic component ea of RIMS (MFDI = LA/ea).

A total of 343 healthy subjects and patients with various cardiac pathologies underwent Doppler echocardiographic examination. According to the criteria of the "Don" classification scheme, 155 subjects had signs of normal LV filling (N) and 55 had signs of pseudonormal or restrictive filling (PN + R). LA dimension was measured in the standard manner. RIMS were registered by conventional pulsed wave Doppler from the apical 4-chamber view, with the sample volume positioned between the tips of the mitral leaflets. The velocity of the early diastolic component of RIMS was measured. After calculating MFDI, the mean values of this index in the two groups (N and PN + R) were compared, and the cutoff value of MFDI for differentiating N from PN + R was determined.

The mean value of MFDI in subjects with normal filling was 1.38 ± 0.33, and in patients with pseudonormal or restrictive filling it was 2.43 ± 0.43 (p < 0.0001). A cutoff value of MFDI > 2.0 separated subjects with normal LV filling from subjects with pseudonormal or restrictive filling with a sensitivity of 89.1% and a specificity of 97.4%.
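
For illustration, the index as defined in the abstract reduces to a single division and a threshold test. The sketch below applies the reported cutoff to made-up measurements; the units are assumptions, since the abstract does not state them.

```python
def mfdi(la_dimension_cm, ea_velocity_cm_s):
    """Morphofunctional diastolic index: LA dimension divided by early diastolic RIMS velocity."""
    return la_dimension_cm / ea_velocity_cm_s

def filling_pattern(index, cutoff=2.0):
    """Apply the reported cutoff: > 2.0 suggests pseudonormal/restrictive filling."""
    return "pseudonormal/restrictive" if index > cutoff else "normal"

# Hypothetical measurements (units assumed for illustration only)
print(filling_pattern(mfdi(3.6, 2.8)))   # index ~1.29 -> normal
print(filling_pattern(mfdi(4.8, 1.9)))   # index ~2.53 -> pseudonormal/restrictive
```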

Keywords: Dopplerechocardiography, diastolic dysfunction, left atrium, reflected high intensity motion signals.

518 The Safety of WiMAX in Solid Propellant Rocket Production

Authors: Jiradett K., Ornin S.

Abstract:

With the advances in wireless networking, IEEE 802.16 WiMAX technology has been widely deployed for several applications such as “last mile” broadband service, cellular backhaul, and high-speed enterprise connectivity. As a result, the military has for many years employed WiMAX as a high-speed wireless data-link because of its point-to-multipoint and non-line-of-sight (NLOS) capability. However, the risk of using WiMAX is a critical factor in some sensitive areas of military application, especially in ammunition manufacturing such as solid propellant rocket production. US DoD policy states that the following certification requirements must be met for WiMAX: electromagnetic effects on the environment (E3) and Hazards of Electromagnetic Radiation to Ordnance (HERO). This paper discusses the recommended power densities and Safe Separation Distance (SSD) for HERO for WiMAX systems deployed in solid propellant rocket production. This research found that WiMAX is safe to operate in close proximity to rocket production, based on the AF Guidance Memorandum immediately changing AFMAN 91-201.

Keywords: WiMAX, ammunition, explosive, munition, solid propellant, safety, rocket, missile

517 An Energy-Efficient Distributed Unequal Clustering Protocol for Wireless Sensor Networks

Authors: Sungju Lee, Jangsoo Lee, Hongjoong Sin, Seunghwan Yoo, Sanghyuck Lee, Jaesik Lee, Yongjun Lee, Sungchun Kim

Abstract:

Wireless sensor networks have been extensively deployed and researched. One of the major issues in wireless sensor networks is developing an energy-efficient clustering protocol, since clustering provides an effective way to prolong the lifetime of a wireless sensor network. In this paper, we compare several clustering protocols, which significantly affect the balancing of energy consumption, and we propose an Energy-Efficient Distributed Unequal Clustering (EEDUC) algorithm which provides a new way of creating distributed clusters. In EEDUC, each sensor node sets a waiting time, considered as a function of its residual energy and the number of neighboring nodes, and EEDUC uses this waiting time to distribute cluster heads. We also propose an unequal clustering mechanism to solve the hot-spot problem. Simulation results show that EEDUC distributes the cluster heads, balances the energy consumption well among the cluster heads, and increases the network lifetime.
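
The exact waiting-time function is not given in the abstract; the sketch below only illustrates the stated idea that a node's waiting time depends on its residual energy and neighbour count, so better-placed nodes announce themselves as cluster heads first. The formula and constants are hypothetical.

```python
def waiting_time(residual_energy, initial_energy, n_neighbors, t_max=1.0):
    """Hypothetical EEDUC-style waiting time: more residual energy and more neighbours
    mean a shorter wait, so such nodes declare themselves cluster heads earlier."""
    energy_term = 1.0 - residual_energy / initial_energy
    neighbor_term = 1.0 / (1.0 + n_neighbors)
    return t_max * 0.5 * (energy_term + neighbor_term)

# Node A: nearly full battery, many neighbours -> very short wait (likely cluster head)
print(round(waiting_time(0.95, 1.0, 9), 3))   # 0.075
# Node B: depleted battery, few neighbours -> long wait (defers to others)
print(round(waiting_time(0.30, 1.0, 2), 3))   # 0.517
```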

Keywords: Wireless Sensor Network, Distributed Unequal Clustering, Multi-hop, Lifetime.

516 DAMQ-Based Approach for Efficiently Using the Buffer Spaces of a NoC Router

Authors: Mohammad Ali Jabraeil Jamali, Ahmad khademzadeh

Abstract:

In this paper we present high-performance dynamically allocated multi-queue (DAMQ) buffer schemes for fault-tolerant system-on-chip applications that require an interconnection network. Two virtual channels share the same buffer space. Fault-tolerant mechanisms for interconnection networks are becoming a critical design issue for large massively parallel computers; they are also important for high-performance SoCs, as system complexity keeps increasing rapidly. At the message switching layer, we make improvements to boost system performance when faults are involved in component communication. In the proposed scheme, when a node or a physical channel is deemed faulty, the previous-hop node terminates the buffer occupancy of messages destined for the failed link. Buffer usage decisions are made at the switching layer without interaction with higher abstraction layers, so buffer space is quickly released to messages destined for other healthy nodes. The buffer space is therefore used efficiently when faults occur at some nodes.

Keywords: DAMQ, NoC, fault tolerant, odd-even routing algorithm, buffer space.

515 A New Source Code Auditing Algorithm for Detecting LFI and RFI in PHP Programs

Authors: Seyed Ali Mir Heydari, Mohsen Sayadiharikandeh

Abstract:

Static analysis of source code is used for auditing web applications to detect vulnerabilities. In this paper, we propose a new algorithm that analyzes PHP source code to detect potential LFI and RFI vulnerabilities. In our approach, we first define patterns for finding functions that could be abused because of unhandled user input. More precisely, we use regular expressions as a fast and simple method for defining vulnerability detection patterns. Since inclusion functions can also be used in a safe way, many false positives (FP) can occur. The first cause of these FPs is that the function may not use a user-supplied variable as an argument, so we extract a list of user-supplied variables to be used for detecting vulnerable lines of code. Furthermore, as a vulnerability can spread among variables, for example through multi-level assignment, we also try to extract hidden user-supplied variables. We use the resulting list to decrease the false positives of our method. Finally, as there are ways to prevent inclusion-function vulnerabilities, we also define patterns to detect such protections and further decrease our false positives.
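
A much-simplified sketch of the first step described above: flag include-style calls whose argument involves a variable assigned from a user-controlled superglobal, using regular expressions. The patterns and the embedded PHP snippet are illustrative only; a real auditor would need far richer patterns and handling of multi-level assignment.

```python
import re

PHP_SOURCE = r'''<?php
$page = $_GET['page'];
include($page . ".php");             // potential LFI: user input reaches include
require_once("config/settings.php"); // constant argument, not flagged
'''

# Variables assigned directly from user-controlled superglobals
USER_INPUT = re.compile(r'\$(\w+)\s*=\s*\$_(GET|POST|REQUEST|COOKIE)\b')
# include/require calls and the raw argument text
INCLUDE_CALL = re.compile(r'\b(include|include_once|require|require_once)\s*\(?\s*([^;]+)')

def find_lfi_rfi_candidates(source):
    tainted = {m.group(1) for m in USER_INPUT.finditer(source)}
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        m = INCLUDE_CALL.search(line)
        if m and any(f"${var}" in m.group(2) for var in tainted):
            findings.append((lineno, line.strip()))
    return findings

for lineno, line in find_lfi_rfi_candidates(PHP_SOURCE):
    print(f"line {lineno}: {line}")
```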

Keywords: User-supplied Variables, hidden user-supplied variables, PHP vulnerabilities.

514 Investigation of a Hybrid Process: Multipoint Incremental Forming

Authors: Safa Boudhaouia, Mohamed Amen Gahbiche, Eliane Giraud, Wacef Ben Salem, Philippe Dal Santo

Abstract:

Multi-point forming (MPF) and asymmetric incremental forming (ISF) are two flexible processes for sheet metal manufacturing. To take advantage of these two techniques, a hybrid process has been developed: multipoint incremental forming (MPIF). This process combines the advantages of both forming techniques, which makes it a very interesting and particularly efficient process for single-item, small, and medium series production. In this paper, an experimental and a numerical investigation of this technique are presented. To highlight the flexibility of this process and its capacity to manufacture standard and complex shapes, several pieces were produced using MPIF. The forming experiments are performed on a 3-axis CNC machine. Moreover, a numerical model of the MPIF process has been implemented in ABAQUS, and the analysis showed good agreement with experimental results in terms of deformed shape. Furthermore, the use of an elastomeric interpolator avoids classical local defects such as dimples, which are generally caused by the asymmetric contact, and also improves the distribution of residual strain. Future work will apply this approach to other alloys used in aeronautic or automotive applications.

Keywords: Incremental forming, numerical simulation, MPIF, multipoint forming.

513 A Specification-Based Approach for Retrieval of Reusable Business Component for Software Reuse

Authors: Meng Fanchao, Zhan Dechen, Xu Xiaofei

Abstract:

Software reuse can be considered the most realistic and promising way to improve software engineering productivity and quality. Automated assistance for software reuse involves the representation, classification, retrieval, and adaptation of components. The representation and retrieval of components are important to software reuse in Component-Based Software Development (CBSD). However, current industrial component models mainly focus on implementation techniques and ignore semantic information about components, so it is difficult to retrieve components that satisfy users' requirements. This paper presents a method of business component retrieval based on specification matching to support software reuse in enterprise information systems. First, a reuse-oriented business component model is proposed. In our model, the business data type is represented as a sign data type based on XML, which can express the variable business data types that describe the variety of business operations. Based on this model, we propose specification match relationships at two levels: the business operation level and the business component level. At the business operation level, we use input business data types, output business data types, and the taxonomy of business operations to evaluate the similarity between business operations. At the business component level, we propose five specification matches between business components. To retrieve reusable business components, we propose a measure of similarity degree to calculate the similarities between business components. Finally, an SQL-like business component retrieval command is proposed to help users retrieve approximate business components from the component repository.

Keywords: Business component, business operation, business data type, specification matching.

512 Object-Centric Process Mining Using Process Cubes

Authors: Anahita Farhang Ghahfarokhi, Alessandro Berti, Wil M.P. van der Aalst

Abstract:

Process mining provides ways to analyze business processes. Common process mining techniques consider the process as a whole. However, in real-life business processes, different behaviors exist that make the overall process too complex to interpret. Process comparison is a branch of process mining that isolates different behaviors of the process from each other by using process cubes. Process cubes organize event data using different dimensions. Each cell contains a set of events that can be used as an input to process mining techniques. Existing work on process cubes assumes a single case notion. However, in real processes, several case notions (e.g., order, item, package) are intertwined. Object-centric process mining is a new branch of process mining addressing multiple case notions in a process. To build a bridge between object-centric process mining and process comparison, we propose a process cube framework which supports process cube operations such as slice and dice on object-centric event logs. To facilitate the comparison, the framework is integrated with several object-centric process discovery approaches.
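
As a rough illustration of the slice operation on an event table with several case notions, the pandas sketch below filters one dimension (month) and then groups the remaining events by a chosen case notion (order versus item). The event data and column names are invented, and the real framework operates on object-centric event logs rather than a flat table.

```python
import pandas as pd

# Invented object-centric-style event table: each event may relate to an order and an item
events = pd.DataFrame({
    "event": ["create order", "pick item", "pick item", "pack", "create order", "pick item"],
    "order": ["o1", "o1", "o1", "o1", "o2", "o2"],
    "item":  [None, "i1", "i2", None, None, "i3"],
    "month": ["2021-01", "2021-01", "2021-02", "2021-02", "2021-02", "2021-02"],
})

def slice_cube(df, dimension, value):
    """Slice: keep only the part of the cube where `dimension` equals `value`."""
    return df[df[dimension] == value]

def events_per_case(df, case_notion):
    """Group the sliced cell by a chosen case notion to feed a discovery algorithm."""
    return df.dropna(subset=[case_notion]).groupby(case_notion)["event"].apply(list)

cell = slice_cube(events, "month", "2021-02")
print(events_per_case(cell, "order"))   # traces when 'order' is the case notion
print(events_per_case(cell, "item"))    # traces when 'item' is the case notion
```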

Keywords: Process mining, multidimensional process mining, multi-perspective business processes, OLAP, process cubes, process discovery.

511 Web-Based Cognitive Writing Instruction (WeCWI): A Hybrid e-Framework for Instructional Design

Authors: Boon Yih Mah

Abstract:

Web-based Cognitive Writing Instruction (WeCWI) is a hybrid e-framework for the development of web-based instruction (WBI), which contributes towards instructional design and language development. WeCWI divides its contribution to instructional design into macro and micro perspectives. In the macro perspective, the framework initiates the role of the 21st century educator, who disseminates knowledge and shares ideas with in-class and global learners. By leveraging the virtue of technology, WeCWI aims to transform an educator into an aggregator, curator, publisher, social networker and, ultimately, a web-based instructor. Since the most notable contribution of integrating technology is its use as a tool of teaching as well as a stimulus for learning, WeCWI focuses on the use of contemporary web tools based on the multiple roles played by the 21st century educator. The micro perspective in instructional design draws attention to pedagogical approaches focusing on three main aspects: reading, discussion, and writing. With the effective use of pedagogical approaches through free reading and enterprises, technology adds new dimensions and expands the boundaries of learning capacity. Lastly, WeCWI also imparts the fundamental theories and models for web-based instructors' awareness, such as interactionist theory, cognitive information processing (CIP) theory, computer-mediated communication (CMC), the e-learning interactional-based model, inquiry models, the sensory mind model, and the learning styles model.

Keywords: WeCWI, instructional discovery, technological discovery, pedagogical discovery, theoretical discovery.

510 Analysis of the Energetic Feature of the Loaded Gait with Variation of the Trunk Flexion Angle

Authors: Ji-il Park, Hyungtae Seo, Jihyuk Park, Kwang jin Choi, Kyung-Soo Kim, Soohyun Kim

Abstract:

The purpose of this research is to investigate the energetic features of backpack-loaded gait in soldiers as the trunk flexion angle varies. It is believed that variation of trunk flexion in loaded gait, which often occurs in daily practice, may cause a significant difference in energy cost. To this end, seven healthy Korean military personnel participated in the experiment and were tested under three different walking postures: small, natural, and large trunk flexion. There is a difference of around 5 degrees in waist angle between the trunk flexion conditions. The ground reaction forces were collected from force plates, and kinematic data were measured by a motion capture system. Based on these data, the impulses, momenta, and mechanical work done on the center of body mass (COM) during the double support phase were computed. The results show that the push-off and heel-strike impulses are not affected by the change in trunk flexion; however, the mechanical work done by push-off and heel strike changes with the trunk flexion variation. This is because the vertical velocity of the COM during the double support phase increases significantly with increasing trunk flexion. Therefore, the efficiency of the loaded gait depends on the trunk flexion angle. Also, even though the gravitational impulse and the pre-collision momentum change with trunk flexion, the post-collision momentum is almost constant regardless of the trunk flexion variation.
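
A small numerical sketch of the quantities discussed: the impulse is the time integral of the ground reaction force over the double support phase, and the mechanical work done on the COM is the integral of force times COM velocity over the same interval. The force and velocity traces below are invented placeholders sampled at an assumed 1 kHz, not measured data.

```python
import numpy as np

dt = 0.001                                   # assumed 1 kHz sampling
t = np.arange(0.0, 0.12, dt)                 # ~120 ms double support phase (hypothetical)

# Invented vertical push-off force and COM vertical velocity traces
force = 700.0 + 150.0 * np.sin(np.pi * t / t[-1])      # N
com_velocity = 0.10 + 0.40 * t / t[-1]                  # m/s

impulse = np.trapz(force, dx=dt)                        # N*s, time integral of force
work = np.trapz(force * com_velocity, dx=dt)            # J, integral of F * v on the COM

print(f"push-off impulse ~ {impulse:.1f} N*s, work on COM ~ {work:.1f} J")
```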

Keywords: Loaded gait, collision, impulse, gravity, heel strike, push-off, gait analysis.

509 Geosynthetic Reinforced Unpaved Road: Literature Study and Design Example

Authors: D. Jayalakshmi, S. Bhosale

Abstract:

This paper, in its first part, presents the state-of-the-art literature on design approaches for geosynthetic reinforced unpaved roads. The literature starting from 1970, together with the critical appraisal of flexible pavement design by Giroud and Han (2004) and Jonathan Fannin (2006), is presented. The design example is illustrated for Indian conditions. The example compares the results computed by Giroud and Han's (2004) design method with the Indian Roads Congress guidelines IRC SP 72-2015. The input data considered relate to the subgrade soil condition of Maharashtra State in India. The unified soil classification of the subgrade soil is inorganic clay with high plasticity (CH), which is expansive, with a California bearing ratio (CBR) of 2% to 3%. The example covers the unreinforced case and geotextile reinforcement, varying the rut depth from 25 mm to 100 mm. The present results reveal that the base thickness for the unreinforced case from the IRC design catalogues is in good agreement with the Giroud and Han (2004) approach for rut depths in the range of 75 mm to 100 mm. Since the Giroud and Han (2004) method is applicable to both reinforced and unreinforced cases, the base thickness for the reinforced case has been derived for the Indian condition using the same data, the appropriate Nc factor, and the same rut depth. From this trial, for a CBR of 2%, the base thickness reduction due to geotextile inclusion is 35%. For the CBR range of 2% to 5% with different geosynthetic stiffnesses, the reduction in base course thickness will be evaluated, and validation will be carried out using the full-scale accelerated pavement testing setup at the College of Engineering Pune (COE), India.

Keywords: Base thickness, design approach, equation, full scale accelerated pavement set up, Indian condition.
