Search results for: Lewis number
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10136

8456 Investigating the Minimum RVE Size to Simulate Poly (Propylene carbonate) Composites Reinforced with Cellulose Nanocrystals as a Bio-Nanocomposite

Authors: Hamed Nazeri, Pierre Mertiny, Yongsheng Ma, Kajsa Duke

Abstract:

The background of the present study is the use of environment-friendly biopolymer and biocomposite materials. Among the recently introduced biopolymers, poly (propylene carbonate) (PPC) has been gaining attention. This study focuses on the size of representative volume elements (RVE) needed to simulate PPC composites reinforced by cellulose nanocrystals (CNCs) as a bio-nanocomposite. Before manufacturing such nanocomposites, numerical modeling should be implemented to explore and predict mechanical properties, which may be accomplished by creating and studying a suitable RVE. Other studies have reported the modeling of composites with rod-shaped fillers under the assumption that the fillers are unidirectionally aligned; modeling non-aligned filler dispersions, however, is considerably more difficult. This study investigates the minimum RVE size that enables subsequent FEA modeling. The matrix and nano-fillers were modeled using the finite element software ABAQUS, assuming randomly dispersed fillers with a filler mass fraction of 1.5%. To simulate the filler dispersion, a Monte Carlo technique was employed, and numerical simulation was used to find the composite elastic moduli. After commencing the simulation with a single filler particle, the number of particles was increased to assess the minimum number of filler particles that satisfies the requirements for an RVE, providing the composite elastic modulus in a reliable fashion.
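The convergence check at the heart of the RVE-size search can be sketched as follows; the modulus function below is a toy stand-in for the ABAQUS finite element solve, and all numbers are illustrative assumptions:

```python
import random

def estimate_modulus(n_fillers, seed=0):
    """Stand-in for the FE solve: in the actual study this value comes
    from an ABAQUS simulation of a randomly dispersed filler set."""
    rng = random.Random(seed)
    # Toy model: each randomly placed filler contributes a stiffening
    # term; scatter shrinks as more fillers are averaged.
    samples = [3.0 + 0.5 * rng.random() for _ in range(n_fillers)]
    return sum(samples) / n_fillers

def find_min_rve(tol=0.01, max_fillers=200):
    """Increase the filler count until the modulus estimate changes by
    less than `tol` (relative), mimicking the minimum-RVE-size search."""
    prev = estimate_modulus(1)
    for n in range(2, max_fillers + 1):
        cur = estimate_modulus(n, seed=n)
        if abs(cur - prev) / prev < tol:
            return n, cur
        prev = cur
    return max_fillers, prev

n, modulus = find_min_rve()
```

The returned n is the smallest particle count for which the elastic modulus estimate stabilizes, which is the abstract's criterion for a sufficient RVE.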

Keywords: biocomposite, Monte Carlo method, nanocomposite, representative volume element

Procedia PDF Downloads 446
8455 Re-Examining Contracts in Managing and Exploiting Strategic National Resources: A Case in Divestation Process in the Share Distribution of Mining Corporation in West Nusa Tenggara, Indonesia

Authors: Hayyan ul Haq, Zainal Asikin

Abstract:

This work aims to explore appropriate solutions to the legal problems stemming from the management and exploitation of strategic natural resources in Indonesia. The discussion focuses on the exploitation of gold mining, i.e. the divestation process of the Newmont Corporation, West Nusa Tenggara. These legal problems relate to deviations from the national budget regulation, UU No. 19/2012, and from the implementation of the divestation process, which infringes PP No. 50/2007 concerning the Implementation Procedure of Regional Cooperation, an implementing regulation of UU No. 1/2004 on the State's Treasury. The cooperation model developed by the Provincial Government has failed to create a permanent legal solution through a normative approach; it has merely used a practical approach that tends towards instant solutions by exploiting loopholes in the divestation process. These blunders have been compounded by secondary legal blunders against good governance principles, particularly the principles of justice, transparency, efficiency, effectiveness, and competitiveness. To solve the above problems, this work offers the constitutionalisation of contract, aimed at reviewing and bringing into coherence all deviated contracts, rules, and policies that have deprived the nation and society of the optimal benefit of strategic natural resources, towards the greatest benefit for the greatest number of people.

Keywords: constitutionalisation of contract, strategic national resources, divestation, the greatest benefit for the greatest number of people, Indonesian Pancasila values

Procedia PDF Downloads 460
8454 Developing a GIS-Based Tool for the Management of Fats, Oils, and Grease (FOG): A Case Study of Thames Water Wastewater Catchment

Authors: Thomas D. Collin, Rachel Cunningham, Bruce Jefferson, Raffaella Villa

Abstract:

Fats, oils and grease (FOG) are by-products of food preparation and cooking processes. FOG enters wastewater systems through a variety of sources such as households, food service establishments, and industrial food facilities. Over time, if no source control is in place, FOG builds up on pipe walls, leading to blockages and potentially to sewer overflows, which are a major risk to the environment and human health. UK water utilities spend millions of pounds annually trying to control FOG. Although UK legislation specifies that the discharge of such material is against the law, it is often difficult for water companies to identify and prosecute offenders, leading to uncertainty about the attitude to take in terms of FOG management. Research is needed to seize the full potential of implementing current practices. The aim of this research was to undertake a comprehensive study to document the extent of FOG problems in sewer lines and reinforce existing knowledge. Data were collected to develop a model estimating the quantities of FOG available for recovery within Thames Water wastewater catchments. Geographical Information System (GIS) software was used to integrate the data with a geographical component. FOG was responsible for at least one-third of sewer blockages in the Thames Water wastewater area. A waste-based approach was developed through an extensive review to estimate the potential for FOG collection and recovery. Three main sources were identified: residential, commercial and industrial, with commercial properties identified as among the major FOG producers. The total potential FOG generated was estimated for the 354 wastewater catchments. Additionally, raw and settled sewage were sampled and analysed for FOG (as hexane extractable material) monthly at 20 sewage treatment works (STW) for three years. A good correlation was found between the sampled FOG and population equivalent (PE).
On average, a difference of 43.03% was found between the estimated FOG (waste-based approach) and sampled FOG (raw sewage sampling). It was suggested that the approach undertaken could overestimate the FOG available, that the sampling could only capture a fraction of the FOG arriving at STW, and/or that the difference could account for FOG accumulating in sewer lines. Furthermore, it was estimated that on average FOG could contribute up to 12.99% of the primary sludge removed. The model was further used to investigate the relationship between estimated FOG and the number of blockages: the higher the FOG potential, the higher the number of FOG-related blockages. The GIS-based tool was used to identify critical areas, i.e. those with high FOG potential and a high number of FOG blockages. As reported in the literature, FOG was one of the main causes of sewer blockages. By identifying critical areas, the model further explored the potential for source control in terms of 'sewer relief' and waste recovery, helping to target where the benefits of implementing management strategies could be highest. However, FOG is still likely to persist throughout the networks, and further research is needed to assess downstream impacts (i.e. at STW).
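The headline comparison between the waste-based estimate and the sampled FOG is a relative difference; one plausible reading of that calculation is sketched below, with hypothetical per-catchment figures (the abstract does not report catchment-level values):

```python
def percent_difference(estimated, sampled):
    """Relative difference between the waste-based FOG estimate and the
    hexane-extractable FOG measured in raw sewage, as a percentage of
    the estimate."""
    return abs(estimated - sampled) / estimated * 100.0

# Hypothetical catchment figures (tonnes FOG per year), for illustration only.
catchments = {"A": (120.0, 70.0), "B": (95.0, 55.0)}
diffs = {k: percent_difference(e, s) for k, (e, s) in catchments.items()}
avg_diff = sum(diffs.values()) / len(diffs)
```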

Keywords: fat, FOG, GIS, grease, oil, sewer blockages, sewer networks

Procedia PDF Downloads 211
8453 Comparing Effects of Supervised Exercise Therapy versus Home-Based Exercise Therapy on Low Back Pain Severity, Muscle Strength and Anthropometric Parameters in Patients with Nonspecific Chronic Low Back Pain

Authors: Haleh Dadgostar, Faramarz Akbari, Hosien Vahid Tari, Masoud Solaymani-Dodaran, Mohammad Razi

Abstract:

Introduction: A number of exercise protocols have been applied to improve low back pain. We compared the effects of supervised exercise therapy and home-based exercise therapy among patients with nonspecific chronic low back pain. Methods: 70 patients with nonspecific chronic low back pain were randomly divided (using a random number generator in Excel) into two groups to compare the effects of the two types of exercise therapy. After a common educational session on how to live with low back pain and how to use core training protocols to strengthen the muscles, the subjects were randomly assigned to supervised exercise therapy (n = 31) or home-based exercise therapy (n = 34) for 20 weeks. Results: Although both types of exercise program reduced pain, the decrease was more significant in the supervised exercise program. All fitness scores improved significantly in the supervised exercise group, whereas only the knee extensor strength score increased in the home-based exercise group. Conclusion: The supervised exercise program proved more effective than the home-based program. Reductions in low back pain severity and improvements in muscle flexibility and strength can be better achieved with a 20-week supervised exercise program than with a home-based exercise program in patients with nonspecific chronic low back pain.

Keywords: low back pain, anthropometric parameters, supervised exercise therapy, home-based exercise therapy

Procedia PDF Downloads 330
8452 User-Awareness from Eye Line Tracing During Specification Writing to Improve Specification Quality

Authors: Yoshinori Wakatake

Abstract:

Many defects found after the release of software packages are caused by omissions of test items in test specifications. Poor test specifications are detected by manual review, which imposes a high human load. Preventing omissions depends on the end-user awareness of test specification writers: if test specifications were written while envisioning the behavior of end-users, the number of omitted test items would be greatly reduced. This paper draws attention to the observation that writers who can achieve this differ from those who cannot, not only in the richness of their descriptions but also in their gaze information. It proposes a method to estimate the degree of user-awareness of writers by analysing their gaze information while they write test specifications. We conducted an experiment to obtain the gaze information of writers of test specifications, and the specifications are then automatically classified from this information using a Random Forest model. The classification is highly accurate, and the explanatory variables that turn out to be important reveal the behavioral features distinguishing high-quality test specifications from others: pupil diameter and the number and duration of blinks. The paper also examines the automatically classified test specifications to discuss features of the writing style at each quality level. The proposed method enables test specifications to be classified automatically, and it helps prevent test item omissions because it reveals the writing features that high-quality test specifications should satisfy.
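As a library-free illustration of classifying a specification from the gaze features the study found important (pupil diameter, blink count, blink duration), here is a minimal majority-vote sketch. The thresholds are invented assumptions, and the paper itself uses a Random Forest model, not this rule:

```python
def classify_spec(pupil_diameter, blink_count, blink_duration):
    """Majority vote of three one-feature 'stumps' over the gaze
    features the study identified as important; the thresholds below
    are illustrative assumptions, not values from the paper."""
    votes = 0
    votes += pupil_diameter > 3.5      # dilated pupils (mm)
    votes += blink_count < 12          # few blinks while writing
    votes += blink_duration < 0.25     # short blinks (seconds)
    return "high-quality" if votes >= 2 else "low-quality"

label = classify_spec(pupil_diameter=4.1, blink_count=8, blink_duration=0.2)
```

A trained Random Forest replaces these hand-set thresholds with many decision trees fitted to labelled specifications, but the voting structure is the same idea.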

Keywords: blink, eye tracking, gaze information, pupil diameter, quality improvement, specification document, user-awareness

Procedia PDF Downloads 66
8451 Extended Intuitionistic Fuzzy VIKOR Method in Group Decision Making: The Case of Vendor Selection Decision

Authors: Nastaran Hajiheydari, Mohammad Soltani Delgosha

Abstract:

Vendor (supplier) selection is a group decision-making (GDM) process in which, based on some predetermined criteria, the experts' preferences are elicited in order to rank and choose the most desirable suppliers. In a real business environment, attitudes and choices are formed in uncertain and indecisive situations and cannot be expressed in a crisp framework; intuitionistic fuzzy sets (IFSs) handle such situations best. The VIKOR method was developed to solve multi-criteria decision-making (MCDM) problems. This method, which determines the compromise feasible solution with respect to conflicting criteria, introduces a multi-criteria ranking index based on a particular measure of 'closeness' to the 'ideal solution'. Until now, there has been little investigation of VIKOR with IFSs; we therefore extend VIKOR to solve the vendor selection problem under an intuitionistic fuzzy (IF) GDM environment. The present study develops an IF VIKOR method for a GDM situation: a model is presented to calculate the criterion weights based on an entropy measure, the interval-valued intuitionistic fuzzy weighted geometric (IFWG) operator is utilized to obtain the total decision matrix, and an approach based on the positive ideal intuitionistic fuzzy number (PIIFN) and negative ideal intuitionistic fuzzy number (NIIFN) is developed. Finally, the application of the proposed method to a vendor selection problem is illustrated.
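The crisp (non-fuzzy) core of the VIKOR ranking index can be sketched as follows; this omits the intuitionistic fuzzy extension the paper develops, and the vendor scores and weights are illustrative:

```python
def vikor(matrix, weights, v=0.5):
    """Crisp VIKOR ranking index Q (lower is better). Rows of `matrix`
    are vendors, columns are benefit criteria; S is group utility,
    R is individual regret, and v weighs the two."""
    ncrit = len(weights)
    f_best = [max(row[j] for row in matrix) for j in range(ncrit)]
    f_worst = [min(row[j] for row in matrix) for j in range(ncrit)]
    S, R = [], []
    for row in matrix:
        terms = [weights[j] * (f_best[j] - row[j]) / (f_best[j] - f_worst[j])
                 for j in range(ncrit)]
        S.append(sum(terms))
        R.append(max(terms))
    s_best, s_worst = min(S), max(S)
    r_best, r_worst = min(R), max(R)
    return [v * (S[i] - s_best) / (s_worst - s_best)
            + (1 - v) * (R[i] - r_best) / (r_worst - r_best)
            for i in range(len(matrix))]

# Three hypothetical vendors scored on two benefit criteria.
Q = vikor([[7, 9], [9, 6], [5, 8]], weights=[0.6, 0.4])
best = Q.index(min(Q))
```

With v = 0.5 the index balances group utility against individual regret; the vendor with the lowest Q is the compromise solution.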

Keywords: group decision making, intuitionistic fuzzy set, intuitionistic fuzzy entropy measure, vendor selection, VIKOR

Procedia PDF Downloads 157
8450 Assessment of Drought Tolerance Maize Hybrids at Grain Growth Stage in Mediterranean Area

Authors: Ayman El Sabagh, Celaleddin Barutçular, Hirofumi Saneoka

Abstract:

Drought is one of the most serious problems posing a grave threat to cereal production, including maize. Improving the drought-stress tolerance of maize poses a great challenge as the global need for food and bio-energy increases. Thus, the current study was planned to explore the variations in, and determine the performance of, target traits of maize hybrids at the grain growth stage under drought conditions during 2014 in Adana, under the Mediterranean climate conditions of Turkey. Maize hybrids (Sancia, Indaco, 71May69, Aaccel, Calgary, 70May82, 72May80) were evaluated under irrigated and water-stress treatments. Results revealed that grain yield and yield traits were negatively affected by water stress compared with normal irrigation, under which the maximum biological yield and harvest index were recorded. Significant differences were observed among the hybrids with respect to yield and yield traits. Based on the results, grain weight had a larger effect on grain yield than grain number during the grain filling growth stage under water-stress conditions. In this respect, owing to its low drought susceptibility index (lower grain yield losses), the hybrid Indaco was more stable in grain number and grain weight. Consequently, it may be concluded that this hybrid would be recommended for use in future breeding programs for the production of drought-tolerant hybrids.
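A common form of the drought susceptibility index referred to in the abstract is the Fischer-Maurer ratio, where values below 1 indicate better-than-average drought tolerance. A minimal sketch, with hypothetical yields:

```python
def drought_susceptibility(yield_stress, yield_irrigated):
    """Fischer-Maurer drought susceptibility index per hybrid:
    DSI = (1 - Ys/Yp) / D, where D is the drought intensity
    computed from the trial-wide mean yields."""
    d = 1 - sum(yield_stress) / sum(yield_irrigated)
    return [(1 - ys / yp) / d
            for ys, yp in zip(yield_stress, yield_irrigated)]

# Hypothetical grain yields (t/ha) for three hybrids: stressed vs irrigated.
dsi = drought_susceptibility([6.0, 5.2, 4.1], [8.0, 8.0, 8.0])
```

The first hybrid here loses proportionally less yield under stress than the trial average, so its index falls below 1, which is the property the abstract uses to single out Indaco.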

Keywords: drought susceptibility index, grain growth, grain yield, maize, water stress

Procedia PDF Downloads 331
8449 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images

Authors: U. Datta

Abstract:

The main objective of this study is to find a suitable approach to monitoring land infrastructure growth over a period of time using multispectral satellite images, since bi-temporal change detection cannot indicate continuous change over a long period. The approach used here estimates a statistical model from a series of multispectral images acquired over a long period of time, assuming there is no considerable change during that period, and then compares it with multispectral image data obtained at a later time; the change is estimated pixel-wise. A statistical composite hypothesis technique is used for pixel-based change detection in a defined region: the generalized likelihood ratio test (GLRT) detects a changed pixel from the probabilistic model estimated for the corresponding pixel. The test assumes that the images have been co-registered prior to estimation; to minimize errors due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are several challenges in this method. The first and foremost is obtaining a sufficiently large number of datasets for multivariate distribution modelling, as a large number of images are always discarded due to cloud coverage; moreover, imperfect modelling leads to a high probability of false alarms. The overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further.
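A minimal pixel-wise sketch of the GLRT decision under a Gaussian no-change model is shown below; under that model the test statistic reduces to the squared standardized deviation of the new observation from the baseline. The reflectance values and threshold are illustrative assumptions, and the full method works with multivariate distributions and neighborhood pixels:

```python
def glrt_changed(history, new_value, threshold=9.0):
    """Pixel-wise GLRT under a univariate Gaussian no-change model:
    estimate the baseline mean and variance from the time series,
    then flag the pixel if the standardized squared deviation of the
    new observation exceeds the threshold."""
    n = len(history)
    mean = sum(history) / n
    var = sum((x - mean) ** 2 for x in history) / (n - 1)
    stat = (new_value - mean) ** 2 / var
    return stat > threshold, stat

# Baseline reflectances for one pixel over 2015-2018, then a new scene.
changed, stat = glrt_changed([0.30, 0.32, 0.29, 0.31, 0.30], 0.45)
```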

Keywords: co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection

Procedia PDF Downloads 137
8448 Personality Characteristics, Managerial Skills and Career Preference

Authors: Dinesh Kumar Srivastava

Abstract:

After the liberalization of the economy, technical education has seen rapid growth in India, and a large number of institutions are offering various engineering and management programmes. Every year, many students complete B.Tech/M.Tech and MBA programmes at different institutes and universities in India and search for jobs in industry, and a large number of companies visit educational institutes for campus placements. These companies are interested in hiring competent managers, while most students show a preference for jobs with reputed companies and high compensation. In this context, this study was conducted to understand the career preferences of postgraduate students and junior executives. Personality characteristics influence work life as well as personal life. In the last two decades, the five-factor model of personality has been found to be a valid predictor of job performance and job satisfaction, an approach that has received support from studies conducted in different countries; it comprises neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness. Similarly, three social needs, namely achievement, affiliation, and power, influence motivation and performance in certain job functions. Both approaches have been considered in the study. The objective of the study was, first, to analyse the relationship between personality characteristics and the career preferences of students and executives, and secondly, to analyse the relationship between personality characteristics and the skills of students. Three managerial skills, namely conceptual, human, and technical, have been considered. The sample size of the study was 266, including postgraduate students and junior executives who have completed a BE/B.Tech/MBA programme. Three dimensions of career preference, namely identity, variety, and security, and the three managerial skills were considered as dependent variables.
The results indicated that neuroticism was not related to any dimension of career preference. Extraversion was not related to identity, variety, or security, but was positively related to the three skills. Openness to experience was positively related to the skills, and conscientiousness was positively related to variety and to the three skills. Similarly, the relationship between social needs and career preference was examined using correlation. The results indicated that the need for achievement was positively related to variety, identity, and security, as well as to the managerial skills. The need for affiliation was positively related to the three dimensions of career preference as well as to the managerial skills, and the need for power was likewise positively related to the three dimensions of career preference and the managerial skills. Social needs thus appear to be a stronger predictor of career preference and managerial skills than the big five traits. The findings have implications for selection processes in industry.

Keywords: big five traits, career preference, personality, social needs

Procedia PDF Downloads 274
8447 Transformation of Positron Emission Tomography Raw Data into Images for Classification Using Convolutional Neural Network

Authors: Paweł Konieczka, Lech Raczyński, Wojciech Wiślicki, Oleksandr Fedoruk, Konrad Klimaszewski, Przemysław Kopka, Wojciech Krzemień, Roman Shopa, Jakub Baran, Aurélien Coussat, Neha Chug, Catalina Curceanu, Eryk Czerwiński, Meysam Dadgar, Kamil Dulski, Aleksander Gajos, Beatrix C. Hiesmayr, Krzysztof Kacprzak, łukasz Kapłon, Grzegorz Korcyl, Tomasz Kozik, Deepak Kumar, Szymon Niedźwiecki, Dominik Panek, Szymon Parzych, Elena Pérez Del Río, Sushil Sharma, Shivani Shivani, Magdalena Skurzok, Ewa łucja Stępień, Faranak Tayefi, Paweł Moskal

Abstract:

This paper develops the transformation of non-image data into 2-dimensional matrices as a preparation stage for classification based on convolutional neural networks (CNNs). In positron emission tomography (PET) studies, a CNN may be applied directly to the reconstructed distribution of radioactive tracers injected into the patient's body, as a pattern recognition tool. Nonetheless, much PET data still exists in non-image format, which opens the question of whether it can be used for training CNNs. The main focus of this contribution is the problem of processing vectors with a small number of features in comparison to the number of pixels in the output images. The proposed methodology was applied to the classification of PET coincidence events.
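One simple way to pack a short feature vector into a 2-dimensional matrix suitable for CNN input is square reshaping with zero padding. This is a generic stand-in for the non-image-to-image transformation, not necessarily the mapping the authors use:

```python
import math

def vector_to_matrix(vec, pad_value=0.0):
    """Pack a short feature vector into the smallest square matrix that
    holds it, padding the tail with `pad_value`."""
    side = math.ceil(math.sqrt(len(vec)))
    padded = list(vec) + [pad_value] * (side * side - len(vec))
    return [padded[i * side:(i + 1) * side] for i in range(side)]

# A 7-feature coincidence-event vector becomes a 3x3 "image".
m = vector_to_matrix([1, 2, 3, 4, 5, 6, 7])
```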

Keywords: convolutional neural network, kernel principal component analysis, medical imaging, positron emission tomography

Procedia PDF Downloads 147
8446 Using Data Mining in Automotive Safety

Authors: Carine Cridelich, Pablo Juesas Cano, Emmanuel Ramasso, Noureddine Zerhouni, Bernd Weiler

Abstract:

Safety is one of the most important considerations when buying a new car. While active safety aims at avoiding accidents, passive safety systems such as airbags and seat belts protect the occupants in case of an accident. In addition to legal regulations, organizations like Euro NCAP provide consumers with an independent assessment of the safety performance of cars and drive the development of safety systems in the automobile industry. Those ratings are mainly based on injury assessment reference values derived from physical parameters measured in dummies during a car crash test. The components and sub-systems of a safety system are designed to achieve the required restraint performance, and sled tests and other types of tests are then carried out by car makers and their suppliers to confirm the protection level of the safety system. A Knowledge Discovery in Databases (KDD) process is proposed in order to minimize the number of tests. The KDD process is based on the data emerging from sled tests conducted according to Euro NCAP specifications. About 30 parameters of the passive safety systems from different data sources (crash data, dummy protocol) are first analysed together with experts' opinions. A procedure is proposed to manage missing data and is validated on real data sets. Finally, a procedure is developed to estimate a set of rough initial parameters of the passive system before testing, aiming at reducing the number of tests.
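The abstract does not detail its missing-data procedure; as a minimal stand-in, column-wise mean imputation over the sled-test parameter records might be sketched as follows, with `None` marking a missing measurement:

```python
def impute_missing(records):
    """Column-wise mean imputation: replace each missing value (None)
    with the mean of the values present in that column. A simple
    stand-in, not the validated procedure from the paper."""
    ncols = len(records[0])
    means = []
    for j in range(ncols):
        present = [r[j] for r in records if r[j] is not None]
        means.append(sum(present) / len(present))
    return [[means[j] if r[j] is None else r[j] for j in range(ncols)]
            for r in records]

# Toy records: two hypothetical sled-test parameters per test.
filled = impute_missing([[1.0, None], [3.0, 4.0], [None, 8.0]])
```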

Keywords: KDD process, passive safety systems, sled test, dummy injury assessment reference values, frontal impact

Procedia PDF Downloads 383
8445 Multi-Objective Optimization for Aircraft Fleet Management: A Parametric Approach

Authors: Xin-Yu Li, Dung-Ying Lin

Abstract:

Fleet availability is a crucial indicator for an aircraft fleet. In practice, however, fleet planning involves many resource and safety constraints, such as annual and monthly flight training targets and maximum engine usage limits. For safety reasons, engines must be removed for mandatory maintenance and replacement of key components, a situation known as a "threshold"; the annual number of thresholds is a key factor in maintaining fleet availability. The traditional method relies heavily on experience and manual planning, which may result in ineffective engine usage and affect flight missions. This study addresses the challenges of fleet planning and availability maintenance in aircraft fleets with resource and safety constraints, with the goal of effectively optimizing engine usage and maintenance tasks. It pursues four objectives: minimizing the number of engine thresholds, minimizing the monthly shortfall of flight hours, minimizing the monthly excess of flight hours, and minimizing engine disassembly frequency. To solve the resulting formulation, this study uses parametric programming techniques and the ϵ-constraint method to reformulate the multi-objective problem into single-objective problems, efficiently generating Pareto fronts. This method is advantageous when handling multiple conflicting objectives, as it allows an effective trade-off between the competing objectives. Empirical results and managerial insights will be provided.
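The ϵ-constraint reformulation can be sketched on a toy discrete bi-objective version of the problem: minimize one objective subject to an upper bound ϵ on the other, then sweep ϵ to trace the Pareto front. The plan data below are invented for illustration:

```python
def epsilon_constraint(plans, f1, f2, eps_values):
    """Reformulate a bi-objective problem as a series of single-objective
    problems: minimize f1 subject to f2 <= eps, sweeping eps to trace the
    Pareto front (brute force over a discrete set of candidate plans)."""
    front = []
    for eps in eps_values:
        feasible = [p for p in plans if f2(p) <= eps]
        if feasible:
            best = min(feasible, key=f1)
            point = (f1(best), f2(best))
            if point not in front:
                front.append(point)
    return front

# Toy plans: (number of engine thresholds, monthly flight-hour shortfall).
plans = [(3, 10), (4, 6), (5, 2), (6, 1)]
front = epsilon_constraint(plans, f1=lambda p: p[0], f2=lambda p: p[1],
                           eps_values=[1, 2, 6, 10])
```

Each ϵ value yields one single-objective problem; as ϵ relaxes, the minimizer of f1 trades thresholds against flight-hour shortfall, tracing the Pareto front.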

Keywords: aircraft fleet, engine utilization planning, multi-objective optimization, parametric method, Pareto optimality

Procedia PDF Downloads 30
8444 Planning for Brownfield Regeneration in Malaysia: An Integrated Approach in Creating Sustainable Ex-Landfill Redevelopment

Authors: Mazifah Simis, Azahan Awang, Kadir Arifin

Abstract:

Brownfield regeneration is already being implemented in developed countries. As a group 1 developing country in South East Asia, Malaysia faces rapid development and an increasing urban population that have urged the need to incorporate brownfield regeneration into its physical planning. The increasing number of urban ex-landfills is seen as a new resource that could overcome the inadequate provision of urban green space. With regard to this new development approach in urban planning, this perception study aims to identify a sustainable planning approach based on what the stakeholders have in mind. Respondents consist of 375 members of local communities within four urban ex-landfill areas and 61 landscape architect and town planner officers in Malaysian local authorities. Three main objectives are set to be achieved: (i) to identify ex-landfill issues that need to be overcome prior to redevelopment, (ii) to identify the most suitable types of ex-landfill redevelopment, and (iii) to identify the priority functions for ex-landfill redevelopment as public parks. From the data gathered through the survey, an order of priorities based on the stakeholders' perception was produced. The results show differing perceptions among the stakeholders, but they agreed on the development of public parks as the main redevelopment type. Hence, this study attempts to produce an integrated approach as a model for sustainable ex-landfill redevelopment that could be accepted by the stakeholders as a beneficial future development, one that could change the image of the 296 ex-landfills in Malaysia into urban public parks by the year 2020.

Keywords: brownfield regeneration, ex-landfill redevelopment, integrated approach, stakeholders' perception

Procedia PDF Downloads 355
8443 Patient Care Needs Assessment: An Evidence-Based Process to Inform Quality Care and Decision Making

Authors: Wynne De Jong, Robert Miller, Ross Riggs

Abstract:

Beyond the number of nurses providing care for patients, having nurses with the right skills, experience and education is essential to ensure the best possible outcomes for patients. Research studies continue to link nurse staffing and skill mix with nurse-sensitive patient outcomes; numerous studies clearly show that superior patient outcomes are associated with higher levels of regulated staff. Due to the limited number of tools and processes available to assist nurse leaders with staffing models of care, nurse leaders are constantly faced with the ongoing challenge to ensure their staffing models of care best suit their patient population. In 2009, several hospitals in Ontario, Canada participated in a research study to develop and evaluate an RN/RPN utilization toolkit. The purpose of this study was to develop and evaluate a toolkit for Registered Nurses/Registered Practical Nurses Staff mix decision-making based on the College of Nurses of Ontario, Canada practice standards for the utilization of RNs and RPNs. This paper will highlight how an organization has further developed the Patient Care Needs Assessment (PCNA) questionnaire, a major component of the toolkit. Moreover, it will demonstrate how it has utilized the information from PCNA to clearly identify patient and family care needs, thus providing evidence-based results to assist leaders with matching the best staffing skill mix to their patients.

Keywords: nurse staffing models of care, skill mix, nursing health human resources, patient safety

Procedia PDF Downloads 317
8442 Numerical Investigation of a Spiral Bladed Tidal Turbine

Authors: Mohammad Fereidoonnezhad, Seán Leen, Stephen Nash, Patrick McGarry

Abstract:

From the perspective of research innovation, the tidal energy industry is still in its early stages. While a very small number of turbines have progressed to utility-scale deployment, blade breakage is commonly reported due to the enormous hydrodynamic loading applied to devices. The aim of this study is the development of computer simulation technologies for the design of next-generation fibre-reinforced composite tidal turbines. This will require significant technical advances in the areas of tidal turbine testing and multi-scale computational modelling. The complex turbine blade profiles are designed to incorporate non-linear distributions of airfoil sections to optimize power output and self-starting capability while reducing power fluctuations. A number of candidate blade geometries are investigated, ranging from spiral to parabolic, with blades arranged in both cylindrical and spherical configurations on a vertical axis turbine. A combined blade element theory (BET) start-up model is developed in MATLAB to perform computationally efficient parametric design optimisation for a range of turbine blade geometries. Finite element models are developed to identify optimal fibre-reinforced composite designs to increase blade strength and fatigue life. Advanced fluid-structure-interaction analyses are also carried out to compute blade deflections following design optimisation.

Keywords: tidal turbine, composite materials, fluid-structure-interaction, start-up capability

Procedia PDF Downloads 123
8441 Evolutionary Swarm Robotics: Dynamic Subgoal-Based Path Formation and Task Allocation for Exploration and Navigation in Unknown Environments

Authors: Lavanya Ratnabala, Robinroy Peter, E. Y. A. Charles

Abstract:

This research paper addresses the challenges of exploration and navigation in unknown environments from an evolutionary swarm robotics perspective. Path formation plays a crucial role in enabling cooperative swarm robots to accomplish these tasks. The paper presents a method called the sub-goal-based path formation, which establishes a path between two different locations by exploiting visually connected sub-goals. Simulation experiments conducted in the Argos simulator demonstrate the successful formation of paths in the majority of trials. Furthermore, the paper tackles the problem of inter-collision (traffic) among a large number of robots engaged in path formation, which negatively impacts the performance of the sub-goal-based method. To mitigate this issue, a task allocation strategy is proposed, leveraging local communication protocols and light signal-based communication. The strategy evaluates the distance between points and determines the required number of robots for the path formation task, reducing unwanted exploration and traffic congestion. The performance of the sub-goal-based path formation and task allocation strategy is evaluated by comparing path length, time, and resource reduction against the A* algorithm. The simulation experiments demonstrate promising results, showcasing the scalability, robustness, and fault tolerance characteristics of the proposed approach.
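The distance-based sizing step of the task allocation strategy, which determines the required number of robots for a path formation task, can be sketched as follows; the safety factor and the exact rule are illustrative assumptions, not the paper's strategy:

```python
import math

def robots_required(distance, comm_range, safety_factor=1.2):
    """Estimate how many relay robots a sub-goal path needs so that
    neighbouring robots stay within local-communication range; the
    safety factor (margin against path curvature) is an assumption."""
    return math.ceil(distance * safety_factor / comm_range)

# A 10 m path with a 2 m communication range, under the assumed margin.
n = robots_required(distance=10.0, comm_range=2.0)
```

Capping the team at this estimate is what lets the strategy avoid the unwanted exploration and traffic congestion described above: surplus robots are simply not allocated to the path.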

Keywords: swarm, path formation, task allocation, ARGoS, exploration, navigation, sub-goal

Procedia PDF Downloads 44
8440 Damping and Stability Evaluation for the Dynamical Hunting Motion of the Bullet Train Wheel Axle Equipped with Cylindrical Wheel Treads

Authors: Barenten Suciu

Abstract:

Classical matrix calculus and Routh-Hurwitz stability conditions, applied to the snake-like motion of the conical wheel axle, lead to the conclusion that the hunting mode is inherently unstable, and its natural frequency is a complex number. In order to analytically solve such a complicated vibration model, either the inertia terms were neglected, in the model designated as geometrical, or restrictions on the creep coefficients and yawing diameter were imposed, in the so-called dynamical model. Here, an alternative solution is proposed to solve the hunting mode, based on the observation that the bullet train wheel axle is equipped with cylindrical wheels. One argues that for such wheel treads, the geometrical hunting is irrelevant, since its natural frequency becomes nil, but the dynamical hunting is significant, since its natural frequency reduces to a real number. Moreover, one illustrates that the geometrical simplification of the wheel causes the stabilization of the hunting mode, since the characteristic quartic equation, derived for conical wheels, reduces to a quadratic equation with positive coefficients for cylindrical wheels. Quite simple analytical expressions for the damping ratio and natural frequency are obtained, without imposing restrictions on the model of contact. Graphs of the time-dependent hunting lateral perturbation, including the maximal and inflexion points, are presented both for the critically-damped and the over-damped wheel axles.
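The payoff of the quadratic reduction can be sketched numerically: for a characteristic equation a·s² + b·s + c = 0 with positive coefficients, the damping ratio and natural frequency follow directly from the standard second-order form (the coefficient values below are placeholders, not values from the paper):

```python
import math

def hunting_mode(a, b, c):
    """For a characteristic quadratic a*s^2 + b*s + c = 0 with positive
    coefficients, return (natural frequency, damping ratio). Positive
    coefficients guarantee Routh-Hurwitz stability of a 2nd-order system."""
    wn = math.sqrt(c / a)                   # natural frequency
    zeta = b / (2.0 * math.sqrt(a * c))     # damping ratio
    return wn, zeta
```

A damping ratio of exactly 1 marks the critically-damped axle; values above 1 give the over-damped response, matching the two cases whose lateral-perturbation graphs are presented.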

Keywords: bullet train, creep, cylindrical wheels, damping, dynamical hunting, stability, vibration analysis

Procedia PDF Downloads 153
8439 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images

Authors: Eiman Kattan, Hong Wei

Abstract:

In using a Convolutional Neural Network (CNN) for classification, a set of hyperparameters is available for configuration. This study aims to evaluate the impact of a range of parameters in a CNN architecture, i.e. AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on the classification performance. The parameters concerned are epoch values, batch size, and convolutional filter size against input image size. Thus, a set of experiments was conducted to specify the effectiveness of the selected parameters using two implementation approaches, namely pretrained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128 and 200). The impact of the kernel size of convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was evaluated against the image sizes under testing (64, 96, 128, 180 and 224), which gave us insight into the relationship between the size of convolutional filters and image size. To generalise the validation, four remote sensing datasets, AID, RSD, UCMerced and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as the number of classes, amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments. It has shown efficiency in both training and testing. The results have shown that increasing the number of epochs leads to a higher accuracy rate, as expected; however, the convergence state is highly related to the dataset. For the batch size evaluation, it has been shown that a larger batch size slightly decreases the classification accuracy compared to a small batch size.
For example, selecting the value 32 as the batch size on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. At the other extreme, increasing the batch size to 200 reduces the accuracy rate to 86.5% at the 11th epoch, and to 63% when using only one epoch. On the other hand, the choice of kernel size is only loosely related to the dataset; from a practical point of view, the filter size 20 produces an accuracy of 70.4286%. The final experiment, on image size, shows a dependency in the accuracy improvement; however, the performance gain came at considerable computational expense. These conclusions open opportunities toward better classification performance in various applications such as planetary remote sensing.
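The experimental design amounts to a grid over the four hyperparameters. A minimal, framework-agnostic sketch of enumerating exactly the combinations listed above (with the actual training call omitted):

```python
from itertools import product

def grid(epochs, batch_sizes, kernel_sizes, image_sizes):
    """Enumerate the hyperparameter combinations of a sweep like the one
    described above, skipping kernels larger than the input image."""
    for e, b, k, s in product(epochs, batch_sizes, kernel_sizes, image_sizes):
        if k < s:  # a convolutional filter cannot exceed the image
            yield {"epochs": e, "batch": b, "kernel": k, "image": s}

# the epoch endpoints, batch sizes, kernel sizes and image sizes from the study
configs = list(grid([1, 11], [32, 64, 128, 200],
                    [1, 3, 5, 7, 10, 15, 20, 25, 30],
                    [64, 96, 128, 180, 224]))
```

Each `config` would then be passed to a training run (e.g. in DIGITS) and the resulting accuracies tabulated per dataset, which is the procedure the reported numbers come from.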

Keywords: CNNs, hyperparameters, remote sensing, land cover, land use

Procedia PDF Downloads 171
8438 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

The machine learning techniques based on a convolutional neural network (CNN) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely developed in the deep learning framework. It is generally considered a challenging problem to derive visual interpretation from high-dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to a training set that generally needs to be large enough for the model under consideration to generalize effectively. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of consistently small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown per filter.
The computational cost of the back-propagation procedure does not increase with larger filter sizes, even though additional computational cost is incurred in the computation of the convolutions in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which a quantitative comparison is performed between well-known CNN architectures and our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
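A minimal NumPy sketch of the core idea, fixed random kernels of several sizes with one scalar weight each, might look as follows (the scalar initialisation from the filter's standard deviation is an assumption for illustration; in training, only these scalars would be updated while the random kernels stay frozen):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_same(img, kernel):
    """Naive 'same'-size 2D correlation with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, kh - 1 - ph), (pw, kw - 1 - pw)))
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def random_kernel_layer(img, sizes=(3, 5, 7)):
    """One layer: fixed random kernels of several sizes, each response
    weighted by a single scalar (here initialised from the kernel std)."""
    responses = []
    for k in sizes:
        kern = rng.standard_normal((k, k))
        scale = 1.0 / kern.std()        # the one trainable scalar per filter
        responses.append(scale * conv2d_same(img.astype(float), kern))
    return np.stack(responses)          # shape (n_filters, H, W)
```

Because only the per-filter scalars are unknowns, back-propagation cost is independent of kernel size, which is the trade-off the abstract highlights.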

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 292
8437 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting

Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas

Abstract:

The paper presents the results and industrial applications of production setup period estimation based on industrial data inherited from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the participating industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation and painting industry branches. Typically, 20% of these parts are new work, which means that almost the entire product portfolio is replaced every five years in their low-series manufacturing environment. Consequently, a flexible production system is required, where the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters were studied and grouped to create an adequate training information set for an artificial neural network as a basis for the estimation of the individual setup periods. In the first group, product information is collected, such as the product name and number of items. The second group contains material data like material type and colour. In the third group, surface quality and tolerance information are collected, including the finest surface and tightest (or narrowest) tolerance. The fourth group contains the setup data, like machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner's estimations were based previously. The artificial neural network model was trained on several thousand real industrial data records.
The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time the deviation of the prognosis was also improved by 50%. Furthermore, an investigation of the mentioned parameter groups considering the manufacturing order was also carried out. The paper also highlights the manufacturing introduction experiences and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week, more than 100 real industrial setup events occur and the related data are collected.
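The grouped inputs map naturally onto a small feed-forward regressor. The sketch below uses scikit-learn's `MLPRegressor` on synthetic stand-in data; the feature names, value ranges, and target relation are invented for illustration, since the industrial data are not public:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 200
# Hypothetical stand-ins for the four feature groups described above
items = rng.integers(1, 50, n)        # group 1: number of items
material = rng.integers(0, 3, n)      # group 2: material type (categorical)
tol = rng.uniform(5, 100, n)          # group 3: tightest tolerance
tools = rng.integers(1, 8, n)         # the previously-used tool-count factor
# Invented target: setup period in minutes
setup_min = (10 + 2.5 * tools + 0.3 * tol
             + 5 * (material == 2) + rng.normal(0, 2, n))

# One-hot encode the categorical material type, then standardise
X = np.column_stack([items, tol, tools, np.eye(3)[material]])
X = (X - X.mean(axis=0)) / X.std(axis=0)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                     random_state=0).fit(X, setup_min)
```

A network of this kind, trained on thousands of MES/CAD records, is the sort of estimator that could replace a pure tool-count heuristic for quoting setup times.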

Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation

Procedia PDF Downloads 246
8436 Avian Esophagus: A Comparative Microscopic Study In Birds With Different Feeding Habits

Authors: M. P. S. Tomar, Himanshu R. Joshi, P. Jagapathi Ramayya, Rakhi Vaish, A. B. Shrivastav

Abstract:

The morphology of an organ system varies according to the feeding habit, habitat and nature of an animal's lifestyle; this phenomenon is called adaptation. During evolution, these morphological changes make the system species-specific, so studying their differential characteristics makes the morpho-physiological adaptation easier to understand. Hence, the present study was conducted on the esophagus of the pariah kite, median egret, goshawk, dove and duck. The esophagus in all birds comprised four layers, viz. tunica mucosa, tunica submucosa, tunica muscularis and tunica adventitia. The mucosa of the esophagus showed longitudinal folds, and thus the lumen was irregular. The epithelium was stratified squamous in all birds, but in the median egret the cells were large and vacuolated. Among these species, a very thick epithelium was observed in the goshawk and duck, but keratinization was highest in the dove. The stratum spongiosum was 7-8 layers thick in both the pariah kite and goshawk. In all birds, the glands were of the alveolar mucous-secreting type. In the median egret and pariah kite, these were round or oval in shape and with or without a lumen depending upon the functional status, whereas in the goshawk the shape of the glands varied from spherical/oval to triangular with openings towards the lumen according to the functional activity, and in the dove these glands were oval in shape. The glands were numerous in the egret, while there were one or two in each fold in the goshawk and fewer in the other three species. The core of the mucosal folds was occupied by the lamina propria and showed a large number of collagen fibers and cellular infiltration in the pariah kite, egret and dove, whereas in the goshawk and duck, collagen and reticular fibers were fewer and cellular infiltration was lesser. The lamina muscularis was very thick in all species and comprised longitudinally arranged smooth muscle fibers; in the median egret, it was in a wavy pattern. The tunica submucosa was very thin in all species.
The tunica muscularis mostly comprised circular smooth muscle bundles in all species, but the longitudinal bundles were very few in number and not continuous. The tunica adventitia comprised loose connective tissue containing collagen and elastic fibers with numerous small blood vessels in all species. Further, it was observed that the structure of the esophagus in birds varies according to their feeding habits.

Keywords: dove, duck, egret, esophagus, goshawk, kite

Procedia PDF Downloads 441
8435 Bayesian Locally Approach for Spatial Modeling of Visceral Leishmaniasis Infection in Northern and Central Tunisia

Authors: Kais Ben-Ahmed, Mhamed Ali-El-Aroui

Abstract:

This paper develops a Local Generalized Linear Spatial Model (LGLSM) to describe the spatial variation of visceral leishmaniasis (VL) infection risk in northern and central Tunisia. The response from each region is the number of affected children under five years of age recorded from 1996 through 2006 by Tunisian pediatric departments and treated as Poisson county-level data. The model includes climatic factors, namely averages of annual rainfall, extreme values of low temperatures in winter and high temperatures in summer to characterize the climate of each region according to a continentality index, the pluviometric quotient of Emberger (Q2) to characterize bioclimatic regions, and a component for residual extra-Poisson variation. The statistical results show a progressive increase in the number of affected children in regions with a high continentality index and low mean yearly rainfall. On the other hand, an increase in the pluviometric quotient of Emberger contributed to a significant increase in the VL incidence rate. When compared with the original GLSM, the Bayesian local modeling is an improvement and gives a better approximation of the Tunisian VL risk estimate. For Bayesian inference, we use vague priors for all model parameters and the Markov chain Monte Carlo method.
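At its core, the non-spatial part of such a model is a Poisson regression with a log link. A compact illustration of fitting one by iteratively reweighted least squares, as a frequentist stand-in for the likelihood the MCMC sampler explores (covariates and coefficients in the example are synthetic):

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Poisson GLM with log link fitted by iteratively reweighted least
    squares: a minimal stand-in for the likelihood core of a spatial
    count model with climatic covariates."""
    X = np.column_stack([np.ones(len(y)), X])   # prepend intercept
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                   # fitted means
        W = mu                                  # Poisson working weights
        z = X @ beta + (y - mu) / mu            # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta
```

The full LGLSM additionally carries spatially varying coefficients and an extra-Poisson random effect, which is where the Bayesian machinery with vague priors and MCMC comes in.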

Keywords: generalized linear spatial model, local model, extra-Poisson variation, continentality index, visceral leishmaniasis, Tunisia

Procedia PDF Downloads 399
8434 Protection of Victims’ Rights in International Criminal Proceedings

Authors: Irina Belozerova

Abstract:

In recent years, the number of crimes against peace and humanity has been constantly increasing. The development of the international community is inseparably connected to compliance with the law, which protects the rights and interests of citizens in all of their manifestations. The provisions of the law of criminal procedure are no exception. The rights of the victims of genocide, of war crimes and of crimes against humanity require particular attention. These crimes fall within the jurisdiction of the International Criminal Court governed by the Rome Statute of the International Criminal Court. These crimes have the following features. First, any such crime has a mass character and therefore requires specific regulation in international criminal law and procedure and in the national criminal law and procedure of different countries. Second, the victims of such crimes are usually children, women and old people; entire national, ethnic, racial or religious groups are destroyed. These features influence the classification of victims by the age criterion. Article 68 of the Rome Statute provides for protection of the safety, physical and psychological well-being, dignity and privacy of victims and witnesses and thus determines the procedural status of these persons. However, not all the persons whose rights have been violated by the commission of these crimes acquire the status of victims. This is due to the fact that such crimes affect a huge number of persons and it is impossible to mention them all by name. It is also difficult to assess the entire damage suffered by the victims. While assessing the amount of damages, it is essential to take into account physical and moral harm, as well as property damage. The procedural status of victims thus gains an exclusive character. In order to determine the full extent of the damage suffered by the victims, it is necessary to collect sufficient evidence.
However, it is extremely difficult to collect evidence that would ensure the full and objective protection of the victims' rights. While making requests for the collection of evidence, the International Criminal Court faces the problem of protection of national security information. Religious beliefs and the family life of victims are also of great importance. In some Islamic countries, it is impossible to question a woman without her husband's consent, which affects the objectivity of her testimony. Finally, the number of victims runs into the hundreds and thousands. The assessment of these elements demands time and highly qualified work. These factors justify the creation of a mechanism that would help to collect the evidence and establish the truth in international criminal proceedings. Such a mechanism would help to impose a just and appropriate punishment on persons accused of having committed a crime, since, in committing the crime, the criminals could not have misunderstood the outcome of their criminal intent.

Keywords: crimes against humanity, evidence in international criminal proceedings, international criminal proceedings, protection of victims

Procedia PDF Downloads 250
8433 Improvement in Drought Stress Tolerance in Wheat by Arbuscular Mycorrhizal Fungi

Authors: Seema Sangwan, Ekta Narwal, Kannepalli Annapurna

Abstract:

The aim of this study was to determine the effect of arbuscular mycorrhizal fungi (AMF) inoculation on drought stress tolerance in 3 genotypes of wheat subjected to moderate water stress, i.e. HD 3043 (drought tolerant), HD 2987 (drought tolerant), and HD 2967 (drought sensitive). Various growth parameters were studied, e.g. total dry weight, total shoot and root length, root volume, root surface area, grain weight and number, leaf area, chlorophyll content in leaves, relative water content, number of spores and percent colonisation of roots by arbuscular mycorrhizal fungi. Total dry weight, root surface area and chlorophyll content were found to be significantly higher in AMF-inoculated plants as compared to the non-mycorrhizal ones, and also higher in the drought-tolerant varieties of wheat as compared to the sensitive variety HD 2967, in moderate water stress treatments. Leakage of electrolytes was lower in the case of AMF-inoculated stressed plants. Under continuous water stress, leaf water content and leaf area were significantly increased in AMF-inoculated plants as compared to un-inoculated stressed plants. Overall, the increased colonisation of wheat roots by AMF in inoculated plants, whether drought-tolerant or sensitive, could have a beneficial effect in alleviating the harmful effects of water stress in wheat and delaying its senescence.

Keywords: Arbuscular mycorrhizal fungi, wheat, drought, stress

Procedia PDF Downloads 199
8432 Optimal Risk and Financial Stability

Authors: Rahmoune Abdelhaq

Abstract:

Systemic risk is a key concern for central banks charged with safeguarding overall financial stability. In this work, we investigate how systemic risk is affected by the structure of the financial system. We construct banking systems that are composed of a number of banks connected by interbank linkages. We then vary the key parameters that define the structure of the financial system — including its level of capitalization, the degree to which banks are connected, the size of interbank exposures and the degree of concentration of the system — and analyse the influence of these parameters on the likelihood of contagious (knock-on) defaults. First, we find that the better capitalized banks are, the more resilient the banking system is against contagious defaults, and this effect is non-linear. Second, the effect of the degree of connectivity is non-monotonic: initially, a small increase in connectivity increases the contagion effect, but after a certain threshold value, connectivity improves the ability of the banking system to absorb shocks. Third, the size of interbank liabilities tends to increase the risk of knock-on default, even if banks hold capital against such exposures. Fourth, more concentrated banking systems are shown to be prone to larger systemic risk, all else equal. In an extension to the main analysis, we study how liquidity effects interact with banking structure to produce a greater chance of systemic breakdown. We finally consider how the risk of contagion might depend on the degree of asymmetry (tiering) inherent in the structure of the banking system. A number of our results have important implications for public policy, which this paper also draws out. The paper also discusses why bank risk management is needed to attain the optimal level of risk.
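The knock-on mechanism can be made concrete with a small simulation: banks are nodes, interbank exposures are weighted edges, and a borrower's default wipes out the corresponding exposure from each lender's capital, possibly triggering further defaults. This is an illustrative sketch in the spirit of the experiments described, not the authors' model:

```python
import numpy as np

def knockon_defaults(exposures, capital, shocked_bank, loss_rate=1.0):
    """Count total defaults after an idiosyncratic shock to one bank.
    exposures[i, j] = amount bank i has lent to bank j; a defaulting
    borrower destroys loss_rate * exposure of each lender's capital."""
    n = len(capital)
    capital = capital.astype(float).copy()
    defaulted = np.zeros(n, dtype=bool)
    capital[shocked_bank] = -1.0            # the initial shock
    newly = [shocked_bank]
    while newly:                            # propagate wave by wave
        nxt = []
        for j in newly:
            defaulted[j] = True
            for i in range(n):
                if not defaulted[i] and exposures[i, j] > 0:
                    capital[i] -= loss_rate * exposures[i, j]
                    if capital[i] < 0:
                        nxt.append(i)
        newly = [b for b in set(nxt) if not defaulted[b]]
    return int(defaulted.sum())
```

Sweeping capitalization, connectivity, exposure size and concentration over many randomly generated networks of this kind reproduces the sort of comparative statics reported above.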

Keywords: financial stability, contagion, liquidity risk, optimal risk

Procedia PDF Downloads 402
8431 High Prevalence of Multi-drug Resistant Diarrheagenic Escherichia coli among Hospitalised Diarrheal Patients in Kolkata, India

Authors: Debjani Ghosh, Goutam Chowdhury, Prosenjit Samanta, Asish Kumar Mukhopadhyay

Abstract:

Acute diarrhoea caused by diarrheagenic Escherichia coli (DEC) is one of the major public health problems in developing countries, mainly in Asia and Africa. DEC consists of six pathogroups, but the majority of cases are associated with three of them: enterotoxigenic E. coli (ETEC), enteroaggregative E. coli (EAEC), and enteropathogenic E. coli (EPEC). Hence, we studied the prevalence and antimicrobial resistance of these three major DEC pathogroups in hospitalized diarrheal patients in Kolkata, India, during 2012-2019 with a large sample size. 8,891 stool samples were processed, and 7.8% of them were identified as DEC infections screened by multiplex PCR, in which ETEC was most common (47.7%), followed by EAEC (38.4%) and EPEC (13.9%). Clinical patient history suggested that children <5 years of age were mostly affected by ETEC and EAEC, whereas people >5-14 years of age were significantly associated with EPEC and ETEC infections. Antibiogram profiles showed a high prevalence of multidrug-resistant (MDR) isolates among DEC (56.9%), of which 9% were resistant to antibiotics of six different antimicrobial classes. Screening of the antibiotic resistance-conferring genes in DEC showed the presence of blaCTX-M (30.2%) in the highest number, followed by blaTEM (27.5%), tetB (18%), sul2 (12.6%), strA (11.8%), aadA1 (9.8%), blaOXA-1 (9%), dfrA1 (1.6%) and blaSHV (1.2%), which indicates the existence of mobile genetic elements in those isolates. Therefore, the presence of MDR DEC strains in high numbers should alert the public health authorities to take preventive measures before an upsurge of DEC-caused diarrhea cases in the near future.

Keywords: diarrheagenic escherichia coli, ETEC, EAEC, EPEC

Procedia PDF Downloads 163
8430 Analysis of Saudi Breast Cancer Patients’ Primary Tumors using Array Comparative Genomic Hybridization

Authors: L. M. Al-Harbi, A. M. Shokry, J. S. M. Sabir, A. Chaudhary, J. Manikandan, K. S. Saini

Abstract:

Breast cancer is the second most common cause of cancer death worldwide and is the most common malignancy among Saudi females. During breast carcinogenesis, a wide array of cytogenetic changes involving deletions, amplifications, or translocations of part or whole chromosome regions has been observed. Because of the limitations of various earlier technologies, newer tools have been developed to scan for changes at the genomic level. Recently, the Array Comparative Genomic Hybridization (aCGH) technique has been applied for detecting segmental genomic alterations at the molecular level. In this study, aCGH was performed on twenty breast cancer tumors and their matching non-tumor (normal) counterparts using the Agilent 2x400K platform. Several regions were identified as either amplified or deleted in a tumor-specific manner. The most frequent alterations were amplifications of chromosomes 1q, 8q and 20q; deletions at 16q were also detected. The amplification events at 1q and 8q were further validated by FISH analysis using probes targeting 1q25 and 8q (MYC gene). The copy number changes at these loci can potentially cause a significant change in tumor behavior, as deletions in the E-cadherin (CDH1) tumor suppressor gene as well as amplification of the oncogenes Aurora kinase A (AURKA) and MYC could make these tumors highly metastatic. This study validates the use of aCGH in Saudi breast cancer patients and sets the foundations necessary for performing larger cohort studies searching for ethnicity-specific biomarkers and gene copy number variations.

Keywords: breast cancer, molecular biology, ecology, environment

Procedia PDF Downloads 377
8429 Rayleigh-Bénard-Taylor Convection of Newtonian Nanoliquid

Authors: P. G. Siddheshwar, T. N. Sakshath

Abstract:

In this paper, we perform linear and non-linear stability analyses of Rayleigh-Bénard convection of a Newtonian nanoliquid in a rotating medium (referred to as Rayleigh-Bénard-Taylor convection). Rigid-rigid isothermal boundaries are considered for the investigation. The Khanafer-Vafai-Lightstone single-phase model is used for studying instabilities in nanoliquids. Various thermophysical properties of the nanoliquid are obtained using phenomenological laws and mixture theory. The eigen boundary value problem is solved for the Rayleigh number using an analytical method with trigonometric eigenfunctions. We observe that the critical nanoliquid Rayleigh number is less than that of the base liquid; thus, the onset of convection is advanced by the addition of nanoparticles. An increase in volume fraction therefore leads to an advanced onset and thereby an increase in heat transport. The amplitudes of the convective modes required for estimating the heat transport are determined analytically. The tri-modal standard Lorenz model is derived for the steady state assuming small-scale convective motions. The effect of rotation on the onset of convection and on heat transport is investigated and depicted graphically. It is observed that the onset of convection is delayed due to rotation, which leads to a decrease in heat transport; hence, rotation has a stabilizing effect on the system. This is because part of the energy of the system is used to create the velocity component V. We observe that the amount of heat transport is less in the case of rigid-rigid isothermal boundaries compared to free-free isothermal boundaries.
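The stabilizing effect of rotation can be illustrated with the classical free-free result, used here only because it has a closed form (the paper's rigid-rigid analysis is more involved): the critical Rayleigh number is the minimum over the wavenumber of Ra(a²) = ((π² + a²)³ + π²·Ta)/a², which grows with the Taylor number Ta:

```python
import math

def critical_rayleigh(taylor=0.0, n_grid=20000):
    """Critical Rayleigh number for rotating Rayleigh-Benard convection
    with free-free isothermal boundaries (classical Chandrasekhar form):
    minimise Ra(a^2) = ((pi^2 + a^2)^3 + pi^2 * Ta) / a^2 over a^2."""
    best = math.inf
    for i in range(1, n_grid):
        a2 = i * 0.01                      # grid over the squared wavenumber
        ra = ((math.pi**2 + a2)**3 + math.pi**2 * taylor) / a2
        best = min(best, ra)
    return best
```

At Ta = 0 this recovers the non-rotating threshold 27π⁴/4 ≈ 657.5; any positive Taylor number raises the threshold, which is the delayed onset (and reduced heat transport) that rotation produces.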

Keywords: nanoliquid, rigid-rigid, rotation, single phase

Procedia PDF Downloads 238
8428 Assessment of Landfill Pollution Load on Hydroecosystem by Use of Heavy Metal Bioaccumulation Data in Fish

Authors: Gintarė Sauliutė, Gintaras Svecevičius

Abstract:

Landfill leachates contain a number of persistent pollutants, including heavy metals. These have the ability to spread in ecosystems and accumulate in fish, most of which are classified as top consumers of trophic chains. Fish are freely swimming organisms, but, perhaps due to their species-specific ecological and behavioral properties, they often prefer the most suitable biotopes and therefore do not avoid harmful substances or environments. That is why it is necessary to evaluate the dispersion of persistent pollutants in a hydroecosystem using fish tissue metal concentrations. In hydroecosystems of hybrid type (e.g. river-pond-river), the distance from the pollution source can be a useful indicator of such metal distribution. The studies were carried out in the hybrid-type ecosystem neighboring the Kairiai landfill, located 5 km east of the city of Šiauliai. Fish tissue (gills, liver, and muscle) metal concentration measurements were performed on two types of ecologically different fishes according to their feeding characteristics: benthophagous (gibel carp, roach) and predatory (northern pike, perch). A number of mathematical models (linear, non-linear, using log and other transformations) were applied in order to identify the most satisfactory description of the interdependence between fish tissue metal concentration and the distance from the pollution source. However, only the log-multiple regression model revealed the pattern that the distance from the pollution source is closely and positively correlated with metal concentration in all the predatory fish tissues studied (gills, liver, and muscle).
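A model of this family has the form log(C) = b0 + b1·log(d), with concentration C and distance d. A minimal least-squares fit of that form is sketched below (the data in the test are synthetic; the real coefficients are tissue- and metal-specific):

```python
import numpy as np

def fit_log_model(distance, conc):
    """Fit log(concentration) = b0 + b1 * log(distance) by least squares,
    the kind of log-transformed regression found to describe metal levels
    in predatory fish tissues. Returns the coefficients (b0, b1)."""
    X = np.column_stack([np.ones(len(distance)), np.log(distance)])
    b, *_ = np.linalg.lstsq(X, np.log(conc), rcond=None)
    return b
```

A positive fitted b1 corresponds to the reported pattern of metal concentration rising with distance from the pollution source.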

Keywords: bioaccumulation in fish, heavy metals, hydroecosystem, landfill leachate, mathematical model

Procedia PDF Downloads 289
8427 The Automatic Transliteration Model of Images of the Book Hamong Tani Using Statistical Approach

Authors: Agustinus Rudatyo Himamunanto, Anastasia Rita Widiarti

Abstract:

Transliteration of Javanese manuscripts is one of the methods to preserve and pass on the wealth of past literature to the present generation in Indonesia. The manual transliteration process commonly requires philologists and takes a relatively long time. The automatic transliteration process is expected to shorten this time and thereby support the work of philologists. The preprocessing and segmentation stage, performed first, is used to prepare the document images, thus obtaining image script units that compile input document images free from noise and similar in thickness, size, and slope. The next stage, characteristic extraction, is used to find unique characteristics that distinguish each Javanese script image. One of the characteristics used in this research is the number of black pixels in each image unit. Each Javanese script image contained in the training data undergoes the same process as the input characters. The system testing was performed with data from the book Hamong Tani, selected for its content, age and number of pages, which were considered sufficient as a model experimental input. Based on the results of random-page automatic transliteration testing, the maximum percentage correctness obtained was 81.53%. This success rate was obtained with a 32x32 pixel input image size and a 5x5 image window. With regard to these results, it can be concluded that the proposed automatic transliteration model is relatively good.
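The black-pixel-count characteristic can be sketched as a grid-based feature vector. Here the 5x5 "image window" is interpreted as a 5x5 grid of tiles over the glyph, an assumption made for illustration since the abstract does not spell out the tiling, giving 25 counts per 32x32 script unit:

```python
import numpy as np

def black_pixel_features(glyph, grid=5):
    """Split a binary glyph image (1 = black) into a grid x grid layout of
    tiles and count the black pixels per tile, yielding a compact feature
    vector for distinguishing Javanese script images. array_split handles
    image sides that are not divisible by the grid size."""
    rows = np.array_split(glyph, grid, axis=0)
    return np.array([cell.sum() for r in rows
                     for cell in np.array_split(r, grid, axis=1)])
```

Comparing such feature vectors between an input character and the training glyphs is the statistical matching step that drives the recognition.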

Keywords: Javanese script, character recognition, statistical, automatic transliteration

Procedia PDF Downloads 340