Search results for: Analytic Network Process (ANP)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18937


14977 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve I) mapping magnetic field into magnetic susceptibility and II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result of Process II depends heavily on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain in a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640×640×640, 0.4 mm³ voxels), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties, and iron concentration. These tissue property values were randomly selected from probability distribution functions derived from a thorough literature review.
In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human head is then used to create synthetic MRI measurements. This is repeated thousands of times for different head shapes, volumes, tissue properties, and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data but larger than the datasets available from clinical measurements. A 3D convolutional U-Net architecture was used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training and real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to learn iron concentrations in areas of interest directly and more effectively than existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the DeepQSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of significant value in clinical studies aiming to understand the role of iron in neurological disease.
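The forward physics that Process I inverts can be written as a k-space multiplication of the susceptibility map by the unit dipole kernel. As an illustrative sketch only (not the authors' single-step network), the following minimal NumPy snippet simulates the field perturbation of a synthetic susceptibility map; the grid size and point source are arbitrary:

```python
import numpy as np

def dipole_kernel(shape):
    # Unit dipole kernel in k-space: D(k) = 1/3 - kz^2 / |k|^2
    kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(invalid="ignore", divide="ignore"):
        d = 1.0 / 3.0 - kz**2 / k2
    d[k2 == 0] = 0.0  # undefined at k = 0; conventionally set to zero
    return d

def forward_field(chi):
    # Field perturbation produced by a susceptibility map chi (arbitrary units)
    d = dipole_kernel(chi.shape)
    return np.real(np.fft.ifftn(d * np.fft.fftn(chi)))

chi = np.zeros((16, 16, 16))
chi[8, 8, 8] = 1.0  # point susceptibility source
field = forward_field(chi)
```

Solving the inverse direction (field to susceptibility) is ill-posed precisely because the kernel vanishes on a cone in k-space, which is why Process I requires regularization.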

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 118
14976 Prediction of Coronary Artery Stenosis Severity Based on Machine Learning Algorithms

Authors: Yu-Jia Jian, Emily Chia-Yu Su, Hui-Ling Hsu, Jian-Jhih Chen

Abstract:

Coronary arteries are the major suppliers of myocardial blood flow. When fat and cholesterol are deposited in the coronary arterial wall, narrowing and stenosis of the artery occur, which may lead to myocardial ischemia and eventually infarction. According to the World Health Organization (WHO), an estimated 7.4 million people died of coronary heart disease in 2015. According to statistics from the Ministry of Health and Welfare in Taiwan, heart disease (except for hypertensive diseases) ranked second among the top 10 causes of death from 2013 to 2016, and it still shows a growing trend. According to the American Heart Association (AHA), risk factors for coronary heart disease include age (> 65 years), sex (a 2:1 male-to-female ratio), obesity, diabetes, hypertension, hyperlipidemia, smoking, family history, lack of exercise, and more. We collected a dataset of 421 patients from a hospital located in northern Taiwan who received coronary computed tomography (CT) angiography. There were 300 males (71.26%) and 121 females (28.74%), with ages ranging from 24 to 92 years and a mean age of 56.3 years. Prior to coronary CT angiography, basic data of the patients, including age, gender, body mass index (BMI), diastolic blood pressure, systolic blood pressure, diabetes, hypertension, hyperlipidemia, smoking, family history of coronary heart disease, and exercise habits, were collected and used as input variables. The output variable of the prediction module is the degree of coronary artery stenosis. In this study, the dataset was randomly divided into a training set (80%) and a test set (20%). Four machine learning algorithms, including logistic regression, stepwise logistic regression, neural network, and decision tree, were incorporated to generate prediction results. We used area under the curve (AUC) and accuracy (Acc.) to compare the four models. The best model was the neural network, followed by stepwise logistic regression, decision tree, and logistic regression, with 0.68 / 79%, 0.68 / 74%, 0.65 / 78%, and 0.65 / 74%, respectively. Sensitivity of the neural network was 27.3% and specificity was 90.8%; stepwise logistic regression had a sensitivity of 18.2% and specificity of 92.3%; the decision tree had a sensitivity of 13.6% and specificity of 100%; logistic regression had a sensitivity of 27.3% and specificity of 89.2%. Based on these results, we hope to improve the accuracy by tuning the model parameters or by other methods in the future, and to address the low sensitivity by adjusting the imbalanced proportion of positive and negative data.
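For readers unfamiliar with the reported metrics, sensitivity, specificity, and accuracy are simple functions of the confusion-matrix counts. The sketch below uses hypothetical counts (not taken from the study) chosen only to show the arithmetic:

```python
def binary_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts for a small test split (illustrative only)
sens, spec, acc = binary_metrics(tp=6, fn=16, tn=59, fp=6)
```

The trade-off the authors describe (high specificity, low sensitivity) is typical for imbalanced data: with few positive cases, a model can score a high overall accuracy while missing most positives.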

Keywords: decision support, computed tomography, coronary artery, machine learning

Procedia PDF Downloads 217
14975 Real-Time Detection of Space Manipulator Self-Collision

Authors: Zhang Xiaodong, Tang Zixin, Liu Xin

Abstract:

In order to avoid self-collision of space manipulators during operation, a real-time detection method is proposed in this paper. Each link of the manipulator is fitted with a cylindrical enveloping surface, and the detection algorithm for collision between cylinders is then analyzed. Using this algorithm, collisions between the manipulator's own links can be detected in real time during operation. To ensure operational safety, a safety threshold is designed. Simulation and experiment results verify the effectiveness of the proposed algorithm for a 7-DOF space manipulator.
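As a rough illustration of the idea, assuming each link is approximated by a cylinder around its axis segment, a collision can be flagged when the minimum distance between two axes falls below the sum of the radii plus the safety threshold. The sketch below approximates the axis-to-axis distance by sampling points along each segment; a production system would use an exact segment-segment distance formula instead:

```python
import math

def seg_points(p, q, n=50):
    """Sample n points along the segment from p to q (a cylinder axis)."""
    return [tuple(p[i] + (q[i] - p[i]) * t / (n - 1) for i in range(3))
            for t in range(n)]

def cylinders_collide(a0, a1, ra, b0, b1, rb, safety=0.05, n=50):
    """Coarse check: two links collide if their axis distance drops below
    the sum of the cylinder radii plus a safety threshold."""
    d = min(math.dist(pa, pb)
            for pa in seg_points(a0, a1, n)
            for pb in seg_points(b0, b1, n))
    return d < ra + rb + safety

# Two parallel links 1.0 apart with radii 0.2 each: clear of collision
clear = cylinders_collide((0, 0, 0), (1, 0, 0), 0.2, (0, 1, 0), (1, 1, 0), 0.2)
# Same links moved to 0.4 apart: inside the safety envelope
hit = cylinders_collide((0, 0, 0), (1, 0, 0), 0.2, (0, 0.4, 0), (1, 0.4, 0), 0.2)
```

All radii, positions, and the safety margin above are invented for the example; real-time use would compute the exact distance in closed form rather than by sampling.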

Keywords: space manipulator, collision detection, self-collision, real-time collision detection

Procedia PDF Downloads 449
14974 Roof and Road Network Detection through Object Oriented SVM Approach Using Low Density LiDAR and Optical Imagery in Misamis Oriental, Philippines

Authors: Jigg L. Pelayo, Ricardo G. Villar, Einstine M. Opiso

Abstract:

Advances in aerial laser scanning in the Philippines have opened up entire fields of research in remote sensing and machine vision that aspire to provide accurate and timely information for the government and the public. Rapid mapping of polygonal road and roof boundaries is one such application, supporting disaster risk reduction, mitigation, and development. The study uses low-density LiDAR data and high-resolution aerial imagery in an object-oriented approach, applying machine learning to minimize the constraints of feature extraction. Since separating one class from another in distinct regions of a multi-dimensional feature space is non-trivial, distribution-fitting computations were implemented to formulate the learned ideal hyperplane. Customized hybrid features were generated and then used to improve the classifier's findings. Supplemental algorithms for filtering and reshaping object features were developed in the rule set to enhance the final product. The methodology offers several advantages in terms of simplicity, applicability, and process transferability. The algorithm was tested at random locations across Misamis Oriental province in the Philippines, demonstrating robust performance with an overall accuracy greater than 89% and potential for semi-automation. The extracted results will become a vital input for decision makers, urban planners, and the commercial sector in various assessment processes.
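The hyperplane-fitting step described above corresponds to training a linear SVM on the hybrid features. As a minimal, self-contained sketch (toy 2D features rather than real LiDAR/imagery attributes, and a hand-rolled sub-gradient loop rather than a production SVM library):

```python
import random

def train_linear_svm(data, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Sub-gradient descent on the regularized hinge loss (a minimal
    linear-SVM sketch; real OBIA pipelines use mature SVM libraries)."""
    rng = random.Random(seed)
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            if margin < 1:  # point inside margin: hinge-loss gradient step
                w = [w[i] + lr * (y * x[i] - lam * w[i]) for i in range(2)]
                b += lr * y
            else:           # only the regularizer contributes
                w = [w[i] - lr * lam * w[i] for i in range(2)]
    return w, b

# Toy 2D "roof" (+1) vs "road" (-1) feature vectors, linearly separable
samples = [((2.0, 2.5), 1), ((3.0, 2.0), 1), ((-2.0, -1.5), -1), ((-2.5, -2.0), -1)]
w, b = train_linear_svm(list(samples))
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
```

The class names and feature values here are hypothetical; the point is only that the learned hyperplane (w, b) maximizes the margin between the two classes in feature space.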

Keywords: feature extraction, machine learning, OBIA, remote sensing

Procedia PDF Downloads 345
14973 Application of Computer Aided Engineering Tools in Performance Prediction and Fault Detection of Mechanical Equipment of Mining Process Line

Authors: K. Jahani, J. Razavi

Abstract:

Nowadays, to decrease the number of downtimes in industries such as metal mining, petroleum, and chemicals, predictive maintenance is crucial. For efficient predictive maintenance, it is very important to know the performance of critical production-line equipment such as pumps and hydrocyclones under variable operating parameters, to select the best indicators of this equipment's health, to choose the best locations for instrumentation, and to measure these indicators. In this paper, computer aided engineering (CAE) tools are implemented to study some important elements of a copper process line, namely slurry pumps and hydrocyclones, to predict the performance of these components under different working conditions. These modeling and simulation efforts can be used to predict, for example, the damage tolerance of the main shaft of the slurry pump, or the wear rate and location on the cyclone wall or the pump case and impeller. The simulations can also suggest the best measuring parameters, measuring intervals, and their locations.

Keywords: computer aided engineering, predictive maintenance, fault detection, mining process line, slurry pump, hydrocyclone

Procedia PDF Downloads 389
14972 A Dynamical Approach for Relating Energy Consumption to Hybrid Inventory Level in the Supply Chain

Authors: Benga Ebouele, Thomas Tengen

Abstract:

Due to long lead times, work-in-process (WIP) inventory can accumulate within the supply chain of most manufacturing systems. This implies that there are fewer finished goods on hand and more goods in process, because the work remains in the factory too long and cannot be sold to customers. The supply chain of most manufacturing systems is then considered inefficient, as it takes so much time to produce the finished goods. Time consumed in each operation of the supply chain has an associated energy cost. Such phenomena can be harmful for a hybrid inventory system, because a lot of space may be needed to store these semi-finished goods, and one is not sure about the final energy cost of producing, holding, and delivering the goods to customers. A principle that reduces waste of energy within the supply chain of manufacturing firms should therefore be available to all inventory managers in pursuit of profitability. Decision making by inventory managers in this situation is a modeling process, whereby a dynamical approach is used to depict, examine, specify, and even operationalize the relationship between energy consumption and hybrid inventory level. The relationship between energy consumption and inventory level is established; it indicates a poor level of control and hence a potential for energy savings.
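A minimal discrete-time sketch of the dynamics described above might track the WIP level and accumulate production plus holding energy each period; all parameter values below are hypothetical, not drawn from the study:

```python
def simulate(horizon, production, demand, e_produce, e_hold, wip0=0.0):
    """Discrete-time hybrid inventory sketch: each period adds production,
    ships what demand absorbs, and accrues energy both for producing and
    for holding whatever remains in process."""
    wip, energy, trace = wip0, 0.0, []
    for _ in range(horizon):
        wip += production
        shipped = min(wip, demand)
        wip -= shipped
        energy += e_produce * production + e_hold * wip
        trace.append(wip)
    return energy, trace

# When production outpaces demand, WIP and holding energy grow every period
energy_fast, trace = simulate(horizon=5, production=12, demand=10,
                              e_produce=1.0, e_hold=0.5)
energy_balanced, _ = simulate(horizon=5, production=10, demand=10,
                              e_produce=1.0, e_hold=0.5)
```

Even this toy model reproduces the paper's qualitative point: the longer work stays in process, the larger the holding term in the total energy cost, so controlling inventory level is also an energy-saving lever.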

Keywords: dynamic modelling, energy used, hybrid inventory, supply chain

Procedia PDF Downloads 248
14971 A Multilayer Perceptron Neural Network Model Optimized by Genetic Algorithm for Significant Wave Height Prediction

Authors: Luis C. Parra

Abstract:

Significant wave height prediction is an issue of great interest in the field of coastal activities because of the non-linear behavior of the wave height and its complexity of prediction. This study presents a machine learning model to forecast the significant wave height measured by the oceanographic wave buoys anchored at Mooloolaba, using Queensland Government data. Modeling was performed with a multilayer perceptron neural network optimized by a genetic algorithm (GA-MLP), with ReLU as the activation function of the MLP. The GA optimizes the MLP hyperparameters (learning rate, hidden layers, neurons, and activation functions) and performs wrapper feature selection for the window width. Results are assessed using Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE). The GA-MLP algorithm was run with a population size of thirty individuals for eight generations for the optimization of a 5-step-ahead prediction, obtaining a performance of 0.00104 MSE, 0.03222 RMSE, 0.02338 MAE, and 0.71163% MAPE. The results of the analysis suggest that the GA-MLP model is effective in predicting significant wave height in a one-step forecast with distant time windows, presenting 0.00014 MSE, 0.01180 RMSE, 0.00912 MAE, and 0.52500% MAPE with a correlation factor of 0.99940. The GA-MLP algorithm was compared with an ARIMA forecasting model and performed better on all performance criteria, validating the potential of this algorithm.
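The GA wrapper described above can be sketched as a small genetic loop over a discrete hyperparameter space. In the illustration below, a mock fitness function stands in for the (expensive) validation error of a trained MLP, and the search space is invented for the example; with elitist truncation selection, the best fitness per generation is non-decreasing:

```python
import random

# Hypothetical discrete search space (stand-ins for the MLP hyperparameters)
SPACE = {"lr": [0.001, 0.01, 0.1], "neurons": [8, 16, 32, 64], "window": [3, 5, 7]}

def fitness(ind):
    # Mock objective standing in for negative validation error of a trained MLP
    return (-(ind["lr"] - 0.01) ** 2
            - abs(ind["neurons"] - 32) / 64
            - abs(ind["window"] - 5) / 7)

def evolve(pop_size=30, generations=8, seed=1):
    rng = random.Random(seed)
    rand_ind = lambda: {k: rng.choice(v) for k, v in SPACE.items()}
    pop = [rand_ind() for _ in range(pop_size)]
    history = []  # best fitness per generation (non-decreasing under elitism)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        history.append(fitness(pop[0]))
        parents = pop[:pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = {k: rng.choice([a[k], b[k]]) for k in SPACE}  # uniform crossover
            if rng.random() < 0.2:             # point mutation
                gene = rng.choice(list(SPACE))
                child[gene] = rng.choice(SPACE[gene])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness), history

best, history = evolve()
```

The population size (30) and generation count (8) mirror the figures quoted in the abstract; everything else, including the fitness surface, is fabricated for illustration.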

Keywords: significant wave height, machine learning optimization, multilayer perceptron neural networks, evolutionary algorithms

Procedia PDF Downloads 90
14970 Solar Power Generation in a Mining Town: A Case Study for Australia

Authors: Ryan Chalk, G. M. Shafiullah

Abstract:

Climate change is a pertinent issue facing governments and societies around the world. The industrial revolution has resulted in a steady increase in the average global temperature. The mining and energy production industries have been significant contributors to this change, prompting governments to intervene by promoting low-emission technology within these sectors. This paper initially reviews the energy problem in Australia and the mining sector, with a focus on the energy requirements and production methods utilised in Western Australia (WA). Renewable energy in the form of utility-scale solar photovoltaics (PV) provides a solution to these problems by providing emission-free energy which can be used to supplement the existing natural gas turbines in operation at the proposed site. This research presents a custom renewable solution for the mining site considering the specific township network, local weather conditions, and seasonal load profiles. A summary of the required PV output is presented to supply slightly over 50% of the town's power requirements during the peak (summer) period, resulting in close to full coverage in the trough (winter) period. DIgSILENT PowerFactory software has been used to simulate the characteristics of the existing infrastructure and produce results of integrating PV. Large-scale PV penetration in the network introduces technical challenges, including voltage deviation, increased harmonic distortion, increased available fault current, and reduced power factor. Results also show that cloud cover has a dramatic and unpredictable effect on the output of a PV system. The preliminary analyses conclude that mitigation strategies are needed to overcome voltage deviations, unacceptable levels of harmonics, excessive fault current, and low power factor. Mitigation strategies are proposed to control these issues, predominantly through the use of high-quality, made-for-purpose inverters. Results show that the use of inverters with harmonic filtering reduces the level of harmonic injection to an acceptable level according to Australian standards. Furthermore, configuring inverters to supply both active and reactive power assists in mitigating low power factor problems. The use of FACTS devices (SVC and STATCOM) also reduces harmonics and improves the power factor of the network, and finally, energy storage helps to smooth the power supply.
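The harmonic-distortion and power-factor effects discussed above can be quantified with the standard total harmonic distortion (THD) formula, the RMS of the harmonic components relative to the fundamental, and the distortion power factor relation. The per-unit harmonic magnitudes below are hypothetical, not measurements from the study:

```python
import math

def thd(fundamental, harmonics):
    """Total harmonic distortion: RMS of the harmonic components
    relative to the fundamental magnitude."""
    return math.sqrt(sum(h * h for h in harmonics)) / fundamental

def distortion_power_factor(displacement_pf, thd_value):
    """True power factor = displacement PF reduced by harmonic content."""
    return displacement_pf / math.sqrt(1 + thd_value ** 2)

# Hypothetical voltage harmonics, per-unit of the fundamental
raw = thd(1.0, [0.04, 0.03, 0.02])         # before inverter filtering
filtered = thd(1.0, [0.01, 0.008, 0.005])  # after filtering (assumed values)
pf_true = distortion_power_factor(0.95, filtered)
```

This shows why the harmonic filtering and reactive-power configuration act together: filtering lowers THD, which pulls the true power factor back toward the displacement power factor.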

Keywords: climate change, mitigation strategies, photovoltaic (PV), power quality

Procedia PDF Downloads 155
14969 Study of Mechanical Properties of Aluminium Alloys on Normal Friction Stir Welding and Underwater Friction Stir Welding for Structural Applications

Authors: Lingaraju Dumpala, Laxmi Mohan Kumar Chintada, Devadas Deepu, Pravin Kumar Yadav

Abstract:

Friction stir welding is a novel, cutting-edge joining technique widely used in the fields of transportation, aerospace, defense, etc. To achieve sound welded joints and good properties in friction stir welded components, it is essential to carry out this advanced process following a prescribed, systematic procedure. At present, the Underwater Friction Stir Welding (UFSW) process is a field of active research interest. This study of the UFSW process reviews problems encountered in the past and the means by which the mechanical properties of the welded joints can be improved, leading to an acceptable and efficient joint. A meticulous account is given of how to modify the experimental setup from NFSW to UFSW. The influence of tool materials, feeds, spindle angle, load, and rotational speeds on the mechanical properties can then be discerned. The achieved outcomes are validated using the DEFORM-3D simulation software.

Keywords: underwater friction stir welding (UFSW), Al alloys, mechanical properties, normal friction stir welding (NFSW)

Procedia PDF Downloads 269
14968 Knowledge Development: How New Information System Technologies Affect Knowledge Development

Authors: Yener Ekiz

Abstract:

Knowledge development is a proactive process that covers the collection, analysis, storage, and distribution of information and contributes to an understanding of the environment. To transfer knowledge correctly and quickly, one has to use new and emerging information system technologies. Actionable knowledge is only of value if it is understandable and usable by target users. The purpose of the paper is to show how technology eases and affects the process of knowledge development. The paper draws on literature review, survey, and interview methodologies. The hypothesis is that technology and knowledge development are inseparable, and that technology will formalize the DIKW hierarchy anew. Today there are huge volumes of data, and this data must be classified precisely and quickly.

Keywords: DIKW hierarchy, knowledge development, technology

Procedia PDF Downloads 421
14967 Public Functions of Kazakh Modern Literature

Authors: Erkingul Soltanaeva, Omyrkhan Abdimanuly, Alua Temirbolat

Abstract:

In this article, the public and social functions of literature and art in the Republic of Kazakhstan are analyzed on the basis of formal and informal literary organizations. The external and internal, subjective and objective factors that influence the modern literary process are determined. The literary forces, their consolidation, and the types of organization in the art of the word are examined. The periods of the literary process (planning, organization, promotion, and evaluation), together with their leading forces and approaches, are analyzed. A correct point of view on the language and mentality of society will influence the literary process. The Ministry of Culture, the Writers' Union of the Republic of Kazakhstan, and various non-governmental organizations hold events to promote the literary process and to honor literary personalities across the entire territory of Kazakhstan. Under the cultural plans of different state administrations, there have been large programs to publish regional literary encyclopedias and to honor and distribute the books of the poets and writers of each region throughout the country. All of these official measures increase readers' interest in books, contribute to patriotic education, and improve the status of the native language. Materials from professional literary publications, the newspaper 'Kazakh Literature', the magazine 'Zhuldyz', and the journal 'Zhalyn', published from 2013 to 2015, are analyzed statistically; the topical issues and thematic fields are identified, and their level of connection with public situations is defined. Creative freedom, relations between society and the individual, the state of literature, and its strengths and weaknesses are considered in the same articles. The level of these functions is determined through the public role of literature, its social features, and personal peculiarities.
The stages of literature management (planning, organization, motivation, and evaluation) are now forming and developing in Kazakhstan. However, the development of literature management is still needed to satisfy the actual requirements of today's agenda.

Keywords: literature management, material, literary process, social functions

Procedia PDF Downloads 369
14966 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks

Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi

Abstract:

Brain-computer interfaces are a growing research field producing many implementations that find use in different fields for both research and practical purposes. Despite the popularity of implementations using non-invasive neuroimaging methods, radical improvement of the channel bandwidth and, thus, decoding accuracy is only possible using invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, effective analysis of which requires machine learning methods that are able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that allow learning representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out in which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. The multichannel ECoG signals were then used to track the finger movement trajectory characterized by the accelerometer signal. This was carried out both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained using 1-second segments of ECoG data from the training dataset as input.
To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After optimization of hyperparameters and training, the deep learning model allowed reasonably accurate causal decoding of finger movement with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach achieved only r = 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows a physiologically plausible picture of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that the combination of a minimally invasive neuroimaging technique such as ECoG with advanced machine learning approaches allows decoding motion with high accuracy. Such a setup provides means for the control of devices with a large number of degrees of freedom as well as exploratory studies of the complex neural processes underlying movement execution.
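The decoding accuracy metric used above, the Pearson correlation r between decoded output and accelerometer readings, can be computed as in the following plain-Python sketch with toy signals (the real evaluation of course used the recorded data):

```python
def pearson_r(x, y):
    """Pearson correlation between a decoded signal and a measured one."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Toy accelerometer trace and an imperfect "decoded" reconstruction
actual = [0.0, 0.2, 0.9, 1.0, 0.4, 0.1, 0.0, 0.3]
decoded = [0.1, 0.3, 0.7, 0.9, 0.5, 0.2, 0.1, 0.2]
r = pearson_r(decoded, actual)
```

A value of r = 1 would indicate perfect decoding up to scale and offset; the reported r = 0.8 for the causal deep model means most of the movement trace's variance was recovered.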

Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex

Procedia PDF Downloads 159
14965 The Effectiveness of First World Asylum Practices in Deterring Applications, Offering Bureaucratic Deniability, and Violating Human Rights: A Greek Case Study

Authors: Claudia Huerta, Pepijn Doornenbal, Walaa Elsiddig

Abstract:

Rising waves of nationalism around the world have led first-world migration-receiving countries to exploit the ambiguity of international refugee law and establish asylum application processes that deter applications, allow for bureaucratic deniability, and violate human rights. This case study of Greek asylum application practices argues that the 'pre-application' asylum process in Greece violates the spirit of international law by making it extremely difficult for potential asylum seekers to apply for asylum, in essence violating the human rights of thousands of asylum seekers. The study focuses on the Greek mainland's asylum 'pre-application' process, which in 2016 began to require those wishing to apply for asylum to do so during extremely restricted hours via a basic Skype line. The average wait to simply begin the registration process is 81 days, during which time applicants are forced to live illegally in Greece. The methodology for analyzing the 'pre-application' process consists of hours of interviews with asylum seekers, NGOs, and the Asylum Service office on the ground in Athens, as well as an analysis of the Greek Asylum Service's historical asylum registration statistics. The study presents three main findings: the delays associated with the Skype system in Greece are the result of system design, as shown by a statistical analysis of Greek asylum registrations; NGOs have been co-opted by the state to perform state functions during the process; and the government's use of technology is both purposefully lazy and discriminatory. In conclusion, the study argues that such asylum practices are part of a pattern of policies in first-world migration-receiving countries that discourage asylum seekers from applying and fall short of the standards of international law.

Keywords: asylum, European Union, governance, Greece, irregular, migration, policy, refugee, Skype

Procedia PDF Downloads 112
14964 A New Proposed Framework for the Development of Interface Design for Malaysian Interactive Courseware

Authors: Norfadilah Kamaruddin

Abstract:

This paper introduces a new proposed framework for the development process of interface design for Malaysian interactive courseware, drawing on four established models in the recent research literature, existing Malaysian government guidelines, and Malaysian developers' practices. In particular, the study looks at the stages and practices throughout the development process. Significant effects of each stage are explored and documented, and significant interrelationships among them are suggested. The results of the analysis are proposed as a potential model that helps in establishing and designing a new version of Malaysian interactive courseware.

Keywords: development processes, interaction with interface, interface design, social sciences

Procedia PDF Downloads 370
14963 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling

Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow

Abstract:

Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring data analytic challenges. One of these is the increased occurrence of missingness with increased study length, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets and pooling the estimation results across them to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package Dynamic Modeling in R (dynr). The package dynr provides a suite of fast and accessible functions for estimating and visualizing the results of fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr with the MI procedures available from the R package Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates of a user-specified dynamic systems model via MI, with convergence diagnostic checks. We utilized dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals' ambulatory physiological measures and self-reported affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When we determined the number of iterations based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.
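Although dynr.mi() itself is an R routine, the pooling step in any MI workflow follows standard Rubin's rules, which are easy to sketch in a few lines (shown here in Python for illustration; the parameter values are invented):

```python
def pool_rubin(estimates, variances):
    """Combine per-imputation point estimates and sampling variances via
    Rubin's rules: pooled mean, plus within-, between-, and total variance."""
    m = len(estimates)
    q_bar = sum(estimates) / m                      # pooled point estimate
    w = sum(variances) / m                          # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between-imputation
    t = w + (1 + 1 / m) * b                         # total variance
    return q_bar, t

# Hypothetical estimates of one autoregressive parameter from m = 5 imputations
est = [0.42, 0.45, 0.40, 0.44, 0.43]
var = [0.010, 0.012, 0.011, 0.010, 0.012]
q_bar, total_var = pool_rubin(est, var)
```

The total variance always exceeds the average within-imputation variance, which is exactly how MI propagates the extra uncertainty due to missingness into the final standard errors.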

Keywords: dynamic modeling, missing data, mobility, multiple imputation

Procedia PDF Downloads 154
14962 Analyze and Improve Project Delivery Time Enhancing Business Management System of Review and Approval Process for Project Design Submittals

Authors: Abdulaziz Alnajem, Amit Sharma

Abstract:

Business Case: Delivering projects and completing activities in the shortest possible time is critical during execution, in order to proceed with the subsequent Procurement and C & C phases of contracts and to have the required production facilities/infrastructure in place to achieve the company's strategic objective of 4.0 MBOPD oil production. SOR (Statement of Requirement): The design and engineering phase of project execution takes a long time. It is observed that, in most cases, the company has exceeded the project design submittal review time stipulated in the contract/company standards, resulting in delays in project completion and cost impact to the company. Study Scope: The scope of the study covers the process from the date of first submission of D & E documents by the contractor to final approval by the controlling team to proceed with the procurement of materials. It covers projects handled by the company's project management teams and includes only the internal review process by the company.

Keywords: business management system, project management, oil and gas, analysis, improvement, design, delays

Procedia PDF Downloads 202
14961 SkyCar Rapid Transit System: An Integrated Approach of Modern Transportation Solutions in the New Queen Elizabeth Quay, Perth, Western Australia

Authors: Arfanara Najnin, Michael W. Roach, Jr., Dr. Jianhong Cecilia Xia

Abstract:

The SkyCar Rapid Transit (SRT) system is an innovative intelligent transport system for sustainable urban transport. The system will increase urban network connectivity and decrease urban traffic congestion. The SRT is designed as a suspended Personal Rapid Transit (PRT) system that travels beneath a guideway 5 m above the ground. Passengers ride in driverless pod-cars that hang from slender beams supported by columns that replace existing lamp posts. The beams are set up in a series of interconnecting loops providing non-stop travel from origin to destination, assuring journey times. The SRT's forward movement is effected by magnetic motors built into the guideway. Passenger stops are either at line level, 5 m above the ground, or at ground level via a spur guideway that curves off the main thoroughfare. The main objective of this paper is to propose an integrated Automated Transit Network (ATN) technology for the future intelligent transport system in the urban built environment. To fulfil this objective, a 4D simulated model of the urban built environment has been proposed using the concept of the SRT-ATN system. The methodology for the design, construction, and testing parameters of a Technology Demonstrator (TD), for proof of concept, and a Simulator (S) has been demonstrated. The completed TD and S will provide an excellent proving ground for the next development stage: the SRT Prototype (PT) and Pilot System (PS). The 4D simulated model in the virtual built environment effectively shows how the SRT-ATN system works. OpenSim software has been used to develop the model in a virtual environment, and the scenario has been simulated to understand and visualize the proposed SkyCar Rapid Transit Network model. The SkyCar system will be fabricated in a modular form which is easily transported.
The system would be installed in increasingly congested city centers throughout the world, as well as in airports, tourist resorts, race tracks, and other special-purpose venues for the urban community. This paper shares the lessons learnt from the proposed innovation and provides recommendations on how to improve the future transport system in the urban built environment. The safety and security of passengers are prime factors to be considered for this transit system. Design requirements to meet safety needs will form part of the research and development phase of the project, and operational safety aspects will also be developed during this period. The vehicles, the track and beam systems, and the stations are the main components that need to be examined in detail for the safety and security of patrons. Measures will also be required to protect columns adjoining intersections from errant vehicles in traffic collisions. The SkyCar Rapid Transit takes advantage of current disruptive technologies (batteries, sensors, 4G/5G communication, and solar energy) which will continue to reduce costs and make such systems more profitable. SkyCar's energy consumption is extremely low compared to other transport systems.

Keywords: SkyCar, rapid transit, Intelligent Transport System (ITS), Automated Transit Network (ATN), urban built environment, 4D Visualization, smart city

Procedia PDF Downloads 201
14960 Environmental Pollution and Treatment Technology

Authors: R. Berrached, H. Ait Mahamed, A. Iddou

Abstract:

Water pollution is nowadays a serious problem, due to the increasing scarcity of water and to the impact of such pollution on human health. Various techniques are used to deal with water pollution. Among the most common are the trickling (bacterial) bed, activated sludge, and lagooning as biological processes, and coagulation-flocculation as a physico-chemical process. These processes are very expensive, and their treatment efficiency decreases as the initial pollutant concentration increases. This is why research has been reoriented towards adsorption processes as an alternative to the traditional processes. In our study, we have attempted to exploit the characteristics of two metallic hydroxides, Al and Fe, to purify water contaminated by two industrial dyes, SBL blue and SRL-150 orange. Results have shown the efficiency of the two materials on the SBL blue dye.

Keywords: metallic hydroxides, industrial dyes, purification

Procedia PDF Downloads 309
14959 System of System Decisions Framework for Cross-Border Railway Projects

Authors: Dimitrios J. Dimitriou, Maria F. Sartzetaki, Anastasia Kalamakidou

Abstract:

Transport infrastructure assets are key components of the national asset portfolio. The decision to invest in new transport infrastructure can take from a few years to some decades, mainly because of the need to reserve and spend substantial capital, the long payback period, the number of stakeholders involved in the decision process, and the often high investment and business risks. Decision makers and stakeholders need to define the framework and the outputs of the decision process, taking into account the project characteristics, the business uncertainties, and the different expectations. Therefore, the decision assessment framework is an essential challenge, linking the key decision factors to stakeholder expectations and highlighting project trade-offs, financial risks, business uncertainties, and market limitations. This paper examines the decision process for new transport infrastructure projects in cross-border regions, where a wide range of stakeholders with different expectations is involved. Following a systemic consequence-analysis approach, the relationship between transport infrastructure development, economic system development, and stakeholder expectations is analysed. Adopting a system-of-systems methodological approach, the decision-making framework, variables, inputs, and outputs are defined, highlighting the key stakeholders' roles and expectations. The application presents the proposed decision framework for a strategic railway project in northern Greece dealing with the upgrade of the existing railway corridor connecting Greece, Turkey, and Bulgaria.

Keywords: system of system decision making, managing decisions for transport projects, decision support framework, defining decision process

Procedia PDF Downloads 291
14958 Computational Intelligence and Machine Learning for Urban Drainage Infrastructure Asset Management

Authors: Thewodros K. Geberemariam

Abstract:

The rapid physical expansion of urbanization, coupled with aging infrastructure, presents unique decision and management challenges for many big-city municipalities. Cities must therefore upgrade and maintain their existing aging urban drainage infrastructure systems to keep up with demand. Given the overall contribution of assets to municipal revenue and the importance of infrastructure to the success of a livable city, many municipalities are currently looking for a robust and smart urban drainage infrastructure asset management solution that combines management, financial, engineering, and technical practices. This robust decision-making must rely on sound, complete, current, and relevant data that enables asset valuation, impairment testing, lifecycle modeling, and forecasting across multiple asset portfolios. In this paper, predictive computational intelligence (CI) and multi-class machine learning (ML), coupled with online, offline, and historical record data collected from an array of multi-parameter sensors, are used to extract operational and non-conforming patterns hidden in structured and unstructured data and to produce actionable insight into the current and future states of the network. This paper aims to improve the strategic decision-making process by identifying all possible alternatives, evaluating the risk of each alternative, and choosing the alternative most likely to attain the required goal in a cost-effective manner, using historical and near real-time data for urban drainage infrastructure assets that have previously not benefited from computational intelligence and machine learning advancements.
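The classification step described above can be sketched minimally. The abstract does not name a specific model, so the following Python sketch uses a plain logistic-regression classifier fitted by gradient descent on synthetic sensor features (the feature names and label rule are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multi-parameter sensor features per drainage asset:
# [age in years, blockage events per year, flow anomaly score]
n = 400
X = np.column_stack([
    rng.uniform(0, 80, n),
    rng.poisson(1.0, n).astype(float),
    rng.normal(0, 1, n),
])
# Synthetic "needs rehabilitation" label driven linearly by the features
y = (0.05 * X[:, 0] + 0.8 * X[:, 1] + 0.5 * X[:, 2] - 3.0 > 0).astype(float)

# Standardize features, then fit logistic regression by gradient descent
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
Xb = np.column_stack([np.ones(n), Xs])   # prepend intercept column
w = np.zeros(Xb.shape[1])
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))    # predicted probabilities
    w -= 0.5 * Xb.T @ (p - y) / n        # gradient step on log-loss

accuracy = ((1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5) == y).mean()
```

A production system would likely use richer models and held-out evaluation; the sketch only shows the pattern-extraction idea of mapping sensor features to an asset-condition class.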

Keywords: computational intelligence, machine learning, urban drainage infrastructure, classification, prediction, asset management

Procedia PDF Downloads 140
14957 Optimization of Scheduling through Altering Layout Using Pro-Model

Authors: Zouhair Issa Ahmed, Ahmed Abdulrasool Ahmed, Falah Hassan Abdulsada

Abstract:

This paper presents a factory layout study using Pro-Model simulation, choosing the layout that gives the highest productivity and the least work in process (WIP). The general problem is to find the best sequence in which jobs pass between the machines that is compatible with the technological constraints and optimal with respect to some performance criteria. The best Pro-Model simulation increased productivity and reduced work in process by balancing the production lines: compared with the current factory layout, productivity increased from 45 products to 180 products over 720 hours.

Keywords: scheduling, Pro-Model, simulation, balancing lines of production, layout planning, WIP

Procedia PDF Downloads 614
14956 Non-Revenue Water Management in Palestine

Authors: Samah Jawad Jabari

Abstract:

Water is the most important and valuable resource, not only for human life but for all living things on the planet. Water supply utilities should fulfil water requirements both quantitatively and qualitatively. Drinking water systems are exposed to both natural hazards (hurricanes and floods) and manmade hazards that are common in Palestine. Non-Revenue Water (NRW) is a manmade risk which remains a major concern in Palestine, as NRW levels are estimated to be high. In this research, the Hebron city water distribution network was taken as a case study to estimate and audit NRW levels. The research also investigated the state of the existing water distribution system in the study area by investigating water losses and gathering information on NRW prevention and management practices. Data and information have been collected from the Palestinian Water Authority (PWA) and the Hebron Municipality (HM) archives. In addition, a questionnaire was designed and administered by the researcher to collect the data necessary for water auditing. The questionnaire also assessed the views of stakeholders (staff) in the PWA and HM on the current status of NRW in the Hebron water distribution system. The key result of this research is that NRW in Hebron city is high, in excess of 30%. The main factors contributing to NRW were inaccuracies in billed volumes, unauthorized consumption, and the estimation of consumption through faulty meters. A policy for NRW reduction is available in Palestine; however, the number of qualified staff available to carry out leak-detection activities is low, and there is a lack of appropriate technologies to reduce water losses and undertake sufficient system maintenance, which needs to be improved to enhance the performance of the network and decrease the level of NRW losses.
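The core water-audit arithmetic behind an NRW estimate is a simple balance: NRW is the system input volume minus the billed authorized consumption, expressed as a share of input. A minimal Python sketch, with hypothetical annual figures (not the actual Hebron audit data) chosen to land just above the 30% level reported:

```python
def non_revenue_water(system_input_m3, billed_authorized_m3):
    """Water-balance NRW: system input minus billed authorized
    consumption, returned as a volume and as % of system input."""
    nrw = system_input_m3 - billed_authorized_m3
    return nrw, 100.0 * nrw / system_input_m3

# Hypothetical annual figures for a network of this kind
nrw_volume, nrw_pct = non_revenue_water(10_000_000, 6_800_000)
# nrw_pct == 32.0, i.e. "in excess of 30%" as reported for Hebron
```

The NRW volume then splits further, per the factors named in the abstract, into apparent losses (billing inaccuracies, unauthorized consumption, meter error) and real losses (leakage), which is where leak-detection staffing and technology come in.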

Keywords: non-revenue water, water auditing, leak detection, water meters

Procedia PDF Downloads 275
14955 Parameter Fitting of the Discrete Element Method When Modeling the DISAMATIC Process

Authors: E. Hovad, J. H. Walther, P. Larsen, J. Thorborg, J. H. Hattel

Abstract:

In sand casting of metal parts for the automotive industry, such as brake disks and engine blocks, the molten metal is poured into a sand mold to obtain its final shape. The DISAMATIC molding process is a way to construct these sand molds for the casting of steel parts, and in the present work numerical simulations of this process are presented. During the process, green sand is blown into a chamber and subsequently squeezed to finally obtain the sand mold. The sand flow is modelled with the Discrete Element Method (DEM), and obtaining the correct material parameters for the simulation is the main goal. Different tests are used to find or calibrate the DEM parameters needed: Poisson's ratio, Young's modulus, the rolling friction coefficient, the sliding friction coefficient, and the coefficient of restitution (COR). Young's modulus and Poisson's ratio are found from compression tests of the bulk material and subsequently used in the DEM model according to the Hertz-Mindlin contact model. The main focus is on calibrating the rolling resistance and sliding friction in the DEM model with respect to the behavior of real sand piles. More specifically, the surface profile of the real sand pile is compared to the sand pile predicted by the DEM for different values of the rolling and sliding friction coefficients. When the DEM parameters are found for the particle-particle (sand-sand) interaction, the particle-wall interaction parameter values are also found. Here, the sliding coefficient is found from experiments, and the rolling resistance is investigated by comparing observations of how the green sand interacts with the chamber wall during experiments; the DEM simulations are calibrated accordingly. The coefficient of restitution is tested with different values in the DEM simulations and compared to video footage of the DISAMATIC process.
Energy dissipation is investigated in these simulations for different particle sizes and coefficients of restitution, and scaling laws are considered to relate the energy dissipation to these parameters. Finally, the parameter values found are used in the overall discrete element model and compared to the video footage of the DISAMATIC process.
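In the Hertz-Mindlin contact model that the calibrated Young's modulus and Poisson's ratio feed into, the normal contact force between two overlapping spheres follows Hertz theory: F_n = (4/3) E* sqrt(R*) delta^(3/2), with the effective modulus 1/E* = (1-nu1^2)/E1 + (1-nu2^2)/E2 and effective radius 1/R* = 1/R1 + 1/R2. A Python sketch of just this normal component, with placeholder grain properties (not the calibrated values from the study):

```python
import math

def hertz_normal_force(delta, E1, nu1, R1, E2, nu2, R2):
    """Hertzian normal contact force between two spheres, the normal
    part of the Hertz-Mindlin DEM contact model (illustrative only).
    delta: overlap [m]; E: Young's modulus [Pa]; nu: Poisson's ratio;
    R: particle radius [m]."""
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
    R_star = 1.0 / (1.0 / R1 + 1.0 / R2)
    return (4.0 / 3.0) * E_star * math.sqrt(R_star) * delta**1.5

# Hypothetical green-sand grain pair (placeholder parameter values)
F = hertz_normal_force(delta=1e-6, E1=5e7, nu1=0.3, R1=1e-4,
                       E2=5e7, nu2=0.3, R2=1e-4)
```

The tangential (Mindlin) component, rolling resistance, and restitution damping sit on top of this normal law; those are exactly the coefficients the sand-pile and wall-interaction experiments calibrate.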

Keywords: discrete element method, physical properties of materials, calibration, granular flow

Procedia PDF Downloads 470
14954 The Importance of Organized and Non-Organized Bildung for a Comprehensive Term of Bildung

Authors: Christine Pichler

Abstract:

The German word Bildung, in a comprehensive understanding, can be defined as the development of the personality and as a process which lasts from birth, or even before birth, until death. Bildung means gaining experience and acquiring abilities and knowledge as a lifelong learning process. The development of the personality is intransitive, because the personality develops itself, and transitive, because individuals and institutions influence the formation of a person. In public and political discussions, the term Bildung is understood in a constricted usage as education at schools. This leads to the research questions of what consequences this limited comprehension of the term Bildung implies and how a comprehensive term of Bildung should be defined. In such discussions, Bildung is limited to its formal part. This limited understanding prevents accurate analyses and discussions as well as adequate actions. This hypothesis and the research issue are processed by theoretical analyses of the factors of Bildung, guideline-based expert interviews, and a qualitative content analysis. The limited understanding of the term Bildung is a methodological problem. It results in inaccuracies in the analysis of the processes of Bildung and their effects on the development of personality structures. On the one hand, an individual is influenced by formal structures in the system of Bildung (e.g. schools); on the other hand, an individual is shaped by individually and informally gained personality and character attributes. In general, too little attention is given to these attributes and individual qualifications. The aim of this work is to establish informative terms so that the educational process, with all its facets, can be considered and applicable analyses can be made. If the informative terms can be defined, it is also possible to identify and discuss the components of a comprehensive term Bildung to enable correct action.

Keywords: Bildung, development of personality, education, formative process, organized and non-organized Bildung

Procedia PDF Downloads 110
14953 Orthogonal Metal Cutting Simulation of Steel AISI 1045 via Smoothed Particle Hydrodynamic Method

Authors: Seyed Hamed Hashemi Sohi, Gerald Jo Denoga

Abstract:

Machining, or metal cutting, is one of the most widely used production processes in industry. The quality of the process and of the resulting machined product depends on parameters like tool geometry, material, and cutting conditions. However, the relationships of these parameters to the cutting process are often based mostly on empirical knowledge. In this study, computer modeling and simulation using LS-DYNA software and a Smoothed Particle Hydrodynamics (SPH) methodology were performed on the orthogonal metal cutting process to analyze the three-dimensional deformation of AISI 1045 medium carbon steel during machining. The simulation was performed using the following constitutive models: the Power Law model, the Johnson-Cook model, and the Zerilli-Armstrong (Z-A) model. The outcomes were compared against the simulated results obtained by Cenk Kiliçaslan using the Finite Element Method (FEM) and the empirical results of Jaspers and Filice. The analysis shows that the SPH method combined with the Zerilli-Armstrong constitutive model is a viable alternative for simulating the metal cutting process. The tangential force was overestimated by 7%, and the normal force was underestimated by 16% when compared with empirical values. The simulated values of flow stress versus strain at various temperatures were also validated against empirical values. The SPH method using the Z-A model has also proven to be robust against issues of time-scaling. Experimental work was also done to investigate the effects of friction, rake angle, and tool tip radius on the simulation.
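The Zerilli-Armstrong constitutive model for bcc metals such as ferritic steel gives flow stress as sigma = C0 + C1 exp(-C3 T + C4 T ln(strain rate)) + C5 strain^n, separating a thermally activated term from athermal strain hardening. A Python sketch of the relation follows; the constants below are illustrative placeholders, not the fitted AISI 1045 values used in the study:

```python
import math

def za_flow_stress_bcc(strain, strain_rate, T, C0, C1, C3, C4, C5, n):
    """Zerilli-Armstrong flow stress [Pa] for bcc metals:
    sigma = C0 + C1*exp(-C3*T + C4*T*ln(strain_rate)) + C5*strain**n.
    strain_rate in 1/s, T in K; constants are placeholders."""
    thermal = C1 * math.exp(-C3 * T + C4 * T * math.log(strain_rate))
    return C0 + thermal + C5 * strain**n

# Illustrative evaluation at room temperature, machining-like strain rate
sigma = za_flow_stress_bcc(strain=0.5, strain_rate=1e3, T=293.0,
                           C0=159e6, C1=1033e6, C3=0.00698,
                           C4=0.000415, C5=266e6, n=0.289)
```

The exponential term captures why the flow-stress-versus-strain curves validated in the study shift downward at higher temperatures: as T grows, the thermally activated contribution decays.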

Keywords: metal cutting, smoothed particle hydrodynamics, constitutive models, experimental, cutting forces analyses

Procedia PDF Downloads 247
14952 The Need for a Tool to Support Users of E-Science Infrastructures in a Virtual Laboratory Environment

Authors: Hashim Chunpir

Abstract:

Support processes play an important role in facilitating researchers (users) in accomplishing their research activities with the help of cyber-infrastructures. However, the current user-support process in cyber-infrastructure needs a feasible support tool. This tool must enable the users of a cyber-infrastructure to communicate efficiently with the staff of the cyber-infrastructure in order to get technical and scientific assistance, whilst saving resources. This research paper narrates the real story of employing various tools to support user and staff communication. In addition, this paper presents the lessons learned from an exploration of the help-desk tools in the current user support process of the Earth System Grid Federation (ESGF) from the support staff's perspective. ESGF is a climate cyber-infrastructure that facilitates Earth System Modeling (ESM) and is taken as a case study in this paper. Finally, this study proposes the need for a tool, framework, or platform that not only improves the user support process to address the servicing needs of end-users of e-Science infrastructures but also eases the work of staff in providing assistance to users. With the help of such a tool, collaboration between users and the staff of cyber-infrastructures is made easier. Consequently, the research activities of the users of e-Science infrastructure will thrive, as scientific and technical support will be readily available, finally resulting in painless and productive e-Research.

Keywords: e-Science User Services, e-Research in Earth Sciences, Information Technology Services Management (ITSM), user support process, service desk, management of support activities, help desk tools, application of social media

Procedia PDF Downloads 463
14951 System Identification and Quantitative Feedback Theory Design of a Lathe Spindle

Authors: M. Khairudin

Abstract:

This paper investigates system identification and quantitative feedback theory (QFT) design for the robust control of a lathe spindle. The dynamics of the lathe spindle are uncertain and time-varying due to the variation of cutting depth during the cutting process. System identification was used to obtain a dynamic model of the lathe spindle. In this work, real-time system identification is used to construct linear models of the nonlinear system. These linear models and their uncertainty bounds can then be used for controller synthesis. The real-time nonlinear system identification process yields a set of linear models of the lathe spindle that represents the operating ranges of the dynamic system. With a selected input signal, output and response data are acquired, and system identification is performed using Matlab to obtain a linear model of the system. Practical design steps are presented in which the QFT-based conditions are formulated to obtain a compensator and pre-filter to control the lathe spindle. The performance of the proposed controller is evaluated in terms of the velocity responses of the lathe machine spindle, incorporating cutting depth in the cutting process.
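The identification step (fitting a linear model from input/output data with a selected excitation signal) can be sketched with a least-squares ARX fit. The plant below is an invented second-order discrete-time system standing in for the spindle, not the actual model from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# "True" second-order discrete-time plant (illustrative, stable poles):
# y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + small noise
a1, a2, b1 = 1.5, -0.7, 0.5
u = rng.normal(0, 1, 500)              # persistently exciting input
y = np.zeros(500)
for k in range(2, 500):
    y[k] = a1 * y[k-1] + a2 * y[k-2] + b1 * u[k-1] + 0.01 * rng.normal()

# Least-squares ARX identification: regress y[k] on past outputs/inputs
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]
# theta should recover [a1, a2, b1] closely from the data alone
```

Repeating such fits at several operating points (cutting depths) yields the family of linear models, and the spread of the fitted coefficients gives the uncertainty templates that QFT loop-shaping works against.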

Keywords: lathe spindle, QFT, robust control, system identification

Procedia PDF Downloads 523
14950 Reservoir-Triggered Seismicity of Water Level Variation in the Lake Aswan

Authors: Abdel-Monem Sayed Mohamed

Abstract:

Lake Aswan is one of the largest man-made reservoirs in the world. The reservoir began to fill in 1964, and the level rose gradually, with annual irrigation cycles, until it reached a maximum water level of 181.5 m in November 1999, with a capacity of 160 km³. The filling of such a large reservoir changes the stress system, either by increasing the vertical compressional stress through loading and/or by increasing the pore pressure through the decrease of the effective normal stress. The resulting change in fault-zone stability depends strongly on the orientation of the pre-existing stress and the geometry of the reservoir/fault system. The main earthquake occurred on November 14, 1981, with magnitude 5.5. This event occurred 17 years after the reservoir began to fill, along the active part of the Kalabsha fault, not far from the High Dam. Numerous small earthquakes followed this earthquake and continue to occur. For this reason, 13 seismograph stations (a radio-telemetry network of short-period seismometers) were installed around the northern part of Lake Aswan. The main purpose of the network is to monitor earthquake activity continuously within the Aswan region. The data described here are obtained from the continuous record of earthquake activity and lake-water level variation over the period from 1982 to 2015. The seismicity is concentrated in the Kalabsha area, where the easterly trending Kalabsha fault intersects northerly trending faults. The earthquake foci are distributed in two seismic zones, shallow and deep in the crust. Shallow events have focal depths of less than 12 km, while deep events extend from 12 to 28 km. Correlation between the seismicity and the water level variation in the lake strongly suggests placing the micro-earthquakes, particularly those in the shallow seismic zone, in the reservoir-triggered seismicity category.
Water loading is one of several factors acting as an activating medium in triggering earthquakes. The common factors in all cases of induced seismicity appear to be the presence of specific geological conditions, the tectonic setting, and water loading. Water loading acts as a supplementary source of earthquake events: the earthquake activity in the area originates tectonically (ML ≥ 4), while the water factor works as an activating medium in triggering small earthquakes (ML ≤ 3). The study of seismicity induced by water level variation in Lake Aswan is of great importance for the safety of the High Dam and its economic resources.
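The correlation analysis underlying such a classification can be sketched simply: build the two time series (lake level and monthly micro-earthquake counts) and compute their correlation. The series below are entirely synthetic, with an annual irrigation cycle and an event rate that partly follows the level, just to show the computation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monthly series over 10 years: water level [m] with an
# annual irrigation cycle, plus measurement noise
months = np.arange(120)
level = 175 + 5 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.5, 120)

# Synthetic micro-earthquake counts whose Poisson rate partly tracks
# the water level (the triggering hypothesis, in caricature)
quakes = rng.poisson(3 + 0.8 * (level - 170))

# Pearson correlation between water level and monthly event count
r = np.corrcoef(level, quakes)[0, 1]
```

In practice one would also examine lagged correlations, since pore-pressure diffusion delays the seismic response to loading; a strong (possibly lagged) positive r for the shallow-zone events is what supports the reservoir-triggered interpretation.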

Keywords: Aswan lake, Aswan seismic network, seismicity, water level variation

Procedia PDF Downloads 356
14949 Development and State in Brazil: How Do Some Institutions Think and Influence These Issues

Authors: Alessandro Andre Leme

Abstract:

This paper analyzes three Brazilian think tanks, a) the Fernando Henrique Foundation, b) the Celso Furtado International Center, and c) the Millennium Institute, and how they dispute interpretations about the type of development and state that should be adopted in Brazil. We make use of network and content analysis of their websites. The analyses show a dispute that ranges from a defense of ultraliberalism to developmentalism, passing through a hybrid between state and market, voiced in each of the think tanks.

Keywords: sociopolitical and economic thinking, development, strategies, intellectuals, state

Procedia PDF Downloads 128
14948 Effect of a Mixture of Phenol, O-Cresol, P-Cresol, and M-Cresol on the Nitrifying Process in a Sequencing Batch Reactor

Authors: Adriana Sosa, Susana Rincon, Chérif Ben, Diana Cabañas, Juan E. Ruiz, Alejandro Zepeda

Abstract:

The complex chemical composition (mixtures of ammonium and recalcitrant compounds) of the effluents from the chemical, pharmaceutical, and petrochemical industries represents a challenge for their biological treatment. This treatment involves a nitrification process that can suffer inhibition due to the presence of aromatic compounds, resulting in a decrease in process efficiency. The inhibitory effects of aromatic compounds on nitrification have already been studied; however, few studies have considered phenolic compounds in mixtures, which is the form in which they occur in real effluents. For this reason, we carried out a kinetic study of the nitrifying process in the presence of different concentrations of a mixture of phenol, o-cresol, m-cresol, and p-cresol (0-320 mg C/L) in a sequencing batch reactor (SBR). First, the nitrifying process was evaluated in the absence of the phenolic mixture (control 1) in an SBR with a 2 L working volume and 176 mg/L of nitrogen of microbial protein. Total oxidation of the initial ammonium (efficiency, ENH4+, of 100%) to nitrate (nitrifying yield, YNO3-, of 0.95) was obtained, with specific rates of ammonium consumption (qN-NH4+) and nitrate production (qN-NO3-) of 1.11 ± 0.04 h-1 and 0.67 ± 0.11 h-1, respectively. During the acclimation phase with 40 mg C/L of the phenolic mixture, an inhibitory effect on the nitrifying process was observed, provoking a decrease in ENH4+ and YNO3- (11 and 54%, respectively) as well as in the specific rates (89 and 46%, respectively), with the ammonia-oxidizing bacteria (AOB) being the most affected. However, in the subsequent cycles without the phenolic mixture (control 2), the nitrifying consortium was able to recover its nitrifying capacity (ENH4+ = 100% and YNO3- = 0.98).
Afterwards, the SBR was fed with 10 mg C/L of the phenolic mixture, obtaining an ENH4+ of 100%, a YNO3- of 0.62 ± 0.006, and a qN-NH4+ of 0.13 ± 0.004, while the qN-NO3- was 0.49 ± 0.007. Moreover, with the increase of the phenolic concentrations (10-160 mg C/L) and the number of cycles, the nitrifying consortium was able to oxidize the ammonium with an ENH4+ of 100% and a YNO3- close to 1; however, a decrease in the nitrification specific rates and an increase in the oxidation of the phenolic compounds (70 to 94%) were observed. Finally, in the presence of 320 mg C/L, the nitrifying consortium was able to simultaneously oxidize the ammonium (ENH4+ = 100%) and the phenolic mixture (p-cresol > phenol > m-cresol > o-cresol), with o-cresol being the most recalcitrant compound. In all the experiments, the use of an SBR allowed a respiratory adaptation of the consortium to oxidize the phenolic mixture, achieving greater adaptation in the nitrite-oxidizing bacteria (NOB) than in the ammonia-oxidizing bacteria (AOB).
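The two performance indicators reported throughout (consumption efficiency ENH4+ and nitrifying yield YNO3-) reduce to simple ratios: efficiency is the fraction of fed N-NH4+ consumed, and yield is the N-NO3- produced per unit of N-NH4+ consumed. A Python sketch with illustrative values (not the study's raw measurements) chosen to reproduce the control-1 figures:

```python
def nitrification_performance(nh4_in, nh4_out, no3_produced):
    """Ammonium consumption efficiency E_NH4 (%) and nitrifying yield
    Y_NO3 (mg N-NO3 produced per mg N-NH4 consumed). Inputs are
    nitrogen concentrations in mg N/L (illustrative helper)."""
    consumed = nh4_in - nh4_out
    e_nh4 = 100.0 * consumed / nh4_in
    y_no3 = no3_produced / consumed
    return e_nh4, y_no3

# e.g. 176 mg N/L fed, fully consumed, 167.2 mg N-NO3/L produced
e, y = nitrification_performance(176.0, 0.0, 167.2)
# e == 100.0 and y == 0.95, matching the control-1 values reported
```

A yield below 1 with full ammonium consumption, as in the 10 mg C/L run (ENH4+ = 100%, YNO3- = 0.62), indicates nitrogen diverted from nitrate, consistent with the reported inhibition of the nitrite-oxidizing step.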

Keywords: cresol, inhibition, nitrification, phenol, sequencing batch reactor

Procedia PDF Downloads 345