Search results for: estimation algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5185

1405 Cadaveric Assessment of Kidney Dimensions Among Nigerians - A Preliminary Report

Authors: Rotimi Sunday Ajani, Omowumi Femi-Akinlosotu

Abstract:

Background: The usually paired human kidneys are retroperitoneal urinary organs with some endocrine functions. Standard anatomy textbooks ascribe a single value to each of the dimensions of length, width and thickness. Research questions: These values do not take into account racial and genetic variability in human morphology and may therefore mislead students and clinicians working on Nigerians. Objectives: The study aimed at establishing reference values for kidney length, width and thickness in Nigerians using the cadaveric model. Methodology: The length, width, thickness and weight of sixty kidneys harvested from cadavers of thirty adult Nigerians (Male: Female; 27: 3) were measured. The volume of each kidney was calculated using the ellipsoid formula. Results: The mean length of the kidney was 9.84±0.89 cm (9.63±0.88 {right}; 10.06±0.86 {left}), width 5.18±0.70 cm (5.21±0.72 {right}; 5.14±0.70 {left}), thickness 3.45±0.56 cm (3.36±0.58 {right}; 3.53±0.55 {left}), weight 125.06±22.34 g (122.36±21.70 {right}; 127.76±24.02 {left}) and volume 95.45±24.40 cm3 (91.73±26.84 {right}; 99.17±25.75 {left}). Discussion: Although the values of the measured parameters were higher for the left kidney (except for the width), the differences were not statistically significant. The values obtained in this study differ from those of similar studies from other continents. Conclusion: Stating a single value for each of the parameters of kidney length, width and thickness, as is currently done in anatomy textbooks, may be incomplete information and hence misleading. There is therefore a need to emphasize racial differences when stating the normal values of kidney dimensions in anatomy textbooks. Implication for Research and Innovation: The results of the study showed that kidney dimensions (length, width and thickness) vary across races, as they differed from those of similar studies and from the values stated in standard textbooks of human anatomy. 
Future direction: This is a preliminary report; the study is ongoing so that more data can be obtained.
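The ellipsoid formula mentioned in the methodology can be sketched as follows (a minimal Python illustration using the mean dimensions reported above; the small gap from the reported mean volume arises because averaging per-kidney volumes differs from applying the formula to mean dimensions):

```python
import math

def ellipsoid_volume(length_cm, width_cm, thickness_cm):
    """Ellipsoid approximation of kidney volume: V = (pi / 6) * L * W * T."""
    return math.pi / 6 * length_cm * width_cm * thickness_cm

# Mean dimensions reported in the results (cm)
v = ellipsoid_volume(9.84, 5.18, 3.45)  # roughly 92 cm3
```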

Keywords: kidney dimensions, cadaveric estimation, adult Nigerians, racial differences

Procedia PDF Downloads 91
1404 Clustering Performance Analysis using New Correlation-Based Cluster Validity Indices

Authors: Nathakhun Wiroonsri

Abstract:

There are various cluster validity measures for evaluating clustering results. One of the main objectives of using these measures is to find the optimal, unknown number of clusters. Some measures work well for clusters with different densities, sizes and shapes. Yet one weakness these validity measures share is that they often provide only a single clear optimal number of clusters. That number is actually unknown, and there may be more than one potential sub-optimal option that a user may wish to choose depending on the application. We develop two new cluster validity indices based on the correlation between the actual distance between a pair of data points and the distance between the centroids of the clusters in which the two points are located. Our proposed indices consistently yield several peaks at different numbers of clusters, which overcomes the weakness stated above. Furthermore, the introduced correlation can also be used for evaluating the quality of a selected clustering result. Several experiments in different scenarios, including the well-known iris data set and a real-world marketing application, have been conducted to compare the proposed validity indices with several well-known ones.
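The core idea, correlating pairwise point distances with the distances between the corresponding cluster centroids, can be sketched as follows (a minimal illustration of the principle, not the authors' exact indices):

```python
import numpy as np

def correlation_validity(X, labels):
    """Pearson correlation between the distance of each pair of points and
    the distance between the centroids of the clusters the two points are in.
    A high correlation suggests the partition reflects the data geometry."""
    centroids = {c: X[labels == c].mean(axis=0) for c in np.unique(labels)}
    d_points, d_centroids = [], []
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            d_points.append(np.linalg.norm(X[i] - X[j]))
            d_centroids.append(
                np.linalg.norm(centroids[labels[i]] - centroids[labels[j]]))
    return np.corrcoef(d_points, d_centroids)[0, 1]

# Two well-separated blobs: a 2-cluster partition should score close to 1
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)
score = correlation_validity(X, labels)
```

Scanning this score over candidate numbers of clusters would expose the several peaks the abstract describes.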

Keywords: clustering algorithm, cluster validity measure, correlation, data partitions, iris data set, marketing, pattern recognition

Procedia PDF Downloads 102
1403 Waters Colloidal Phase Extraction and Preconcentration: Method Comparison

Authors: Emmanuelle Maria, Pierre Crançon, Gaëtane Lespes

Abstract:

Colloids are ubiquitous in the environment and are known to play a major role in enhancing the transport of trace elements, making them an important vector for contaminant dispersion. The study and characterization of colloids are necessary to improve our understanding of the fate of pollutants in the environment. However, in stream water and groundwater, colloids are often very poorly concentrated. It is therefore necessary to pre-concentrate colloids in order to obtain enough material for analysis, while preserving their initial structure. Many techniques are used to extract and/or pre-concentrate the colloidal phase from the bulk aqueous phase, yet there is neither a reference method nor an estimate of the impact of these different techniques on the colloid structure, nor of the bias introduced by the separation method. In the present work, we tested and compared several methods of colloidal phase extraction/pre-concentration and their impact on colloid properties, particularly size distribution and elemental composition. Ultrafiltration methods (frontal, tangential and centrifugal) were considered, since they are widely used for the extraction of colloids from natural waters. To compare these methods, a ‘synthetic groundwater’ was used as a reference. The size distribution (obtained by Field-Flow Fractionation (FFF)) and the chemical composition of the colloidal phase (obtained by Inductively Coupled Plasma Mass Spectrometry (ICP-MS) and Total Organic Carbon (TOC) analysis) were chosen as comparison factors. In this way, the impact of pre-concentration on the preservation of the colloidal phase can be estimated. It appears that some of these methods preserve the composition of the colloidal phase more effectively, while others are easier and faster to use. 
The choice of extraction/pre-concentration method is therefore a compromise between efficiency (including speed and ease of use) and impact on the structural and chemical composition of the colloidal phase. Looking ahead, the use of these methods should improve the consideration of the colloidal phase in the transport of pollutants in environmental assessment studies and forensics.

Keywords: chemical composition, colloids, extraction, preconcentration methods, size distribution

Procedia PDF Downloads 211
1402 Despiking of Turbulent Flow Data in Gravel Bed Stream

Authors: Ratul Das

Abstract:

The present experimental study provides insight into the decontamination of instantaneous velocity fluctuations captured by an Acoustic Doppler Velocimeter (ADV) in gravel-bed streams, in order to ascertain near-bed turbulence at low Reynolds numbers. Interference between incident and reflected pulses produces spikes in the ADV data, especially in the near-bed flow zone, so filtering the data is essential. Nortek’s Vectrino four-receiver ADV probe was used to capture the instantaneous three-dimensional velocity fluctuations over a non-cohesive bed. A spike removal algorithm based on the acceleration threshold method was applied to examine the bed roughness and its influence on velocity fluctuations and velocity power spectra in the carrier fluid. A best combination of velocity threshold (VT) and acceleration threshold (AT) is proposed for the despiked signals, for which the velocity power spectra fit the Kolmogorov “–5/3 scaling law” satisfactorily in the inertial sub-range. Also, velocity distributions below the roughness crest level follow a third-degree polynomial series fairly well.
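The acceleration-threshold idea can be sketched as follows. This is a simplified illustration, not Nortek's or the author's exact procedure, and the threshold multiplier k is an assumed value: a sample is flagged as a spike when the accelerations on either side of it exceed k*g with opposite signs, and flagged samples are patched by interpolation.

```python
import numpy as np

def despike_accel(u, fs, k=1.5, g=9.81):
    """Acceleration-threshold despiking of an ADV velocity record.
    A sample is a spike when the accelerations on both sides of it
    exceed k*g with opposite signs (the classic spike signature)."""
    u = np.asarray(u, dtype=float)
    da = np.diff(u) * fs                      # sample-to-sample acceleration
    thr = k * g
    spikes = np.zeros(u.shape, dtype=bool)
    spikes[1:-1] = ((np.abs(da[:-1]) > thr) & (np.abs(da[1:]) > thr)
                    & (np.sign(da[:-1]) != np.sign(da[1:])))
    good = ~spikes
    u_clean = u.copy()
    # replace spikes by linear interpolation from the valid neighbours
    u_clean[spikes] = np.interp(np.flatnonzero(spikes),
                                np.flatnonzero(good), u[good])
    return u_clean, spikes

# Synthetic 200 Hz velocity record with one injected spike
fs = 200.0
t = np.arange(0, 1, 1 / fs)
u = 0.5 + 0.02 * np.sin(2 * np.pi * 2 * t)
u[100] += 1.0                                 # artificial spike
u_clean, spikes = despike_accel(u, fs)
```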

Keywords: acoustic Doppler velocimeter, gravel-bed, spike removal, Reynolds shear stress, near-bed turbulence, velocity power spectra

Procedia PDF Downloads 297
1401 Realistic Testing Procedure of Power Swing Blocking Function in Distance Relay

Authors: Farzad Razavi, Behrooz Taheri, Mohammad Parpaei, Mehdi Mohammadi Ghalesefidi, Siamak Zarei

Abstract:

As one of the major problems in protecting large power systems, power swing and its effect on distance protection have caused a lot of damage to energy transfer systems in many parts of the world. Power swing has therefore gained the attention of many researchers, leading to the invention of different methods for power swing detection. The power swing detection algorithm is highly important in a distance relay, but protection relays must also meet general requirements such as correct fault detection, response speed, and minimal disturbance to the power system. To ensure these requirements are met, protection relays need different tests during the development, setup, maintenance, configuration, and troubleshooting steps. This paper covers the power swing scheme of a modern numerical protection relay, the 7SA522, to address the effect of different fault types on the power swing blocking function. In this study, it was shown that different fault types during a power swing result in different unblocking times for the distance relay.

Keywords: power swing, distance relay, power system protection, relay test, transient in power system

Procedia PDF Downloads 373
1400 Voice and Head Controlled Intelligent Wheelchair

Authors: Dechrit Maneetham

Abstract:

The aim of this paper was to design a voice and head controlled electric power wheelchair (EPW): a novel control system for quadriplegics who retain voice, head and neck mobility. Head movement has been used as a control interface for people with motor impairments in a range of applications. Measurements from the head-tracking module are acquired along two axes, x and y. At the same time, patients can control the motorized wheelchair using voice commands (forward, backward, turn left, turn right, and stop). The DC motor is modeled for speed control, with the PID parameters selected using a genetic algorithm. An experimental set-up was constructed, consisting of a microcontroller as the controller, a DC-motor-driven EPW, and feedback elements. The paper also presents parameter tuning methods for a pulse-width modulation (PWM) control system. A speed controller was designed successfully for the closed-loop DC motor so that the motor runs very close to the reference speed and angle. The intelligent wheelchair ensures that the user’s voice and head commands determine the direction and speed of travel.
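The closed-loop speed control described above can be sketched with a discrete PID loop acting on a first-order DC motor model. This is a minimal illustration with hand-picked gains; the genetic-algorithm tuning and the PWM stage are omitted, and the motor time constant and gain are assumed values:

```python
def simulate_pid(kp, ki, kd, setpoint=100.0, dt=0.01, steps=800,
                 tau=0.5, gain=1.0):
    """Discrete PID speed control of a first-order DC motor model:
    d(speed)/dt = (gain * u - speed) / tau."""
    speed, integral, prev_err = 0.0, 0.0, setpoint
    history = []
    for _ in range(steps):
        err = setpoint - speed
        integral += err * dt
        derivative = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * derivative
        prev_err = err
        speed += (gain * u - speed) / tau * dt   # Euler step of the motor
        history.append(speed)
    return history

# Illustrative gains (the paper selects them with a genetic algorithm)
speeds = simulate_pid(kp=2.0, ki=5.0, kd=0.05)
```

The integral term drives the steady-state error to zero, so the simulated speed settles at the reference.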

Keywords: wheelchair, quadriplegia, rehabilitation, medical devices, speed control

Procedia PDF Downloads 533
1399 Power Quality Modeling Using Recognition Learning Methods for Waveform Disturbances

Authors: Sang-Keun Moon, Hong-Rok Lim, Jin-O Kim

Abstract:

This paper presents power quality (PQ) modeling and filtering processes for distribution system disturbances using recognition learning methods. Typical PQ waveforms with mathematical models and gathered field data are applied to the proposed models. The objective of this paper is to analyze PQ data with respect to monitoring, discriminating, and evaluating the waveforms of power disturbances, in order to support preventive protection against system failures and the estimation of complex system problems. Signal filtering techniques are examined for removing noise from field waveforms and for feature extraction. Using extraction and learning classification techniques, the efficiency of recognizing PQ disturbances was verified, with a focus on interactive modeling methods. The waveforms of eight selected disturbances are modeled with parameters randomized within the IEEE 1159 PQ ranges. The ranges, parameters, and weights are updated according to the field waveforms obtained. Current waveforms are processed in the same way as voltages to obtain waveform features, apart from some ratings and filters. Changing loads distort the voltage waveform by drawing different patterns of current variation. In conclusion, PQ disturbances in the voltage and current waveforms exhibit different patterns of variation, and a modified technique based on symmetrical components in the time domain is proposed for PQ disturbance detection and subsequent classification. Our method is based on the fact that waveforms obtained from the suggested trigger conditions contain potential information for abnormality detection. The extracted features are sequentially applied to estimation and recognition learning modules for further study.
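A trigger condition of the kind mentioned above can be sketched with a sliding one-cycle RMS detector. This is a minimal illustration assuming the standard IEEE 1159 sag/swell limits of 0.9 and 1.1 pu, not the paper's actual trigger logic:

```python
import numpy as np

def rms_trigger(v, samples_per_cycle, v_peak_nominal=1.0, sag=0.9, swell=1.1):
    """Sliding one-cycle RMS trigger: flag samples whose RMS voltage
    (in per unit) falls below the sag limit or rises above the swell limit."""
    window = np.ones(samples_per_cycle) / samples_per_cycle
    rms = np.sqrt(np.convolve(v ** 2, window, mode="same"))
    pu = rms / (v_peak_nominal / np.sqrt(2))   # RMS of a unit sine is 1/sqrt(2)
    return pu, (pu < sag) | (pu > swell)

# Unit sine at 128 samples/cycle, with a 50% sag lasting three cycles
spc = 128
t = np.arange(10 * spc)
v = np.sin(2 * np.pi * t / spc)
v[3 * spc:6 * spc] *= 0.5                      # sag to 0.5 pu
pu, flags = rms_trigger(v, spc)
```

The trigger fires only where the one-cycle window overlaps the sag; the captured window would then be handed to the feature-extraction stage.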

Keywords: power quality recognition, PQ modeling, waveform feature extraction, disturbance trigger condition, PQ signal filtering

Procedia PDF Downloads 180
1398 Remote Observation of Environmental Parameters on the Surface of the Maricunga Salt Flat, Atacama Region, Chile

Authors: Lican Guzmán, José Manuel Lattus, Mariana Cervetto, Mauricio Calderón

Abstract:

Today the estimation of the effects of climate change on high Andean wetland environments faces major challenges. This study uses remote sensing to analyse how some environmental aspects of the Maricunga salt flat have evolved over the last 30 years, divided into the summer and winter seasons, and whether global warming is driving these changes. The first step towards this goal was the compilation of geological, hydrological, and morphometric antecedents to ensure an adequate contextualization of its environmental parameters. After this, software processing and analysis of Landsat 5, 7 and 8 satellite imagery was required to obtain the vegetation, water, surface temperature, and soil moisture indexes (NDVI, NDWI, LST, and SMI) and to see how their spatio-temporal patterns have evolved in the study area during recent decades. Results show a steady increase in surface temperature and water availability during both seasons, with slight drought periods during summer. Soil moisture remains roughly constant during the dry season and tends to increase in winter. Vegetation analysis shows a sustained increase in vegetated area and quality, consistent with the increase in water supply and temperature in the basin mentioned before. Broadly, the effects of climate change can be described as positive for the Maricunga salt flat; however, the imperfect date alignment of the imagery available for remote sensing analysis could lead to misinterpretation of the results.
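Two of the indices used (NDVI and NDWI) reduce to simple band ratios. The sketch below assumes McFeeters' green/NIR formulation of NDWI and toy reflectance values; with Landsat 8, the red, green and NIR reflectances would come from bands 4, 3 and 5 respectively:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - RED) / (NIR + RED)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-10)   # epsilon avoids divide-by-zero

def ndwi(green, nir):
    """McFeeters' Normalized Difference Water Index: (G - NIR) / (G + NIR)."""
    green, nir = np.asarray(green, float), np.asarray(nir, float)
    return (green - nir) / (green + nir + 1e-10)

# Toy surface reflectances: vegetation is bright in NIR, water absorbs NIR
veg = ndvi(nir=[0.5], red=[0.1])[0]        # strongly positive for vegetation
water = ndwi(green=[0.3], nir=[0.05])[0]   # strongly positive for open water
```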

Keywords: global warming, geology, GIS, Atacama Desert, Salar de Maricunga, environmental geology, NDVI, SMI, LST, NDWI, Landsat

Procedia PDF Downloads 76
1397 Formulating Anti-Insurgency Curriculum Conceptual and Design Principles for Translation into Anti-Terrorist Curriculum Framework for Muslim Secondary Schools

Authors: Saheed Ahmad Rufai

Abstract:

The growing incidence of insurgencies in their various forms in the Muslim world is now of great concern to both the leadership and the citizenry. The high sense of insecurity occasioned by this unpleasant experience has attained an alarming level in the estimation of Muslims and non-Muslims alike. Consequently, the situation has begun to attract contributions from scholars and researchers in security-related fields of the humanities and social sciences. However, there is little evidence of contribution to the discourse and scholarship involved by scholars in the field of education. The purpose of this proposed study is to contribute an educational dimension to the growing scholarship on the subject. The study, which is situated in the broad scholarship of curriculum making and grounded in both the philosophical and sociological foundations of the curriculum, employs a combination of curriculum criticism and creative synthesis, as methods, in reconstructing Muslim schools’ educational blueprint. The significance of the proposed study lies in its potential to contribute a useful addition to the scholarship of curriculum construction in the context of the Muslim world. It also lies in its potential to offer an ameliorative alternative to insurgency and militancy, thereby paving the way for peaceful, harmonious and tranquil co-existence among people of diverse orientations and ideological persuasions in the Muslim world. The study is restricted to the first two stages of curriculum making, namely the formulation of a philosophy, which concerns the articulation of objectives, aims, purposes, goals, and principles, and the second stage, which covers the translation of such principles into an anti-insurgency secondary school curriculum for the Muslim world.

Keywords: education for conflict resolution, anti-insurgency curriculum principles, peace education, anti-terrorist curriculum framework, curriculum for Muslim secondary schools

Procedia PDF Downloads 214
1396 Optimization of Multiplier Extraction Digital Filter On FPGA

Authors: Shiksha Jain, Ramesh Mishra

Abstract:

Filtering is one of the most widely used signal processing operations. Finite Impulse Response (FIR) digital filters are widely used in DSP to shape the spectrum according to given specifications. The power consumption and area complexity of FIR filters are mainly caused by multipliers, so we present a multiplier-less technique based on Distributed Arithmetic (DA). In this technique, precomputed partial inner products are stored in a look-up table (LUT) and then added and shifted over a number of iterations equal to the precision of the input samples. However, in this basic structure the LUT grows exponentially with the order of the FIR filter, which makes it prohibitive for many applications. This paper presents a significant area and power reduction over the traditional DA structure by slicing the LUT to the desired length. An architecture of a 16-tap FIR filter is presented with different LUT slice lengths. Implementation results using the Xilinx ISE synthesis tool (XST) on a Virtex-4 FPGA show, with the proposed method, an increase in maximum frequency, a saving in area with a larger number of slices, and a reduction in dynamic power.
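The basic DA structure described above can be sketched in software. In the sketch below the LUT holds every sum of coefficients addressable by one bit-plane of the input samples (unsigned samples, for simplicity), and the inner product is accumulated bit-serially; the LUT slicing the paper uses to curb the exponential growth is omitted:

```python
def build_da_lut(coeffs):
    """Precompute the DA look-up table: entry b holds the sum of the
    coefficients selected by the bits of b (one bit per filter tap)."""
    n = len(coeffs)
    return [sum(c for k, c in enumerate(coeffs) if (b >> k) & 1)
            for b in range(1 << n)]

def da_inner_product(samples, lut, n_bits):
    """Bit-serial DA inner product for unsigned n_bits-bit samples:
    address the LUT with bit-plane t of all samples, then shift-accumulate."""
    acc = 0
    for t in range(n_bits):
        addr = sum(((s >> t) & 1) << k for k, s in enumerate(samples))
        acc += lut[addr] << t
    return acc

coeffs = [3, 1, 4, 2]       # 4-tap example with integer coefficients
lut = build_da_lut(coeffs)  # 2**4 = 16 entries
samples = [5, 9, 2, 7]      # unsigned 4-bit input samples
y = da_inner_product(samples, lut, n_bits=4)
# equals the direct inner product 3*5 + 1*9 + 4*2 + 2*7
```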

Keywords: multiplier less technique, linear phase symmetric FIR filter, FPGA tool, look up table

Procedia PDF Downloads 385
1395 Protocol for Dynamic Load Distributed Low Latency Web-Based Augmented Reality and Virtual Reality

Authors: Rohit T. P., Sahil Athrij, Sasi Gopalan

Abstract:

Currently, the content entertainment industry is dominated by mobile devices. As the trends slowly shift towards Augmented/Virtual Reality applications, the computational demands on these devices are increasing exponentially, and we are already reaching the limits of hardware optimizations. This paper proposes a software solution to this problem. By leveraging the capabilities of cloud computing, we can offload the work from mobile devices to dedicated rendering servers that are far more powerful. But this introduces the problem of latency. This paper introduces a protocol that can achieve a high-performance, low-latency Augmented/Virtual Reality experience. There are two parts to the protocol. 1) In-flight compression: The main cause of latency in the system is the time required to transmit the camera frame from client to server. The round trip time is directly proportional to the amount of data transmitted, so it can be reduced by compressing the frames before sending. Using a standard compression algorithm like JPEG results in only a minor size reduction. Since the images to be compressed are consecutive camera frames, there won't be many changes between two consecutive images, so inter-frame compression is preferred. Inter-frame compression can be implemented efficiently using WebGL, but WebGL implementations limit floating-point precision to 16 bits on most devices. This can introduce noise into the image due to rounding errors, which add up over time. This can be solved using an improved inter-frame compression algorithm. The algorithm detects changes between frames and reuses unchanged pixels from the previous frame, eliminating the need for floating-point subtraction and thereby cutting down on noise. Change detection is also improved drastically by taking the weighted average difference of pixels instead of the absolute difference. 
The kernel weights for this comparison can be fine-tuned to match the type of image being compressed. 2) Dynamic load distribution: Conventional cloud computing architectures work by offloading as much work as possible to the servers, but this approach incurs bandwidth and server costs. The optimal solution is obtained when the device utilizes 100% of its resources and the rest is done by the server. The protocol balances the load between the server and the client by doing a fraction of the computation on the device, depending on the power of the device and on network conditions. The protocol is responsible for dynamically partitioning the tasks. Special flags communicate the workload fraction between the client and the server and are updated at a constant interval of time (or frames). The whole protocol is designed to be client agnostic. Flags are available to the client for resetting the frame, indicating latency, switching mode, etc. The server can react to client-side changes on the fly and adapt accordingly by switching to different pipelines. The server is designed to spread the load effectively and thereby scale horizontally. This is achieved by isolating client connections into different processes.
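The change-detection step can be sketched as follows (a minimal NumPy illustration rather than a WebGL shader; the uniform 3x3 kernel and the threshold are assumed values to be tuned per content type):

```python
import numpy as np

def changed_pixels(prev, curr, kernel, threshold):
    """Flag pixels whose weighted local average difference from the previous
    frame exceeds a threshold; unflagged pixels can be reused from prev."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)).astype(float)
    kh, kw = kernel.shape
    padded = np.pad(diff, ((kh // 2,), (kw // 2,)), mode="edge")
    out = np.zeros_like(diff)
    for i in range(diff.shape[0]):          # naive 2D weighted average
        for j in range(diff.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * kernel).sum()
    return out > threshold

prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[4, 4] = 200                            # a single changed pixel
kernel = np.ones((3, 3)) / 9.0              # uniform weights, tunable
mask = changed_pixels(prev, curr, kernel, threshold=10.0)
```

Because the comparison is a weighted neighbourhood average, an isolated change also marks its immediate neighbourhood, which suppresses single-pixel noise relative to a per-pixel absolute difference.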

Keywords: 2D kernelling, augmented reality, cloud computing, dynamic load distribution, immersive experience, mobile computing, motion tracking, protocols, real-time systems, web-based augmented reality application

Procedia PDF Downloads 69
1394 Application of Additive Manufacturing for Production of Optimum Topologies

Authors: Mahdi Mottahedi, Peter Zahn, Armin Lechler, Alexander Verl

Abstract:

The optimal topology of a component gives the maximum stiffness with the minimum use of material. To generate such topologies, algorithms are normally employed that respect manufacturing limitations, at the cost of the optimality of the result. The global optimum with a penalty factor of one, however, cannot be fabricated with conventional methods. In this article, an additive manufacturing method is introduced in order to enable the production of global topology optimization results. As a benchmark, topology optimization with higher and lower penalty factors is performed. Different algorithms are employed to interpret the results of topology optimization with lower factors in many microstructure layers. These layers are then joined to form the final geometry. The benefits of the algorithms are then compared experimentally and numerically to find the best interpretation. The findings demonstrate that, using the selected algorithm, the stiffness of the components produced with this method is higher than what could have been produced by conventional techniques.
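The role of the penalty factor can be illustrated with the standard SIMP material interpolation (assumed here as the penalization scheme; the abstract does not state which scheme is used):

```python
def simp_stiffness(rho, p, e0=1.0, e_min=1e-9):
    """SIMP interpolation: E(rho) = E_min + rho**p * (E0 - E_min).
    With p = 1 intermediate densities are structurally efficient (the global
    optimum); with p ~ 3 they are penalized, pushing the design to solid/void,
    which conventional manufacturing requires."""
    return e_min + rho ** p * (e0 - e_min)

half_p1 = simp_stiffness(0.5, p=1)   # half density buys half stiffness
half_p3 = simp_stiffness(0.5, p=3)   # half density buys only 1/8 stiffness
```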

Keywords: topology optimization, additive manufacturing, 3D-printer, laminated object manufacturing

Procedia PDF Downloads 334
1393 Application of Artificial Neural Network for Prediction of Load-Haul-Dump Machine Performance Characteristics

Authors: J. Balaraju, M. Govinda Raj, C. S. N. Murthy

Abstract:

Every industry constantly looks for ways to enhance its day-to-day production and productivity. This is possible only by maintaining the workforce and machinery at an adequate level. Prediction of performance characteristics plays an important role in the performance evaluation of equipment. Analytical and statistical approaches take considerably more time than software-based approaches to solve complex problems such as performance estimation. Keeping this in view, the present study deals with Artificial Neural Network (ANN) modelling of a Load-Haul-Dump (LHD) machine to predict performance characteristics such as reliability, availability and preventive maintenance (PM). A feed-forward ANN trained with the Levenberg-Marquardt (LM) back-propagation algorithm has been used. The performance characteristics were computed using the Isograph Reliability Workbench 13.0 software, and these computed values were validated against the predicted output responses of the ANN models. Further, recommendations based on the analysis are given to the industry for improving equipment performance.

Keywords: load-haul-dump, LHD, artificial neural network, ANN, performance, reliability, availability, preventive maintenance

Procedia PDF Downloads 142
1392 Deterministic and Stochastic Modeling of a Micro-Grid Management for Optimal Power Self-Consumption

Authors: D. Calogine, O. Chau, S. Dotti, O. Ramiarinjanahary, P. Rasoavonjy, F. Tovondahiniriko

Abstract:

Mafate is a natural cirque in the north-western part of Reunion Island, without an electrical grid or road network. A micro-grid concept is being tested in this area, combining photovoltaic production with electrochemical batteries, in order to meet the local population's electricity demand through self-consumption. This work develops a discrete model as well as a stochastic model in order to reach an optimal equilibrium between production and consumption for a cluster of houses. The management of the energy flows leads to a large linearized programming system, where the time interval of interest is 24 hours. The experimental data are the solar production, the stored energy, and the parameters of the different electrical devices and batteries. The unknown variables to evaluate are the consumption of the various electrical services, the energy drawn from and stored in the batteries, and the inhabitants’ planning wishes. The objective is to fit the solar production to the electrical consumption of the inhabitants, with an optimal use of the energy in the batteries, while satisfying the users' planning requirements as widely as possible. In the discrete model, the parameters and solutions of the linear programming system are deterministic scalars, whereas in the stochastic approach, the data parameters and the linear programming solutions become random variables, whose distributions can be imposed or estimated from samples of real observations or from samples of optimal discrete equilibrium solutions.
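The scheduling problem can be sketched as a small linear program. The toy below (SciPy, a 4-hour horizon instead of 24, made-up solar and demand figures, no battery efficiency losses) chooses hourly charge/discharge quantities so as to minimize unmet demand:

```python
import numpy as np
from scipy.optimize import linprog

# Toy horizon: solar + discharge - charge + unmet >= demand each hour,
# battery state of charge kept within [0, cap]; minimize total unmet demand.
solar = np.array([2.0, 3.0, 4.0, 0.0])    # kWh produced per hour (made up)
demand = np.array([1.0, 1.0, 1.0, 3.0])   # kWh consumed per hour (made up)
T, cap = 4, 5.0                           # horizon (h) and battery size (kWh)

# Decision vector x = [charge_0..3, discharge_0..3, unmet_0..3], all >= 0
c = np.concatenate([np.zeros(T), np.zeros(T), np.ones(T)])

A_ub, b_ub = [], []
for t in range(T):
    # balance: charge_t - discharge_t - unmet_t <= solar_t - demand_t
    row = np.zeros(3 * T)
    row[t], row[T + t], row[2 * T + t] = 1.0, -1.0, -1.0
    A_ub.append(row)
    b_ub.append(solar[t] - demand[t])
    # state of charge after hour t: 0 <= sum(charge) - sum(discharge) <= cap
    soc = np.zeros(3 * T)
    soc[:t + 1], soc[T:T + t + 1] = 1.0, -1.0
    A_ub.append(soc)
    b_ub.append(cap)
    A_ub.append(-soc)
    b_ub.append(0.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * (3 * T))
unmet = res.x[2 * T:].sum()   # 0 here: the midday surplus covers the evening
```

In the stochastic variant described above, solar and demand would become samples of random variables and the program would be re-solved per sample.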

Keywords: photovoltaic production, power consumption, battery storage resources, random variables, stochastic modeling, estimations of probability distributions, mixed integer linear programming, smart micro-grid, self-consumption of electricity

Procedia PDF Downloads 103
1391 Seismicity and Ground Response Analysis for MP Tourism Office in Indore, India

Authors: Deepshikha Shukla, C. H. Solanki, Mayank Desai

Abstract:

In the last few years, earthquakes have proved a growing threat in the eyes of scientists across the world. With a large number of earthquakes occurring in day-to-day life, the threat to life and property has increased manifold, which calls for the urgent attention of researchers globally to carry out research in the field of earthquake engineering. Any hazard related to earthquakes and seismicity is considered a seismic hazard. Common forms of seismic hazard are ground shaking, structural damage, liquefaction, landslides, and tsunami, to name a few. Among all natural hazards, the most devastating and damaging is the earthquake, as all the other hazards are triggered only after the occurrence of an earthquake. In order to quantify and estimate seismicity and seismic hazards, many methods and approaches have been proposed in the past few years. These approaches are mathematical, conventional and computational. Convex set theory and the empirical Green's function are examples of mathematical approaches, whereas the deterministic and probabilistic approaches are the conventional approaches for the estimation of seismic hazards. The ground response and ground shaking of a particular area or region play an important role in the damage caused by an earthquake. In this paper, a seismic study using the deterministic approach and 1-D ground response analysis has been carried out for the Madhya Pradesh Tourism Office in the Indore region of Madhya Pradesh in central India. Indore lies in seismic zone III (IS: 1893, 2002) of the seismic zoning map of India. There are various faults and lineaments in this area, and the Narmada-Son fault and the Gavilgadh fault are the active earthquake sources in the study area. DEEPSOIL v6.1.7 has been used to perform the 1-D linear ground response analysis for the study area. The Peak Ground Acceleration (PGA) of the city ranges from 0.1g to 0.56g.

Keywords: seismicity, seismic hazards, deterministic, probabilistic methods, ground response analysis

Procedia PDF Downloads 158
1390 The Application of Artificial Neural Networks for the Performance Prediction of Evacuated Tube Solar Air Collector with Phase Change Material

Authors: Sukhbir Singh

Abstract:

This paper describes the modeling of a novel solar air collector (NSAC) system using an artificial neural network (ANN) model. The objective of the study is to demonstrate the application of the ANN model to predict the performance of the NSAC with acetamide as a phase change material (PCM) for storage. The input data set consists of time, solar intensity and ambient temperature, whereas the outlet air temperature of the NSAC was considered as the output. Experiments were conducted between 9.00 and 24.00 h in June and July 2014 under the prevailing atmospheric conditions of Kurukshetra (a city in India). The experimental results were then used to train a back propagation neural network (BPNN) to predict the outlet air temperature of the NSAC. The results of the proposed algorithm show that the BPNN is an effective tool for the prediction of responses; the BPNN-predicted results are in 99% agreement with the experimental results.

Keywords: evacuated tube solar air collector, artificial neural network, phase change material, solar air collector

Procedia PDF Downloads 116
1389 Investigating a Deterrence Function for Work Trips for Perth Metropolitan Area

Authors: Ali Raouli, Amin Chegenizadeh, Hamid Nikraz

Abstract:

The Perth metropolitan area and its surrounding regions have been expanding rapidly in recent decades, and this growth is expected to continue in the years to come. With this rapid growth and the resulting increase in population, consideration should be given to strategic planning and modelling for the future expansion of Perth. The accurate estimation of projected traffic volumes has always been a major concern for transport modellers and planners. The development of a reliable strategic transport model depends significantly on the input data and on the calibration of the model's parameters to reflect the existing situation. Trip distribution is the second step in four-step modelling (FSM) and is complex due to its behavioural nature. The gravity model is the most common method for trip distribution. The spatial separation between the origin and destination (OD) zones is reflected in the gravity model by applying deterrence functions, which provide an opportunity to include people's behaviour in choosing their destinations based on the distance, time and cost of their journeys. Deterrence functions play an important role in the distribution of trips within a study area, as they simulate trip distances, and they should therefore be calibrated for any particular strategic transport model to correctly reflect trip behaviour within the modelled area. This paper reviews the most common deterrence functions and proposes a calibrated deterrence function for work trips within the Perth metropolitan area, based on the information obtained from the latest available household data and Perth and Region Travel Survey (PARTS) data. As part of this study, a four-step transport model has been developed in EMME software for the Perth metropolitan area to assist with the analysis and findings.
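A gravity model with a deterrence function can be sketched as follows. This is a minimal illustration assuming a negative-exponential deterrence f(c) = exp(-beta * c) and made-up zone data; the paper's calibrated function may take a different form (e.g. power or combined):

```python
import numpy as np

def gravity_trips(productions, attractions, cost, beta):
    """Singly-constrained gravity model with negative-exponential deterrence:
    T_ij = P_i * A_j * f(c_ij) / sum_k A_k * f(c_ik),  f(c) = exp(-beta * c).
    Each origin's produced trips are conserved by construction."""
    f = np.exp(-beta * cost)
    weights = attractions[None, :] * f               # A_j * f(c_ij)
    return productions[:, None] * weights / weights.sum(axis=1, keepdims=True)

# Two origin zones, three destination zones; generalized cost in minutes
P = np.array([100.0, 50.0])
A = np.array([60.0, 30.0, 10.0])
cost = np.array([[10.0, 20.0, 30.0],
                 [25.0, 5.0, 15.0]])
trips = gravity_trips(P, A, cost, beta=0.1)
row_sums = trips.sum(axis=1)   # trips from each origin are conserved
```

Calibration would adjust beta until the modelled trip-length distribution matches the observed one from the household survey.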

Keywords: deterrence function, four-step modelling, origin destination, transport model

Procedia PDF Downloads 162
1388 A Neural Network Modelling Approach for Predicting Permeability from Well Logs Data

Authors: Chico Horacio Jose Sambo

Abstract:

Recently, neural networks have gained popularity for solving complex nonlinear problems. Permeability is a fundamental reservoir characteristic that is anisotropically distributed and behaves in a non-linear manner. For this reason, permeability prediction from well log data is well suited to neural networks and other computer-based techniques. The main goal of this paper is to predict reservoir permeability from well log data by using a neural network approach. A multi-layer perceptron trained by the backpropagation algorithm was used to build the predictive model. The performance of the model was measured by the correlation coefficient, evaluated on the training, testing, validation and full data sets. The results show that the neural network was capable of reproducing permeability with good accuracy in all cases: the calculated correlation coefficients for training, testing and validation were 0.96273, 0.89991 and 0.87858, respectively. The results can be generalized to other fields after examining new data, and a regional study might then make it possible to characterize reservoir properties with cheap and very quickly constructed models.
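A multi-layer perceptron trained by backpropagation can be sketched in a few dozen lines. The toy below fits a non-linear function of two inputs standing in for normalised log responses; the architecture (one sigmoid hidden layer, linear output) and the target function are illustrative assumptions, not the paper's actual network or data.

```python
import math, random

def train_mlp(xs, ys, hidden=5, lr=0.5, epochs=2000, seed=0):
    """Tiny one-hidden-layer perceptron trained by stochastic
    backpropagation (illustrative only)."""
    rng = random.Random(seed)
    n_in = len(xs[0])
    w1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            h = [sig(sum(w * xi for w, xi in zip(w1[k], x)) + b1[k])
                 for k in range(hidden)]
            out = sum(w2[k] * h[k] for k in range(hidden)) + b2
            err = out - y  # gradient of 0.5*(out-y)^2 w.r.t. out
            for k in range(hidden):
                dh = err * w2[k] * h[k] * (1 - h[k])  # before w2 update
                w2[k] -= lr * err * h[k]
                for i in range(n_in):
                    w1[k][i] -= lr * dh * x[i]
                b1[k] -= lr * dh
            b2 -= lr * err
    def predict(x):
        h = [sig(sum(w * xi for w, xi in zip(w1[k], x)) + b1[k])
             for k in range(hidden)]
        return sum(w2[k] * h[k] for k in range(hidden)) + b2
    return predict

# Fit a simple non-linear relation between two "log" inputs and a target.
data_x = [[a / 10.0, b / 10.0] for a in range(10) for b in range(10)]
data_y = [x[0] * x[1] for x in data_x]
predict = train_mlp(data_x, data_y)
```

A real workflow would split the data into the training, testing and validation sets the abstract reports correlation coefficients for.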

Keywords: neural network, permeability, multilayer perceptron, well log

Procedia PDF Downloads 397
1387 Multiclass Support Vector Machines with Simultaneous Multi-Factors Optimization for Corporate Credit Ratings

Authors: Hyunchul Ahn, William X. S. Wong

Abstract:

Corporate credit rating prediction is one of the most important topics studied by researchers in the last decade. Over that period, researchers have pushed to enhance the accuracy of corporate credit rating prediction models by applying several data-driven tools, including statistical and artificial intelligence methods. Among them, the multiclass support vector machine (MSVM) has been widely applied due to its good predictability. However, the heuristics that dictate the MSVM architectural variables, such as the parameters of the kernel function and the choice of appropriate feature and instance subsets, have become the main target of criticism of MSVM. This study presents a hybrid MSVM model that is intended to optimize all of these design factors: feature selection, instance selection, and the kernel parameters. Our model adopts a genetic algorithm (GA) to simultaneously optimize multiple heterogeneous design factors of the MSVM.
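The GA side of such a hybrid can be sketched with a bit-string chromosome that jointly encodes kernel parameters and a feature mask. The sketch below substitutes a stand-in fitness function for the cross-validated MSVM accuracy the paper would actually evaluate; the chromosome layout and fitness are assumptions for illustration.

```python
import random

def genetic_search(fitness, n_bits, pop_size=30, gens=40, p_mut=0.02, seed=1):
    """Simple generational GA: tournament selection, one-point
    crossover, bit-flip mutation. In the paper's hybrid model the
    chromosome would jointly encode the MSVM's kernel parameters,
    feature mask and instance mask."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    def tournament():
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            child = [bit ^ (rng.random() < p_mut) for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy chromosome: first 4 bits ~ a kernel-parameter code, rest ~ feature mask.
# Stand-in fitness rewards kernel code 0b1010 and selecting odd-indexed features.
def toy_fitness(ch):
    kernel_score = -abs(int("".join(map(str, ch[:4])), 2) - 10)
    mask = ch[4:]
    feature_score = sum(b for i, b in enumerate(mask) if i % 2 == 1) \
                    - sum(b for i, b in enumerate(mask) if i % 2 == 0)
    return kernel_score + feature_score

best = genetic_search(toy_fitness, n_bits=12)
```

In the real model, evaluating `fitness` would mean training and validating an MSVM per chromosome, which is why simultaneous optimization of all factors is the expensive part.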

Keywords: corporate credit rating prediction, feature selection, genetic algorithms, instance selection, multiclass support vector machines

Procedia PDF Downloads 289
1386 Dual-Channel Reliable Breast Ultrasound Image Classification Based on Explainable Attribution and Uncertainty Quantification

Authors: Haonan Hu, Shuge Lei, Dasheng Sun, Huabin Zhang, Kehong Yuan, Jian Dai, Jijun Tang

Abstract:

This paper focuses on the classification of breast ultrasound images and investigates the reliability measurement of classification results. A dual-channel evaluation framework was developed based on the proposed inference reliability and predictive reliability scores. For the inference reliability evaluation, human-aligned, doctor-agreed inference rationales based on the improved feature attribution algorithm SP-RISA are applied. Uncertainty quantification is used to evaluate the predictive reliability via test-time augmentation. The effectiveness of this reliability evaluation framework has been verified on the breast ultrasound clinical dataset YBUS, and its robustness has been verified on the public dataset BUSI. The expected calibration errors on both datasets are significantly lower than those of traditional evaluation methods, which demonstrates the effectiveness of the proposed reliability measurement.
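The predictive-reliability channel can be illustrated with a generic test-time-augmentation loop: run the classifier on several randomly augmented copies of one image and use the spread of the predictions as an uncertainty score. The toy model and brightness-noise augmentation below are stand-ins for the paper's classifier and ultrasound pipeline.

```python
import random, statistics

def tta_uncertainty(predict, image, augment, n=20, seed=0):
    """Apply random test-time augmentations to one image, collect the
    model's positive-class probabilities, and return their mean and
    standard deviation. A high spread flags an unreliable prediction."""
    rng = random.Random(seed)
    probs = [predict(augment(image, rng)) for _ in range(n)]
    return statistics.mean(probs), statistics.pstdev(probs)

# Toy model: "probability" is just the mean pixel value, clipped to [0, 1].
toy_predict = lambda img: min(1.0, max(0.0, sum(img) / len(img)))
# Toy augmentation: add small uniform brightness noise per pixel.
toy_augment = lambda img, rng: [p + rng.uniform(-0.05, 0.05) for p in img]

mean_p, spread = tta_uncertainty(toy_predict, [0.6] * 64, toy_augment)
```

A calibration check would then compare these uncertainty scores against actual error rates, which is what the expected calibration error measures.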

Keywords: medical imaging, ultrasound imaging, XAI, uncertainty measurement, trustworthy AI

Procedia PDF Downloads 86
1385 SC-LSH: An Efficient Indexing Method for Approximate Similarity Search in High Dimensional Space

Authors: Sanaa Chafik, Imane Daoudi, Mounim A. El Yacoubi, Hamid El Ouardi

Abstract:

Locality Sensitive Hashing (LSH) is one of the most promising techniques for solving the nearest neighbour search problem in high dimensional space. Euclidean LSH is the most popular variant of LSH and has been successfully applied in many multimedia applications. However, Euclidean LSH has limitations that affect structure and query performance. Its main limitation is large memory consumption: in order to achieve good accuracy, a large number of hash tables is required. In this paper, we propose a new hashing algorithm that overcomes the storage space problem and improves query time while keeping accuracy similar to that achieved by the original Euclidean LSH. Experimental results on a real large-scale dataset show that the proposed approach achieves good performance and consumes less memory than Euclidean LSH.
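For readers unfamiliar with the baseline, a single Euclidean LSH hash function can be sketched as follows: project onto random Gaussian directions, shift, and quantize into buckets of width w, so that nearby points collide with higher probability than distant ones. Parameter values here are illustrative.

```python
import random

def make_e2lsh(dim, w=4.0, k=3, seed=0):
    """One Euclidean-LSH composite hash g = (h_1, ..., h_k), where
    h_i(v) = floor((a_i . v + b_i) / w), a_i drawn from a standard
    Gaussian and b_i uniform in [0, w)."""
    rng = random.Random(seed)
    a = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(k)]
    b = [rng.uniform(0, w) for _ in range(k)]
    def g(v):
        return tuple(
            int((sum(ai * vi for ai, vi in zip(a[i], v)) + b[i]) // w)
            for i in range(k))
    return g

g = make_e2lsh(dim=8)
```

The memory problem the abstract targets comes from needing many such tables (each with its own g) to reach high recall.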

Keywords: approximate nearest neighbor search, content based image retrieval (CBIR), curse of dimensionality, locality sensitive hashing, multidimensional indexing, scalability

Procedia PDF Downloads 319
1384 Research on Knowledge Graph Inference Technology Based on Proximal Policy Optimization

Authors: Yihao Kuang, Bowen Ding

Abstract:

With the increasing scale and complexity of knowledge graphs, modern knowledge graphs contain more and more types of entity, relationship, and attribute information. Therefore, in recent years, it has become a trend for knowledge graph inference to use reinforcement learning to deal with large-scale, incomplete, and noisy knowledge graphs and to improve inference effectiveness and interpretability. The Proximal Policy Optimization (PPO) algorithm allows extensive updates of the policy parameters while constraining the update extent to maintain training stability. This characteristic enables PPO to converge to improved strategies more rapidly, often demonstrating enhanced performance early in the training process. Furthermore, PPO can effectively reuse collected experience data across several update epochs, enhancing sample utilization; this means that even with limited resources, PPO can train efficiently on reinforcement learning tasks. Based on these characteristics, this paper aims to obtain better and more efficient inference by introducing PPO into knowledge graph inference technology.
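The update-constraining mechanism the abstract refers to is PPO's clipped surrogate objective, which for a single action can be written down directly (the knowledge-graph agent would compute this per relation-traversal step; the function below is the standard formulation, not the paper's code):

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate objective for one action:
    L = min(r * A, clip(r, 1 - eps, 1 + eps) * A), where r is the
    probability ratio pi_new(a|s) / pi_old(a|s). Clipping removes the
    incentive to move the policy far from the old one, which is what
    keeps training stable."""
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)
```

For a positive advantage the gain is capped once the ratio exceeds 1 + eps; for a negative advantage, shrinking the ratio below 1 - eps does not reduce the penalty, so neither direction rewards drastic policy shifts.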

Keywords: reinforcement learning, PPO, knowledge inference, supervised learning

Procedia PDF Downloads 58
1383 Exploring the Visual Representations of Neon Signs and Its Vernacular Tacit Knowledge of Neon Making

Authors: Brian Kwok

Abstract:

Hong Kong is well known as "the Pearl of the Orient" for its spectacular night view, with a vast array of decorative neon lights on its streets. Neon signs were first used as a pervasive medium of commercial advertising for businesses ranging from movie theatres to nightclubs and department stores, and were later appropriated by artists as a medium for artwork. As a well-established visual language, they display text in bilingual format due to the British colonial influence, sometimes arranged in opposite reading orders. Research on neon signs as a visual representation is rare but significant, because they are part of people's collective memories of a unique cityscape associated with the shifting values of daily life and cultural identity. Nevertheless, with the current policy of removing abandoned neon signs, their total number has declined dramatically in recent years. The Buildings Department estimated that there were 120,000 unauthorized signboards (including neon signs) in Hong Kong in 2013, and such signboards have been removed at an estimated rate of 1,600 per year since 2006. In other words, the vernacular cultural values and historical continuity of neon signs will gradually vanish if no immediate action is taken to document them for research and cultural preservation. The Hong Kong Neon Signs Archive project was therefore established in June 2015, and over 100 neon signs have been photo-documented so far. Through content analysis, this project explores two components of neon signs: the use of visual languages and the vernacular tacit knowledge of neon makers. It attempts to answer these questions about Hong Kong's neon signs: 'In what ways are visual representations used to produce our cityscapes and streetscapes?'; 'What are the visual languages and conventions of usage in different business types?'; 'What tacit knowledge is applied when producing these visual forms of neon signs?'

Keywords: cityscapes, neon signs, tacit knowledge, visual representation

Procedia PDF Downloads 296
1382 Robust Optimisation Model and Simulation-Particle Swarm Optimisation Approach for Vehicle Routing Problem with Stochastic Demands

Authors: Mohanad Al-Behadili, Djamila Ouelhadj

Abstract:

In this paper, a specific type of vehicle routing problem under stochastic demand (SVRP) is considered. This problem is of great importance because it models many real-world vehicle routing applications. The paper uses a robust optimisation model to solve the problem, along with a novel Simulation-Particle Swarm Optimisation (Sim-PSO) approach based on hybridizing the Monte Carlo simulation technique with the PSO algorithm. A comparative study of the proposed model and the Sim-PSO approach against other solution methods in the literature is given, including an Analysis of Variance (ANOVA) to show the ability of the model and solution method to solve the complicated SVRP. The experimental results show that the proposed model and the Sim-PSO approach have a significant impact on the obtained solution, providing better quality solutions compared with well-known algorithms in the literature.
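The hybridization idea can be sketched generically: a standard PSO loop in which each particle's fitness is the Monte Carlo average of a stochastic cost. The quadratic toy cost below stands in for the recourse cost of an SVRP route under random demands; it is not the paper's model.

```python
import random

def sim_pso(mc_cost, dim, n_particles=15, iters=60, samples=30,
            bounds=(-5.0, 5.0), seed=3):
    """Simulation-PSO sketch: fitness of a position is the Monte Carlo
    mean of a stochastic cost function over `samples` draws."""
    rng = random.Random(seed)
    lo, hi = bounds
    def fitness(x):
        return sum(mc_cost(x, rng) for _ in range(samples)) / samples
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [fitness(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):  # standard inertia/cognitive/social update
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.5 * rng.random() * (pbest[i][d] - xs[i][d])
                            + 1.5 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            f = fitness(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = xs[i][:], f
    return gbest, gbest_f

# Stand-in stochastic cost: distance to an unknown optimum at (1, 2)
# plus demand-like noise, mimicking a noisy route-cost evaluation.
def toy_cost(x, rng):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2 + rng.gauss(0, 0.1)

best, best_f = sim_pso(toy_cost, dim=2)
```

The number of Monte Carlo samples trades evaluation cost against noise in the fitness comparison, which is exactly where ANOVA-style comparisons between methods become useful.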

Keywords: stochastic vehicle routing problem, robust optimisation model, Monte Carlo simulation, particle swarm optimisation

Procedia PDF Downloads 270
1381 High-Resolution Flood Hazard Mapping Using Two-Dimensional Hydrodynamic Model Anuga: Case Study of Jakarta, Indonesia

Authors: Hengki Eko Putra, Dennish Ari Putro, Tri Wahyu Hadi, Edi Riawan, Junnaedhi Dewa Gede, Aditia Rojali, Fariza Dian Prasetyo, Yudhistira Satya Pribadi, Dita Fatria Andarini, Mila Khaerunisa, Raditya Hanung Prakoswa

Abstract:

Catastrophe risk management is only possible if we are able to calculate the exposed risks. Jakarta is an important city economically, socially, and politically, and at the same time it is exposed to severe floods; nevertheless, flood risk calculation in the area is still very limited. This study calculated the flood risk for Jakarta using the two-dimensional hydrodynamic model ANUGA together with the one-dimensional model HEC-RAS, covering the 13 major rivers in Jakarta. ANUGA simulates the physical and dynamical interaction of streamflow with river geometry and land cover to produce a 1-meter resolution inundation map. The streamflow values used as model input were obtained from hydrological analysis of rainfall data using the hydrologic model HEC-HMS. Probabilistic streamflow was derived from probabilistic rainfall using the Log-Pearson III, Normal, and Gumbel statistical distributions, with goodness of fit assessed by the Chi-Square and Kolmogorov-Smirnov tests. The 2007 flood event was used as a benchmark to evaluate the accuracy of the model output. Property damage estimates were calculated based on flood depth for the 1-, 5-, 10-, 25-, 50-, and 100-year return periods against housing value data from BPS-Statistics Indonesia and the Centre for Research and Development of Housing and Settlements, Ministry of Public Works, Indonesia. The vulnerability factor was derived from flood insurance claims. Jakarta's flood loss estimates for the return periods of 1, 5, 10, 25, 50, and 100 years are, respectively, Rp 1.30 t, Rp 16.18 t, Rp 16.85 t, Rp 21.21 t, Rp 24.32 t, and Rp 24.67 t, against a total building value of Rp 434.43 t.
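The return-period step can be illustrated with one of the candidate distributions the study tested. The sketch below fits a Gumbel (EV1) distribution to annual-maximum discharges by the method of moments and evaluates the T-year quantile; the sample flows are illustrative, not the study's data.

```python
import math

def gumbel_fit(flows):
    """Method-of-moments fit of a Gumbel (EV1) distribution."""
    n = len(flows)
    mean = sum(flows) / n
    std = math.sqrt(sum((q - mean) ** 2 for q in flows) / (n - 1))
    alpha = math.sqrt(6) * std / math.pi   # scale parameter
    u = mean - 0.5772 * alpha              # location (Euler-Mascheroni const.)
    return u, alpha

def return_period_flow(u, alpha, T):
    """T-year quantile: Q_T = u - alpha * ln(-ln(1 - 1/T))."""
    return u - alpha * math.log(-math.log(1.0 - 1.0 / T))

# Illustrative annual-maximum discharges (m^3/s); not the study's data.
annual_max = [310, 290, 450, 380, 520, 300, 410, 610, 350, 480]
u, alpha = gumbel_fit(annual_max)
q100 = return_period_flow(u, alpha, 100)
```

These design flows then drive the hydrodynamic model, one inundation run per return period.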

Keywords: 2D hydrodynamic model, ANUGA, flood, flood modeling

Procedia PDF Downloads 270
1380 Modeling of Glycine Transporters in Mammalian Using the Probability Approach

Authors: K. S. Zaytsev, Y. R. Nartsissov

Abstract:

Glycine is one of the key inhibitory neurotransmitters in the central nervous system (CNS), and glycinergic transmission is highly dependent on its appropriate reuptake from the synaptic cleft. Glycine transporters (GlyT) of types 1 and 2 are the proteins providing glycine transport back into neuronal and glial cells, along with Na⁺ and Cl⁻ co-transport. The distribution and stoichiometry of GlyT1 and GlyT2 differ in detail; GlyT2 is the more interesting for this research, as it takes glycine back up into neurons, whereas GlyT1 is located in glial cells. During GlyT2 activity, the translocation of the amino acid is accompanied by the consecutive binding of one chloride and three sodium ions (two sodium ions for GlyT1). In the present study, we developed a computer simulator of GlyT2 and GlyT1 activity, based on known experimental data, for the quantitative estimation of membrane glycine transport. The functioning of a single protein was described using a probability approach in which each enzyme state is considered separately. The scheme of transporter functioning, realized as a sequence of elementary steps, takes into account each event of substrate association and dissociation. Computer experiments using up-to-date kinetic parameters yielded the number of translocated glycine molecules and Na⁺ and Cl⁻ ions per time period. The flexibility of the developed software makes it possible to evaluate the glycine reuptake pattern over time under different internal characteristics of the enzyme's conformational transitions. We investigated the behavior of the system over a wide range of the equilibrium constant (from 0.2 to 100), which has not been determined experimentally. A significant influence of the equilibrium constant on the glycine transfer process is shown in the range from 0.2 to 10; environmental conditions such as ion and glycine concentrations become decisive when the constant lies outside this range.
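The probability approach can be caricatured as a biased random walk over the transporter's conformational states. The sketch below is a deliberately simplified stand-in for the paper's scheme: six generic states, a uniform forward probability per state playing the role of the equilibrium-constant bias, and one glycine counted per completed cycle; the real GlyT2 scheme interleaves one Cl⁻ and three Na⁺ binding events among the steps with state-specific rates.

```python
import random

def simulate_transporter(p_fwd, n_states, steps, seed=0):
    """Single-protein sketch: at each time step the transporter moves
    one state forward with probability p_fwd[state], or one state back
    (dissociation) otherwise. A completed cycle through all n_states
    counts as one glycine molecule carried across the membrane."""
    rng = random.Random(seed)
    state, carried = 0, 0
    for _ in range(steps):
        if rng.random() < p_fwd[state]:
            state += 1
            if state == n_states:   # full cycle: glycine translocated
                state, carried = 0, carried + 1
        elif state > 0:
            state -= 1
    return carried

# Forward bias per elementary step as a crude equilibrium-constant analogue.
fast = simulate_transporter([0.9] * 6, n_states=6, steps=20000)
slow = simulate_transporter([0.55] * 6, n_states=6, steps=20000)
```

Even in this caricature, the net flux is strongly sensitive to the per-step bias near 0.5 and saturates as the bias grows, mirroring the abstract's observation that the equilibrium constant matters most in an intermediate range.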

Keywords: glycine, inhibitory neurotransmitters, probability approach, single protein functioning

Procedia PDF Downloads 110
1379 Theoretical and Experimental Investigations of Binary Systems for Hydrogen Storage

Authors: Gauthier Lefevre, Holger Kohlmann, Sebastien Saitzek, Rachel Desfeux, Adlane Sayede

Abstract:

Hydrogen is a promising energy carrier, compatible with the sustainable energy concept. In this context, solid-state hydrogen storage is the key challenge in developing a hydrogen economy. Their capability to absorb large quantities of hydrogen makes intermetallic systems of particular interest. In this study, efforts have been devoted to the theoretical investigation of binary systems under various constraints. On the one hand, beyond the hydrogen-storage question, a reinvestigation of the crystal structures of the palladium-arsenic system shows, with experimental validation, that binary systems can still yield new or previously unknown relevant structures. On the other hand, various Mg-based binary systems were theoretically scrutinized in order to find new alloys of interest for hydrogen storage. Taking the effect of pressure into account reveals a wide range of alternative structures, radically changing the stable compounds of the studied binary systems. Similar constraints, induced by pulsed laser deposition, have been applied to binary systems, and the results are presented.

Keywords: binary systems, evolutionary algorithm, first principles study, pulsed laser deposition

Procedia PDF Downloads 263
1378 A Deterministic Approach for Solving the Hull and White Interest Rate Model with Jump Process

Authors: Hong-Ming Chen

Abstract:

This work considers the resolution of the Hull and White interest rate model with a jump process. A deterministic process is adopted to model the random behavior of interest rate variation as deterministic perturbations depending on the time t. The Brownian motion and jump uncertainties are represented by a piecewise constant function w(t) and a point function θ(t), respectively. It is shown that the interest rate function and the yield function of the Hull and White interest rate model with jumps can be obtained by solving a nonlinear semi-infinite programming problem. A relaxed cutting plane algorithm is then proposed for solving the resulting optimization problem. The method is calibrated to 3-month U.S. Treasury securities data and is used to analyze several effects on interest rate prices, including interest rate variability and the negative correlation between stock returns and interest rates. The numerical results illustrate that our approach generates yield functions with minimal fitting errors and small oscillation.
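The deterministic reformulation can be illustrated with a simple Euler integration of the short-rate dynamics, with the Brownian term replaced by a deterministic perturbation and jumps applied at fixed times. This is a reading of the setup for illustration only: the mean-reversion speed, perturbation, and jump schedule below are assumed values, and the paper's actual contribution (the semi-infinite program and cutting-plane solver) is not reproduced here.

```python
def hull_white_rate(r0, a, theta, w, jumps, t_end, dt=0.01):
    """Deterministic sketch: integrate dr = (theta(t) - a*r + w(t)) dt
    by the Euler method, where w(t) is a deterministic perturbation
    standing in for the Brownian term, and `jumps` maps jump times to
    jump sizes (the point-function part)."""
    r, t = r0, 0.0
    n = int(round(t_end / dt))
    for k in range(n):
        r += (theta(t) - a * r + w(t)) * dt
        t = (k + 1) * dt
        for jt, size in jumps.items():       # apply any jump crossed this step
            if t - dt < jt <= t:
                r += size
    return r

# Mean reversion toward theta/a = 3%, with and without a jump at t = 1.
r_no_jump = hull_white_rate(0.01, a=1.0, theta=lambda t: 0.03,
                            w=lambda t: 0.0, jumps={}, t_end=5.0)
r_jump = hull_white_rate(0.01, a=1.0, theta=lambda t: 0.03,
                         w=lambda t: 0.0, jumps={1.0: -0.005}, t_end=5.0)
```

Mean reversion pulls the rate to theta/a, and the effect of an early jump decays at rate a, which is the qualitative behavior the calibrated model exploits.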

Keywords: optimization, interest rate model, jump process, deterministic

Procedia PDF Downloads 158
1377 Use of the Budyko Framework to Estimate the Virtual Water Content in Shijiazhuang Plain, North China

Authors: Enze Zhang

Abstract:

One of the most challenging steps in the virtual water content (VWC) analysis of crops is properly obtaining the total volume of consumptive water use (CWU), and therefore the choice of a reliable crop CWU estimation method. In practice, most previous studies obtain crop CWU by the classical procedure of calculating crop evapotranspiration as reference evapotranspiration multiplied by appropriate coefficients, such as the crop coefficient and water stress coefficients. However, this calculation requires a large amount of field experimental data at the point scale and, more seriously, may produce substantial deviation between the calculated and actual CWU whenever the growing conditions differ from the standard conditions. Since evapotranspiration caused by crop planting plays a vital role in the surface water-energy balance of an agricultural region, this study instead estimates crop evapotranspiration with the Budyko framework. After briefly introducing the development of the Budyko framework, we choose a modified unsteady-state Budyko framework to better evaluate the actual CWU and apply it to an agricultural irrigation area in the North China Plain that relies on groundwater for irrigation. Using agricultural statistics, the calculated CWU was further converted into VWC and its components for the crops at the annual scale. Results show that the average values of VWC, VWC_blue and VWC_green all show a downward trend with increased agricultural production and improved acreage. Compared with previous research, the VWC calculated by the Budyko framework agrees well with part of the earlier studies, while for others the value is greater. Our research also suggests that this methodology may be reliable and convenient for investigating virtual water throughout various agricultural regions of the world.
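The core of the approach can be illustrated with the original steady-state Budyko curve (the study uses a modified unsteady-state form, which is not reproduced here), followed by the conversion from evapotranspiration to VWC; the site numbers below are illustrative.

```python
import math

def budyko_et(precip, pet):
    """Classic Budyko curve: actual evapotranspiration E from
    precipitation P and potential evapotranspiration E0,
    E = P * sqrt( (E0/P) * tanh(P/E0) * (1 - exp(-E0/P)) )."""
    phi = pet / precip                      # aridity index E0/P
    return precip * math.sqrt(phi * math.tanh(1.0 / phi)
                              * (1.0 - math.exp(-phi)))

def virtual_water_content(aet_mm, area_ha, yield_ton):
    """VWC in m^3 per ton: consumptive water use over the growing
    area divided by crop production."""
    cwu_m3 = aet_mm / 1000.0 * area_ha * 10000.0   # mm over ha -> m^3
    return cwu_m3 / yield_ton

# Annual water balance for an illustrative semi-arid site (mm/yr).
aet = budyko_et(precip=500.0, pet=1000.0)
vwc = virtual_water_content(aet, area_ha=100.0, yield_ton=600.0)
```

Splitting `aet` into precipitation-fed and irrigation-fed fractions would yield the VWC_green and VWC_blue components the abstract reports.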

Keywords: virtual water content, Budyko framework, consumptive water use, crop evapotranspiration

Procedia PDF Downloads 328
1376 Towards Efficient Reasoning about Families of Class Diagrams Using Union Models

Authors: Tejush Badal, Sanaa Alwidian

Abstract:

Class diagrams are useful tools within the Unified Modelling Language (UML) for modelling and visualizing the relationships between, and properties of, objects within a system. As a system evolves over time and space (e.g., across products), a series of models with several commonalities and variabilities creates what is known as a model family. When there are several versions of a model, examining each model individually becomes expensive in terms of computation resources. To avoid performing redundant operations, this paper proposes an approach for representing a family of class diagrams as a union model: a single generic model that captures the whole family. The paper aims to analyze and reason about a family of class diagrams using union models, as opposed to analyzing each member model in the family individually. The union algorithm provides a holistic view of the model family that cannot otherwise be obtained from an individual analysis approach; this, in turn, speeds up the analysis of a family of models compared with analyzing its models one at a time.
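The union idea can be sketched by treating each class diagram as a set of elements and annotating every element of the union with the family members it occurs in. The tuple encoding of classes and associations below is an assumed simplification for illustration, not the paper's formalism.

```python
def union_model(models):
    """Union-model sketch: each class diagram is a set of elements,
    here ('class', name) and ('assoc', src, dst) tuples. The union
    maps every element to the set of member indices it appears in,
    so a property can be checked once over the union instead of once
    per model."""
    union = {}
    for idx, model in enumerate(models):
        for element in model:
            union.setdefault(element, set()).add(idx)
    return union

# Three versions of a small diagram forming one model family.
v1 = {("class", "Order"), ("class", "Item"), ("assoc", "Order", "Item")}
v2 = v1 | {("class", "Invoice"), ("assoc", "Order", "Invoice")}
v3 = v2 - {("class", "Item"), ("assoc", "Order", "Item")}
u = union_model([v1, v2, v3])

# Elements present in every member hold family-wide; the rest are
# variability points.
common = {e for e, members in u.items() if len(members) == 3}
```

A family-wide query ("does every version keep class Order?") then touches the union once instead of iterating over all member models.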

Keywords: analysis, class diagram, model family, unified modeling language, union model

Procedia PDF Downloads 65