Search results for: and the aggregate trapezoidal hesitant fuzzy decision matrix will be built. The case is considered when information on the attribute weights is completely unknown. The attribute weights are identified based on the De Luca and Termini information entropy concept
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 51852


51282 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of application. In all these fields, the amount of collected data is increasing quickly, and with this increase, computation speed becomes the critical factor. Data reduction is one solution to this problem. Removing redundancy in rough sets can be achieved with a reduct. Many algorithms for generating reducts have been developed, but most of them are software implementations only and therefore have many limitations. A microprocessor uses a fixed word length and consumes a lot of time for both fetching and processing of instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process data faster than software. A reduct is a subset of the condition attributes that provides the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes. Moreover, every reduct contains all the attributes from the core. In this paper, a hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as input. The output of the algorithm is a superreduct, which is a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., the attributes that are most frequent in the decision table.
The algorithm described above has two disadvantages: i) it generates a superreduct instead of a reduct, and ii) the first stage may be unnecessary if the core is empty. For systems focused on fast computation of the reduct, however, the first disadvantage is not a key problem. The core calculation can be achieved with a combinational logic block and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit to control the calculations. For research purposes, the algorithm was also implemented in C and run on a PC. The execution times of the reduct calculation in hardware and software were compared. Results show an increase in the speed of data processing.
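As a software-side illustration, the two stages described above can be sketched in Python; the decision-table representation, attribute indexing and greedy tie-breaking below are simplified assumptions, not the paper's FPGA design:

```python
from collections import Counter

def discernibility_matrix(objects, decisions):
    """For each pair of objects with different decisions, record the set of
    condition attributes (by index) on which the two objects differ."""
    entries = []
    n = len(objects)
    for i in range(n):
        for j in range(i + 1, n):
            if decisions[i] != decisions[j]:
                diff = frozenset(k for k in range(len(objects[i]))
                                 if objects[i][k] != objects[j][k])
                if diff:
                    entries.append(diff)
    return entries

def core(entries):
    """Singleton entries of the discernibility matrix form the core
    (this is what the hardware 'singleton detector' finds)."""
    return set(a for e in entries for a in e if len(e) == 1)

def superreduct(objects, decisions):
    """Stage 1: compute the core. Stage 2: greedily add the most frequent
    attributes until every discernibility entry is covered."""
    entries = discernibility_matrix(objects, decisions)
    chosen = core(entries)
    uncovered = [e for e in entries if not (e & chosen)]
    freq = Counter(a for e in entries for a in e)
    for a, _ in freq.most_common():
        if not uncovered:
            break
        if a not in chosen:
            chosen.add(a)
            uncovered = [e for e in uncovered if a not in e]
    return chosen
```

The result may contain removable attributes, which is exactly the superreduct property discussed above.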

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 196
51281 Method of Synthesis of Controlled Generators of Balanced Strict Avalanche Criterion Functions

Authors: Ali Khwaldeh, Nimer Adwan

Abstract:

In this paper, a method for constructing a controlled balanced Boolean function satisfying the Strict Avalanche Criterion (SAC) is proposed. The proposed method is based on the use of three orthogonal nonlinear components, unlike high-order SAC functions. The generator synthesized by the proposed method therefore has separate sets of control and information inputs. The method is simple and practical to implement, and it allows synthesizing a SAC function generator with fixed control and information inputs. This ensures greater efficiency of the built generator compared to high-order SAC functions used as generators. Accordingly, the method is completely formalized and implemented as a software product.
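For readers unfamiliar with the SAC, a brute-force check of the criterion can be written in a few lines of Python; the example functions below are standard textbook illustrations, not the generator proposed in the paper:

```python
from itertools import product

def satisfies_sac(f, n):
    """A Boolean function f on n bits satisfies the Strict Avalanche
    Criterion if flipping any single input bit changes the output for
    exactly half of all inputs."""
    inputs = list(product((0, 1), repeat=n))
    for bit in range(n):
        changes = 0
        for x in inputs:
            y = list(x)
            y[bit] ^= 1  # flip one input bit
            if f(x) != f(tuple(y)):
                changes += 1
        if changes != len(inputs) // 2:
            return False
    return True
```

For example, the bent function f(x) = x0·x1 ⊕ x2·x3 passes this check, while the linear function x0 ⊕ x1 fails it, since flipping either bit always changes the output.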

Keywords: boolean function, controlled balanced boolean function, strict avalanche criterion, orthogonal nonlinear components

Procedia PDF Downloads 135
51280 Decision Support System for Optimal Placement of Wind Turbines in Electric Distribution Grid

Authors: Ahmed Ouammi

Abstract:

This paper presents an integrated decision framework to support decision makers in the selection and optimal allocation of wind power plants in the electric grid. The developed approach aims to maximize the benefit of the project investment over the planning period. The proposed decision model considers the main cost components, meteorological data, environmental impacts, operation and regulation constraints, and territorial information. The decision framework is expressed as a stochastic constrained optimization problem whose aim is to identify suitable locations and the related optimal wind turbine technology, subject to the operational constraints and maximizing the benefit. The developed decision support system is applied to a case study to demonstrate and validate its performance.
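The core idea of allocating turbines to maximize benefit under constraints can be illustrated with a deliberately simplified sketch; the Monte Carlo benefit estimate, the cubic power curve and the greedy budget rule below are illustrative assumptions, not the paper's stochastic optimization model:

```python
import random

def expected_benefit(wind_speeds, rated_speed, rated_power_mw, price_per_mwh,
                     hours=8760, draws=1000):
    """Monte Carlo estimate of a site's annual revenue (M$): turbine output
    grows with the cube of wind speed, capped at the rated power."""
    def power(v):
        return min(rated_power_mw, rated_power_mw * (v / rated_speed) ** 3)
    mean_mw = sum(power(random.choice(wind_speeds)) for _ in range(draws)) / draws
    return mean_mw * hours * price_per_mwh / 1e6

def select_sites(sites, budget):
    """Greedy allocation: pick candidate sites by benefit-to-cost ratio
    until the investment budget is exhausted.
    Each site is a tuple (name, expected_benefit, cost)."""
    chosen, spent, total_benefit = [], 0.0, 0.0
    for name, benefit, cost in sorted(sites, key=lambda s: s[1] / s[2], reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
            total_benefit += benefit
    return chosen, total_benefit
```

A full formulation would replace the greedy rule with a stochastic constrained program, but the greedy version already shows how scenario sampling and a budget constraint interact.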

Keywords: decision support systems, electric power grid, optimization, wind energy

Procedia PDF Downloads 134
51279 Development of Enhanced Data Encryption Standard

Authors: Benjamin Okike

Abstract:

There is a need to hide information along the information superhighway. Today, information relating to the survival of individuals, organizations, or government agencies is transmitted from one point to another, and adversaries are always on the watch to intercept any information that would enable them to inflict psychological 'injuries' on their victims. With information encryption, this can be prevented completely or at worst reduced to the barest minimum. Many encryption techniques have been proposed, and some of them are already being implemented; however, adversaries keep discovering loopholes in them to perpetrate their plans. In this work, we propose the Enhanced Data Encryption Standard (EDES), which deploys randomly generated numbers as its encryption method. Each time encryption is carried out, a new set of random numbers is generated, thereby making it almost impossible for cryptanalysts to decrypt any information encrypted with this newly proposed method.
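The abstract does not specify EDES's internals, but the idea of a fresh set of random numbers per encryption resembles a one-time-pad construction, which can be sketched as follows (the function names and key handling are illustrative assumptions, and the random key must still reach the receiver over a secure channel):

```python
import secrets

def encrypt(plaintext: bytes):
    """Generate a fresh random key as long as the message on every call,
    and XOR it with the plaintext (one-time-pad style)."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """XOR is its own inverse, so decryption repeats the operation."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

Because a new key is drawn for every message, two encryptions of the same plaintext produce unrelated ciphertexts, which is the property the abstract appeals to.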

Keywords: encryption, enhanced data encryption, encryption techniques, information security

Procedia PDF Downloads 128
51278 Continuous Differential Evolution Based Parameter Estimation Framework for Signal Models

Authors: Ammara Mehmood, Aneela Zameer, Muhammad Asif Zahoor Raja, Muhammad Faisal Fateh

Abstract:

In this work, the strength of a bio-inspired computational intelligence technique is exploited for parameter estimation of periodic signals using Continuous Differential Evolution (CDE), with an error function defined in the mean-square sense. The multidimensional and nonlinear nature of the problem arising in sinusoidal signal models, along with noise, makes it a challenging optimization task, which is dealt with through the robustness and effectiveness of CDE to ensure convergence and avoid trapping in local minima. In the proposed scheme of Continuous Differential Evolution based Signal Parameter Estimation (CDESPE), the unknown adjustable weights of the signal system identification model are optimized using the CDE algorithm. The performance of the CDESPE model is validated through various statistics-based performance indices over a sufficiently large number of runs, in terms of estimation error, mean squared error and Theil's inequality coefficient. The efficacy of CDESPE is examined by comparison with the actual parameters of the system, with Genetic Algorithm based outcomes, and with various deterministic approaches at different signal-to-noise ratio (SNR) levels.
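A minimal differential evolution loop for sinusoidal parameter estimation, using the mean-square error described above, might look like the following; the DE/rand/1/bin variant, bounds and control parameters are illustrative choices, not the exact CDE scheme of the paper:

```python
import math
import random

def mse(params, t, y):
    """Mean-square error between a parameterized sinusoid and the data."""
    a, w, phi = params
    return sum((a * math.sin(w * ti + phi) - yi) ** 2
               for ti, yi in zip(t, y)) / len(t)

def differential_evolution(err, bounds, pop_size=30, gens=300, F=0.7, CR=0.9, seed=1):
    """Minimal DE/rand/1/bin: mutate three distinct random members,
    crossover with the current member, keep the trial if it is no worse."""
    random.seed(seed)
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [err(p) for p in pop]
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = random.sample([k for k in range(pop_size) if k != i], 3)
            trial = []
            for d in range(dim):
                if random.random() < CR:
                    v = pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
                else:
                    v = pop[i][d]
                lo, hi = bounds[d]
                trial.append(min(max(v, lo), hi))  # clamp to the bounds
            c = err(trial)
            if c <= cost[i]:
                pop[i], cost[i] = trial, c
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]
```

Fitting amplitude, angular frequency and phase of a clean sinusoid this way typically drives the mean-square error close to zero within a few hundred generations.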

Keywords: parameter estimation, bio-inspired computing, continuous differential evolution (CDE), periodic signals

Procedia PDF Downloads 279
51277 A Comparative Analysis of Evacuation Behavior in Case of Cyclone Sidr, Typhoon Yolanda and the Great East Japan Earthquake

Authors: Swarnali Chakma, Akihiko Hokugo

Abstract:

The three case studies reviewed here explain many aspects and complications of evacuation behavior during an emergency period. The scenarios and phenomena of the disasters were different, but a common finding is that after receiving a warning, people do not take it seriously. Many individuals evacuated only after taking some kind of action, for example, returning home, searching for family members, or preparing valuables. Based on a review of the literature, the data identified a number of factors that help explain evacuation behavior during a disaster. In the case of Japan, cultural factors influenced people's behavior; for example, drivers who followed the traffic rules were caught in slow-moving, overcrowded traffic, and some of them were washed away by the tsunami. In terms of Bangladeshi culture, women did not want to evacuate without men, because men and women who do not know each other staying under the same roof is not a regular or comfortable practice. From these three case studies, it is observed that early warning plays an important role in cyclones, typhoons and earthquakes. A high level of residents' trust in the warning system is important for actual evacuation. It is necessary to raise disaster awareness and provide information on vulnerability to cyclone, typhoon and earthquake hazards at the community level. Insights at the local level may help decision makers and other stakeholders make better decisions regarding effective disaster management.

Keywords: disaster management, emergency period, evacuation, shelter, typhoon

Procedia PDF Downloads 135
51276 Modeling Driving Distraction Considering Psychological-Physical Constraints

Authors: Yixin Zhu, Lishengsa Yue, Jian Sun, Lanyue Tang

Abstract:

Modeling driving distraction in microscopic traffic simulation is crucial for enhancing simulation accuracy. Current driving distraction models are mainly derived from physical motion constraints under distracted states, in which distraction-related error terms are added to existing microscopic driver models. However, the model accuracy is not very satisfactory, due to a lack of modeling of the cognitive mechanism underlying distraction. This study models driving distraction based on the Queueing Network-Model Human Processor (QN-MHP), utilizing the queuing structure of the model to perform task invocation and switching for distracted operation and control of the vehicle. Under the QN-MHP's assumption about the cognitive sub-network, server F is a structural bottleneck: new information must wait for the previous information to leave server F before it can be processed there, so the waiting time for task switching needs to be calculated. Since the QN-MHP model has different information processing paths for auditory and visual information, this study divides driving distraction into two types: auditory distraction and visual distraction. For visual distraction, both the visual distraction task and the driving task must pass through the visual perception sub-network, and their stimuli are asynchronous, known as stimulus onset asynchrony (SOA), which must therefore be considered when calculating the waiting time for switching tasks. For auditory distraction, the auditory distraction task and the driving task do not compete for the server resources of the perceptual sub-network; their stimuli can be treated as synchronized, without considering the time difference in receiving them. According to the Theory of Planned Behavior (TPB), this study uses risk entropy as the decision criterion for driver task switching.
A logistic regression model with risk entropy as the independent variable is used to determine whether the driver performs a distraction task, explaining the relationship between perceived risk and distraction. Furthermore, to model a driver's perception characteristics, a neurophysiological model of visual distraction tasks is incorporated into the QN-MHP, which then drives the classical Intelligent Driver Model (IDM). The proposed driving distraction model integrates the psychological cognitive process of a driver with physical motion characteristics, resulting in both high accuracy and interpretability. This paper uses 773 segments of distracted car-following from the Shanghai Naturalistic Driving Study (SH-NDS) data to classify the patterns of distracted behavior on different road facilities, obtaining three distraction patterns: numbness, delay, and aggressiveness. The model was calibrated and verified by simulation. The results indicate that the model can effectively simulate distracted car-following behavior of different patterns on various roadway facilities, and that its performance is better than the traditional IDM with distraction-related error terms. The proposed model overcomes the limitations of physical-constraints-based models in replicating dangerous driving behaviors and the internal characteristics of an individual. Moreover, the model is demonstrated to effectively generate more dangerous distracted driving scenarios, which can be used to construct high-value automated driving test scenarios.
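The logistic-regression step, with risk entropy as the single independent variable, can be sketched as follows; the synthetic data and gradient-descent training are illustrative assumptions, not the SH-NDS calibration:

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit P(engage in distraction task) = sigmoid(w * risk_entropy + b)
    by batch gradient descent on the log-loss."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """Probability of task switching at a given risk entropy."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))
```

On data where drivers engage in distraction only at low perceived risk, the fitted weight comes out negative, matching the intuition that higher risk entropy suppresses task switching.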

Keywords: computational cognitive model, driving distraction, microscopic traffic simulation, psychological-physical constraints

Procedia PDF Downloads 62
51275 A Multicriteria Framework for Assessing Energy Audit Software for Low-Income Households

Authors: Charles Amoo, Joshua New, Bill Eckman

Abstract:

Buildings in the United States account for a significant proportion of energy consumption and greenhouse gas (GHG) emissions, and this trend is expected to continue and even rise in the near future. Low-income households, in particular, bear a disproportionate burden of high building energy consumption and spending due to high energy costs. Energy efficiency improvements need to reach an average of 4% per year in this decade in order to meet the global net-zero emissions target by 2050, yet less than 1% of U.S. buildings are improved each year. The government has recognized the importance of technology in addressing this issue, and energy efficiency programs have been developed to tackle the problem. The Weatherization Assistance Program (WAP), the largest residential whole-house energy efficiency program in the U.S., is specifically designed to reduce energy costs for low-income households. Under the WAP, energy auditors must follow specific audit procedures and use Department of Energy (DOE) approved energy audit tools or software. This article proposes an expanded framework of factors that should be considered in energy audit software approved for use in energy efficiency programs, particularly for low-income households. The framework includes more than 50 factors organized under 14 assessment criteria and can be used to qualitatively and quantitatively score different energy audit software to determine their suitability for specific energy efficiency programs. While the framework can be useful for developers building new tools or improving existing software, as well as for energy efficiency program administrators approving or certifying tools for use, the model has limitations, such as a lack of flexibility in continuous scoring to accommodate variability and subjectivity. These limitations can be addressed by using the aggregate scores of each criterion as weights, combined with value-function and direct-rating scores in a multicriteria decision analysis, for more flexible scoring.
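The qualitative-and-quantitative scoring described above can be sketched as a simple weighted sum; the criterion names, weights and 0-1 value-function scale below are illustrative assumptions, not the framework's actual 14 criteria:

```python
def score_tool(criterion_weights, factor_scores):
    """Weighted-sum sketch: each criterion's score is the mean of its
    factor scores (on a 0-1 value-function scale); the overall score is
    the weighted sum over criteria, with weights normalized to sum to 1."""
    total_w = sum(criterion_weights.values())
    overall = 0.0
    for criterion, w in criterion_weights.items():
        scores = factor_scores[criterion]
        overall += (w / total_w) * (sum(scores) / len(scores))
    return overall
```

Running the same function over several candidate tools ranks them by suitability; swapping in different weight sets models different program priorities.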

Keywords: buildings, energy efficiency, energy audit, software

Procedia PDF Downloads 57
51274 Ecosystem Model for Environmental Applications

Authors: Cristina Schreiner, Romeo Ciobanu, Marius Pislaru

Abstract:

This paper aims to build a system based on fuzzy models that can be implemented in the assessment of ecological systems, in order to determine appropriate methods of action for reducing adverse effects on the environment and, implicitly, on the population. The proposed model provides a new perspective for environmental assessment, and it can be used as a practical instrument for decision-making.
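To give a minimal flavor of such a fuzzy assessment model, the sketch below uses triangular membership functions and two illustrative Mamdani-style rules; the variables and rule base are invented for illustration, not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def ecological_risk(pollution, biodiversity):
    """Two rules combined Mamdani-style (min for AND):
    high pollution AND low biodiversity -> high risk (1.0);
    low pollution AND high biodiversity -> low risk (0.0).
    Defuzzified as a firing-strength-weighted average."""
    high_pol = tri(pollution, 0.4, 1.0, 1.6)
    low_pol = tri(pollution, -0.6, 0.0, 0.6)
    low_bio = tri(biodiversity, -0.6, 0.0, 0.6)
    high_bio = tri(biodiversity, 0.4, 1.0, 1.6)
    fire_high = min(high_pol, low_bio)
    fire_low = min(low_pol, high_bio)
    if fire_high + fire_low == 0:
        return 0.5  # no rule fires: neutral assessment
    return (fire_high * 1.0 + fire_low * 0.0) / (fire_high + fire_low)
```

A real model would carry many more linguistic variables and rules, but the inference-and-defuzzification pattern is the same.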

Keywords: ecosystem model, environmental security, fuzzy logic, sustainability of habitable regions

Procedia PDF Downloads 398
51273 Building Information Modelling in Eastern Province Municipality of KSA

Authors: Banan Aljumaiah

Abstract:

In recent years, the construction industry has leveraged the information revolution: with the advent of Building Information Modelling (BIM), it is possible to view the entire construction process of new buildings before they are built. Although BIM integrates the building model with the data and documents about the building, its implementation is limited to individual buildings and misses the larger picture of city infrastructure. This limitation of BIM led to the birth of City Information Modelling (CIM). Three years ago, the Eastern Province Municipality (EPM) in Saudi Arabia mandated that all major projects be delivered with collaborative 3D BIM. After three years of implementation, EPM started to implement CIM as part of its Smart City Plan, linking infrastructure and public services and modelling how people move around and interact with the city. This paper demonstrates a local case study of BIM implementation in EPM and its future as part of project management automation; the paper also highlights EPM's ambitious plan to extend CIM towards building smart cities.

Keywords: BIM, BIM to CIM

Procedia PDF Downloads 114
51272 Study of ANFIS and ARIMA Model for Weather Forecasting

Authors: Bandreddy Anand Babu, Srinivasa Rao Mandadi, C. Pradeep Reddy, N. Ramesh Babu

Abstract:

This paper briefly presents a comparative investigation of Auto-Regressive Integrated Moving Average (ARIMA) and Adaptive Network-Based Fuzzy Inference System (ANFIS) models for weather forecasting. The weather data are taken from the University of Waterloo and comprise Relative Humidity, Ambient Air Temperature, Barometric Pressure and Wind Direction. The work compares the performance of the ARIMA and ANFIS models in terms of the sum of average errors. ANFIS modelling is carried out with the Fuzzy Logic Toolbox in MATLAB, while ARIMA modelling is produced using XLSTAT.

Keywords: ARIMA, ANFIS, fuzzy inference toolbox, weather forecasting, MATLAB

Procedia PDF Downloads 392
51271 Barriers of the Development and Implementation of Health Information Systems in Iran

Authors: Abbas Sheikhtaheri, Nasim Hashemi

Abstract:

Health information systems have great benefits for the clinical and managerial processes of health care organizations. However, identifying and removing the constraints and barriers of implementing and using health information systems before any implementation is essential. Physicians are among the main users of health information systems; therefore, identifying the causes of their resistance and their concerns about the barriers to implementing these systems is very important. The purpose of this study was thus to determine the barriers to the development and implementation of health information systems from the perspective of Iranian physicians. In this study, conducted in 8 selected hospitals affiliated to Tehran and Iran Universities of Medical Sciences, Tehran, Iran in 2014, the physicians (GPs, residents, interns, specialists) in these hospitals were surveyed. Data were collected with a researcher-made questionnaire (Cronbach's α = 0.95). The instrument included 25 items about organizational (9), personal (4), moral and legal (3) and technical (9) barriers. Participants were asked to answer the questions using a 5-point Likert scale (completely disagree = 1 to completely agree = 5). Using a simple random sampling method, 200 physicians (out of 600) were invited to the study, and eventually 163 questionnaires were returned. Mean scores, t-tests and ANOVA were used to analyze the data with SPSS software version 17. 52.1% of respondents were female. The mean age was 30.18 ± 7.29, and most participants had between 1 and 5 years of work experience (80.4%). The most important barriers were organizational ones (3.4 ± 0.89), followed by ethical (3.18 ± 0.98), technical (3.06 ± 0.8) and personal (3.04 ± 1.2) barriers. Lack of easy access to fast Internet (3.67 ± 1.91) and lack of information exchange (3.61 ± 1.2) were the most important technical barriers.
Among organizational barriers, the lack of efficient planning for the development and implementation of the systems (3.56 ± 1.32) was the most important one. Lack of awareness and knowledge among health care providers about the features of health information systems (3.33 ± 1.28) and lack of physician participation in the planning phase (3.27 ± 1.2), as well as concerns regarding the security and confidentiality of health information (3.15 ± 1.31), were the most important personal and ethical barriers, respectively. Women (P = 0.02) and those with less experience (P = 0.002) were more concerned about personal barriers. GPs were also more concerned about technical barriers (P = 0.02). According to the study, technical and ethical barriers were considered among the most important barriers; however, lack of awareness in the target population is also considered one of the main barriers. Ignoring issues such as personal and ethical barriers, even if the necessary infrastructure and technical requirements are provided, may result in failure. Therefore, along with creating the infrastructure and resolving organizational barriers, special attention to the education and awareness of physicians and providing solutions for ethical concerns are necessary.
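The group comparisons of mean Likert scores reported above (e.g. women vs. men on personal barriers) rely on t-tests; a Welch's t statistic for two independent samples with possibly unequal variances can be computed as follows (the sample data in the usage below are invented for illustration):

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples,
    using sample variances with Bessel's correction."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (ma - mb) / se
```

A positive t indicates the first group's mean concern score is higher; a statistics package would then convert t and the Welch-Satterthwaite degrees of freedom into the reported P value.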

Keywords: barriers, development health information systems, implementation, physicians

Procedia PDF Downloads 324
51270 Case-Based Reasoning: A Hybrid Classification Model Improved with an Expert's Knowledge for High-Dimensional Problems

Authors: Bruno Trstenjak, Dzenana Donko

Abstract:

Data mining and the classification of objects, the process of analyzing data using various machine learning techniques, are used today in many fields of research. This paper presents a concept of a hybrid classification model improved with expert knowledge. The hybrid model integrates several machine learning techniques (Information Gain, K-means, and Case-Based Reasoning) and the expert's knowledge into one algorithm. The knowledge of experts is used to determine the importance of features. The paper presents the model algorithm and the results of a case study in which the emphasis was put on achieving maximum classification accuracy without reducing the number of features.
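The Information Gain component used to weight feature importance can be sketched as follows; the toy dataset in the usage is an illustrative assumption:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Reduction in label entropy after splitting on one feature;
    higher gain means the feature is more informative."""
    base = entropy(labels)
    n = len(labels)
    remainder = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        remainder += len(subset) / n * entropy(subset)
    return base - remainder
```

A perfectly predictive binary feature yields a gain of 1 bit on balanced binary labels, while an uninformative feature yields a gain of 0; such scores can then be blended with expert-assigned importances.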

Keywords: case based reasoning, classification, expert's knowledge, hybrid model

Procedia PDF Downloads 351
51269 A Literature Review on the Barriers in Incorporating Universal Design in Public Transportation Projects: Southeast Asian Countries

Authors: Oscar Conrad Pili De Jesus

Abstract:

In consonance with the UN Convention on the Rights of Persons with Disabilities, countries are mandated to provide a barrier-free environment through adherence to universal design and the full participation of persons with disabilities (PWDs) in planning and implementation, but there has been little action in incorporating universal design into the public environment. Travelling freely and independently is paramount for PWDs to participate in the daily activities ahead of them, and it contributes to the advancement of their inclusion in society, with universal design as a catalyst for seamless access and mobility. This study aims to determine the barriers to incorporating the concept of universal design in transportation projects in Southeast Asian countries. Based on a literature review and using the accessible journey chain as a framework, barriers are identified and categorized by the components of public transport: utilization of the transport mode, the built environment within the transport infrastructure, and the first and last miles of travel. Some findings in the study, which point to solutions for creating a barrier-free environment, are identified as information to guide the future research agenda for efficiently incorporating universal design in transportation projects in Southeast Asian countries. The study reflects that the focus of most literature is on the built environment, noting that future studies need to investigate universal design in the context of the public transport component of the accessible journey chain.

Keywords: public transportation, barriers, universal design, persons with disabilities, accessible journey chain

Procedia PDF Downloads 111
51268 The Microstructural and Mechanical Characterization of Organo-Clay-Modified Bitumen, Calcareous Aggregate, and Organo-Clay Blends

Authors: A. Gürses, T. B. Barın, Ç. Doğar

Abstract:

Bitumen, a viscous organic mixture with various chemical compositions, has been widely used as the binder of aggregate in road pavement due to its good viscoelastic properties. Bitumen is a liquid at high temperatures and becomes brittle at low temperatures; this temperature sensitivity can cause rutting and cracking of the pavement and limit its application. Therefore, the properties of existing asphalt materials need to be enhanced. Pavement with polymer-modified bitumen exhibits greater resistance to rutting and thermal cracking, as well as decreased fatigue damage, stripping and temperature susceptibility; however, polymer modifiers are expensive and their application has disadvantages. Bituminous mixtures are composed of very irregular aggregates bound together with hydrocarbon-based asphalt, with a low volume fraction of voids dispersed within the matrix. Montmorillonite (MMT) is a low-cost and abundant layered silicate, which consists of layers of tetrahedral silicate and octahedral hydroxide sheets. Recently, layered silicates have been widely used for the modification of polymers, as well as in many other fields. However, there are currently few studies on the preparation of MMT-modified asphalt. In this study, organo-clay-modified bitumen and calcareous aggregate and organo-clay blends were prepared by a hot blending method with OMMT, which had been synthesized from MMT using a cationic surfactant with a long hydrocarbon chain (cetyltrimethylammonium bromide, CTAB). When the exchangeable cations in the interlayer region of pristine MMT are exchanged with the hydrocarbon-bearing surfactant ions, the MMT becomes organophilic and more compatible with bitumen. The effects of the superhydrophobic OMMT on the microstructural and mechanical properties (Marshall stability and volumetric parameters) of the prepared blends were investigated.
The stability and volumetric parameters of the prepared blends were measured using the Marshall test. In order to investigate the morphological and microstructural properties of the organo-clay-modified bitumen and of the calcareous aggregate and organo-clay blends, their SEM and HRTEM images were also taken. It was observed that the stability and volumetric parameters of the prepared mixtures improved significantly compared to conventional hot mixes and even the stone matrix mixture. A microstructural analysis based on the SEM images indicates that the organo-clay platelets dispersed in the bitumen have a dominant role in the increased effectiveness of bitumen-aggregate interactions.

Keywords: hot mix asphalt, stone matrix asphalt, organo clay, Marshall test, calcareous aggregate, modified bitumen

Procedia PDF Downloads 216
51267 Multi-Stream Graph Attention Network for Recommendation with Knowledge Graph

Authors: Zhifei Hu, Feng Xia

Abstract:

In recent years, graph neural networks have been widely used in knowledge graph based recommendation. Existing recommendation methods based on graph neural networks extract information from the knowledge graph through entities and relations, which may not be an efficient way of extracting information. In order to better surface the entity information in the knowledge graph that is useful for the current recommendation task, we propose an end-to-end neural network model based on a multi-stream graph attention mechanism (MSGAT), which can effectively integrate the knowledge graph into the recommendation system by evaluating the importance of entities from both the user side and the item side. Specifically, we use an attention mechanism from the user's perspective to distill the neighborhood node information of the predicted item in the knowledge graph, enhancing the item-side information and generating the feature representation of the predicted item. Since a user's historical click items reflect the user's interest distribution, we propose a multi-stream attention mechanism that, based on the user's preference for entities and relations and on the similarity between the predicted item and the entities, aggregates the neighborhood entity information of the user's historical click items in the knowledge graph and generates the user's feature representation. We evaluate our model on three real recommendation datasets: MovieLens-1M (ML-1M), LFM-1B 2015 (LFM-1B), and Amazon-Book (AZ-book). Experimental results show that, compared with state-of-the-art models, our proposed model better captures the entity information in the knowledge graph, which demonstrates its validity and accuracy.
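The basic attention aggregation step, scoring neighborhood entities against a user (or item) embedding and combining them, can be sketched as follows; the dot-product scoring and toy embeddings are simplifying assumptions, not MSGAT's exact formulation:

```python
import math

def attention_aggregate(query, neighbors):
    """Score each neighbor entity embedding by its dot product with the
    query embedding, softmax the scores into attention weights, and
    return (weights, attention-weighted sum of neighbor embeddings)."""
    scores = [sum(q * n for q, n in zip(query, nb)) for nb in neighbors]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(query)
    aggregated = [sum(w * nb[d] for w, nb in zip(weights, neighbors))
                  for d in range(dim)]
    return weights, aggregated
```

In a multi-stream setting, several such aggregations (one per stream, e.g. per relation type or per historical click item) would be computed and then combined into the final user or item representation.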

Keywords: graph attention network, knowledge graph, recommendation, information propagation

Procedia PDF Downloads 92
51266 Food and Feeding Habit of Clarias anguillaris in Tagwai Reservoir, Minna, Niger State, Nigeria

Authors: B. U. Ibrahim, A. Okafor

Abstract:

Sixty-two (62) samples of Clarias anguillaris, 29 male and 33 female, were collected from Tagwai Reservoir and used for the study. Body measurements indicated that fish of different sizes were collected. Males, females and combined sexes had mean standard and total lengths of 26.56±4.99 and 31.13±6.43, 27.17±5.21 and 30.62±5.43, and 26.88±5.08 and 30.86±5.88 cm, respectively, and mean weights of 241.10±96.27, 225.75±78.66 and 232.93±86.95 g, respectively. Eight items were eaten as food by Clarias anguillaris in Tagwai Reservoir: fish, insects, plant materials, sand grains, crustaceans, algae, detritus and unidentified items. The frequency of occurrence and numerical methods used in the stomach contents analysis indicated that fish was the highest-ranked item, followed by insects, while algae was the lowest. The frequency of stomach fullness of Clarias anguillaris showed a low percentage of empty stomachs (21.00%) and a high percentage of stomachs with food (79.00%), indicating high food abundance and high feeding intensity during the period of study. Classification of fish based on feeding habits shows that Clarias anguillaris in this study is an omnivore, because it consumed both plant and animal materials.

Keywords: stomach content, feeding habit, Clarias anguillaris, Tagwai Reservoir

Procedia PDF Downloads 573
51265 Research on Knowledge Graph Inference Technology Based on Proximal Policy Optimization

Authors: Yihao Kuang, Bowen Ding

Abstract:

With the increasing scale and complexity of knowledge graphs, modern knowledge graphs contain more and more types of entity, relation, and attribute information. In recent years, it has therefore become a trend for knowledge graph inference to use reinforcement learning to deal with large-scale, incomplete, and noisy knowledge graphs and to improve inference effectiveness and interpretability. The Proximal Policy Optimization (PPO) algorithm allows substantial updates of the policy parameters while constraining the update extent to maintain training stability. This characteristic enables PPO to converge to improved policies more rapidly, often demonstrating enhanced performance early in the training process. Furthermore, PPO can effectively utilize historical experience data for training, enhancing sample utilization; this means that even with limited resources, PPO can train efficiently for reinforcement learning tasks. Based on these characteristics, this paper aims to obtain better and more efficient inference by introducing PPO into knowledge inference technology.
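The constrained-update property that the abstract attributes to PPO comes from its clipped surrogate objective, which can be sketched per sample as follows (a minimal illustration of the standard PPO objective, not the paper's knowledge-inference code):

```python
def clipped_surrogate(ratio, advantage, eps=0.2):
    """PPO per-sample objective: min(r*A, clip(r, 1-eps, 1+eps)*A),
    where r is the new/old policy probability ratio and A the advantage."""
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    return min(ratio * advantage, clipped * advantage)
```

Once the probability ratio moves outside the [1-eps, 1+eps] band, the objective stops rewarding further movement, which is what constrains the update extent and stabilizes training.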

Keywords: reinforcement learning, PPO, knowledge inference, supervised learning

Procedia PDF Downloads 42
51264 Network Based Speed Synchronization Control for Multi-Motor via Consensus Theory

Authors: Liqin Zhang, Liang Yan

Abstract:

This paper addresses the speed synchronization control problem for a network-based multi-motor system from the perspective of cluster consensus theory. Each motor is treated as a single agent connected through a fixed, undirected network. This paper improves the control protocol in three respects. First, to improve both tracking and synchronization performance, a distributed leader-following method is presented. The improved control protocol takes the importance of each motor's speed into consideration, and all motors are divided into groups according to speed weights. Specifically, through control parameter optimization, the synchronization error and tracking error can be regulated and decoupled to some extent. The simulation results demonstrate the effectiveness and superiority of the proposed strategy. Second, in practical engineering, simplified models such as single-integrator and double-integrator dynamics are unrealistic, and previous algorithms require the leader's acceleration to be available to all followers when the leader has a varying velocity, which is also difficult to realize. Therefore, the method focuses on an observer-based variable-structure algorithm for consensus tracking, which dispenses with the leader acceleration. The presented scheme optimizes synchronization performance and provides satisfactory robustness. Third, existing algorithms can obtain a stable synchronous system; however, the obtained system may encounter disturbances that destroy the synchronization. To address this challenging problem, a state-dependent switching approach is introduced. In the presence of unmeasured angular speeds and unknown failures, this paper investigates a distributed fault-tolerant consensus tracking algorithm for a group of non-identical motors.
The failures are modeled by nonlinear functions, and a sliding-mode observer is designed to estimate the angular speeds and the nonlinear failures. The convergence and stability of the given multi-motor system are proved. Simulation results show that all followers asymptotically converge to a consistent state even when one follower fails to follow the virtual leader during a sufficiently large disturbance, which illustrates the good synchronization accuracy of the proposed control.
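The leader-following consensus idea above can be sketched in discrete time for single-integrator agents. This is an illustrative simplification (the abstract itself notes that single-integrator models are unrealistic); the gains, pinning pattern, and step size are assumptions for the sketch, not the paper's observer-based, fault-tolerant protocol.

```python
def consensus_step(speeds, leader_speed, adjacency, gains, dt=0.01, k_leader=1.0):
    """One discrete step of a leader-following speed-consensus protocol:
    each motor adjusts toward its neighbors, and pinned motors (gains[i] > 0)
    also adjust toward the leader's speed."""
    n = len(speeds)
    new = []
    for i in range(n):
        # diffusive coupling with network neighbors
        u = sum(adjacency[i][j] * (speeds[j] - speeds[i]) for j in range(n))
        # pinning term toward the leader (only for pinned motors)
        u += gains[i] * k_leader * (leader_speed - speeds[i])
        new.append(speeds[i] + dt * u)
    return new
```

Iterating this update drives all motor speeds to the leader's speed even though only a subset of motors observes the leader directly, which is the essence of the distributed leader-following method.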

Keywords: consensus control, distributed leader-following, fault-tolerant control, multi-motor system, speed synchronization

Procedia PDF Downloads 106
51263 Earthquake Identification to Predict Tsunami in Andalas Island, Indonesia Using Back Propagation Method and Fuzzy TOPSIS Decision Seconder

Authors: Muhamad Aris Burhanudin, Angga Firmansyas, Bagus Jaya Santosa

Abstract:

Earthquakes are natural hazards that can trigger the most dangerous hazard, the tsunami. On 26 December 2004, a giant earthquake occurred northwest of Andalas Island. It generated a giant tsunami that struck Sumatra, Bangladesh, India, Sri Lanka, Malaysia, and Singapore, killing more than twenty thousand people. The occurrence of earthquakes and tsunamis cannot be avoided, but the hazard can be mitigated by earthquake forecasting; early preparation is the key factor in reducing damage and consequences. We aim to investigate earthquake patterns quantitatively in order to identify the trend, studying earthquakes that occurred around Andalas Island, Indonesia, over the last decade. Andalas is an island of high seismicity, with more than a thousand events occurring per year, because it lies in the tectonic subduction zone between the Indian Ocean plate and the Eurasian plate. A tsunami forecasting method is therefore needed for mitigation action, and such a method is presented in this work. Neural networks have been used widely in research to estimate earthquakes, and it is widely accepted that earthquakes can be predicted using the backpropagation method. First, an artificial neural network (ANN) is trained to predict the tsunami of 26 December 2004 using earthquake data recorded before it; the trained ANN is then applied to predict subsequent earthquakes. Not every earthquake triggers a tsunami; certain earthquake characteristics determine whether one occurs, and a wrong decision can cause further problems in society. We therefore need a method to reduce the possibility of wrong decisions. Fuzzy TOPSIS is a statistical method widely used as a decision seconder with respect to given parameters, and it can make the best decision on whether an earthquake causes a tsunami or not. This work combines earthquake prediction using the neural network method with fuzzy TOPSIS to decide whether the predicted earthquake triggers a tsunami wave.
The neural network model is capable of capturing non-linear relationships, and fuzzy TOPSIS determines the best decision better than other statistical methods in tsunami prediction.
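The TOPSIS ranking step used as the decision seconder can be illustrated with a crisp (non-fuzzy) sketch: alternatives are scored by their relative closeness to the ideal solution. The fuzzy variant in the paper extends this with fuzzy ratings; the matrix, weights, and benefit-criteria assumption here are purely illustrative.

```python
import math

def topsis_rank(matrix, weights):
    """Crisp TOPSIS sketch: score each alternative (row) by closeness to
    the ideal solution; all criteria are treated as benefit criteria."""
    ncol = len(weights)
    # vector-normalize each column, then apply the criterion weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncol)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncol)] for row in matrix]
    # ideal and anti-ideal solutions (column-wise best and worst)
    ideal = [max(col) for col in zip(*v)]
    anti = [min(col) for col in zip(*v)]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ideal)))
        d_neg = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient in [0, 1]
    return scores
```

In the tsunami setting, the "alternatives" would be the tsunami / no-tsunami decisions evaluated on earthquake-characteristic criteria; the highest closeness coefficient wins.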

Keywords: earthquake, fuzzy TOPSIS, neural network, tsunami

Procedia PDF Downloads 468
51262 An Experimental Study on Some Conventional and Hybrid Models of Fuzzy Clustering

Authors: Jeugert Kujtila, Kristi Hoxhalli, Ramazan Dalipi, Erjon Cota, Ardit Murati, Erind Bedalli

Abstract:

Clustering is a versatile instrument in the analysis of collections of data, providing insights into the underlying structures of the dataset and enhancing modeling capabilities. The fuzzy approach to the clustering problem increases flexibility by involving the concept of partial membership (a value in the continuous interval [0, 1]) of the instances in the clusters. Several fuzzy clustering algorithms have been devised, such as FCM, Gustafson-Kessel, Gath-Geva, kernel-based FCM, and PCM. Each of these algorithms has its own advantages and drawbacks, so none of them performs superiorly on all datasets. In this paper, we experimentally compare the FCM, GK, and GG algorithms and a hybrid two-stage fuzzy clustering model combining the FCM and Gath-Geva algorithms. First, we theoretically discuss the advantages and drawbacks of each of these algorithms and describe the hybrid clustering model, which exploits the advantages and diminishes the drawbacks of each algorithm. Second, we experimentally compare the accuracy of the hybrid model by applying it to several benchmark and synthetic datasets.
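The FCM stage of such a hybrid model alternates two updates: memberships from distances, then centers from memberships. A minimal sketch of one iteration on one-dimensional data (the fuzzifier m = 2 and the data are illustrative assumptions, not the authors' implementation):

```python
def fcm_step(data, centers, m=2.0, eps=1e-9):
    """One fuzzy c-means iteration on 1-D data: update the membership
    matrix from distances, then recompute the cluster centers."""
    # membership u[i][k] is proportional to d(x_i, c_k)^(-2/(m-1)),
    # normalized so each instance's memberships sum to 1
    u = []
    for x in data:
        dists = [abs(x - c) + eps for c in centers]  # eps avoids division by zero
        inv = [d ** (-2.0 / (m - 1.0)) for d in dists]
        total = sum(inv)
        u.append([v / total for v in inv])
    # each center is the membership^m weighted mean of the data
    new_centers = []
    for k in range(len(centers)):
        num = sum((u[i][k] ** m) * data[i] for i in range(len(data)))
        den = sum(u[i][k] ** m for i in range(len(data)))
        new_centers.append(num / den)
    return u, new_centers
```

Iterating until the centers stabilize yields the partial memberships described above; Gustafson-Kessel and Gath-Geva differ mainly in replacing the plain distance with cluster-adaptive metrics.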

Keywords: fuzzy clustering, fuzzy c-means algorithm (FCM), Gustafson-Kessel algorithm, hybrid clustering model

Procedia PDF Downloads 490
51261 A Hybrid Expert System for Generating Stock Trading Signals

Authors: Hosein Hamisheh Bahar, Mohammad Hossein Fazel Zarandi, Akbar Esfahanipour

Abstract:

In this paper, a hybrid expert system is developed using fuzzy genetic network programming with reinforcement learning (GNP-RL). In this system, the frame-based structure uses trading rules extracted by GNP from technical indices of the stock prices in the training period. To develop this system, we applied fuzzy node transition and decision making in both the processing and judgment nodes of GNP-RL. Consequently, these methods not only increased the accuracy of node transition and decision making in GNP's nodes but also extended GNP's binary signals to ternary trading signals. In other words, in our proposed fuzzy GNP-RL model, a No Trade signal is added to the conventional Buy and Sell signals. Finally, the obtained rules are used in a frame-based system implemented in the Kappa-PC software. The developed trading system has been used to generate trading signals for ten companies listed on the Tehran Stock Exchange (TSE). The simulation results in the testing period show that the developed system performs more favorably than the buy-and-hold strategy.
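The extension from binary to ternary signals amounts to adding a neutral band between the buy and sell decisions. A minimal sketch of that idea (the [0, 1] indicator score and the thresholds are hypothetical; the paper derives its signals from fuzzy GNP-RL rules, not a fixed threshold):

```python
def ternary_signal(indicator, buy_th=0.6, sell_th=0.4):
    """Map a [0, 1] indicator score to a ternary trading signal:
    scores in the middle band produce No Trade instead of forcing
    a Buy or Sell decision."""
    if indicator >= buy_th:
        return "Buy"
    if indicator <= sell_th:
        return "Sell"
    return "No Trade"
```

The benefit is that weak or ambiguous evidence no longer forces a trade, which is the role the No Trade signal plays in the proposed model.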

Keywords: fuzzy genetic network programming, hybrid expert system, technical trading signal, Tehran stock exchange

Procedia PDF Downloads 312
51260 Response of Broiler Chickens Fed Pelleted or Non-Pelleted Diets, Containing Graded Levels of Raw Full-Fat Soybean

Authors: G. Berhane, F. Kebede

Abstract:

A feeding trial was conducted to enhance the utilization of locally produced full-fat soybean by the broiler industry. The study had three phases: starter (1-14 d), grower (15-28 d), and finisher (29-49 d). A completely randomized design (CRD) was used in the starter phase with three treatments (commercial soybean meal (SBM) replaced by raw full-fat soybean (RFSB) at 0, 10, or 20%), each replicated eight times. A total of 408 unsexed one-day-old Cobb-500 broiler chicks were randomly allocated to the replicates. A 2 x 3 factorial arrangement with six experimental diets was used in both the grower and finisher phases. These six treatments were formed by dividing each of the original three diets (containing 0, 10, or 20% RFSB) into two and then pelleting one from each respective pair while leaving the other as mash. Every treatment had four replications with 17 birds each. The chemical compositions of the feed ingredients were analyzed, and data on the initial body weight of the chicks, feed offered, feed leftover, body weight (BW) of the chickens, and mortality were collected. At the end of the experiment, two birds (one male and one female) per replicate were randomly selected and humanely slaughtered. The weights of the dressed carcass, eviscerated carcass, carcass cut parts, and visceral organs were recorded. Results indicated that feed intake (FI), body weight gain (BWG), BW, and feed conversion ratio (FCR) of the broilers were not significantly affected (P>0.05) by the graded levels of RFSB in the diets at the starter, grower, and finisher phases. The FI at the finisher stage was, however, significantly (P<0.05) influenced by the feed form. The weights of the dressed carcass, eviscerated carcass, carcass cut parts, and visceral organs were not significantly (P>0.05) affected by either RFSB supplementation, up to 20%, or feed form. It is concluded that commercial SBM can be replaced by locally produced RFSB up to 20% without pelleting the diets.

Keywords: broilers, carcass characteristics, raw full-fat soybean, weight gain

Procedia PDF Downloads 111
51259 The Future of Truth and Lies in the Context of Technology-Mediated Environments

Authors: James P. Takona

Abstract:

Most of the global population has never lived through a pandemic, and thus much remains unknown about students' capacity for resiliency under such environments and circumstances, and about what a timeline for full recovery will look like. The session will guide participants to focus on misinformation and disinformation in the context of recent crisis events, with specific reference to how information flows across technology-mediated environments. Particular attention will be given to the flow of information in mediated technologies and platforms, with specific reference to K-12 and teacher preparation program environments. The paper draws on theories and findings from the sociology of disaster, the social psychology of rumoring, and published studies on disinformation and misinformation. Applications will be identified and examined in the context of online information-sharing during crisis events. The session will apply the Centers for Disease Control and Prevention's Crisis and Emergency Risk Communication model to understand the themes and evolution of misinformation and disinformation. The paper will invite session participants to suggest and engage with challenges raised about the impact of dis- and misinformation.

Keywords: sociology of disaster, misinformation, disinformation, social psychology of rumors

Procedia PDF Downloads 67
51258 The Impact of the Knowledge-Sharing Factors on Improving Decision Making at Sultan Qaboos University Libraries

Authors: Aseela Alhinaai, Suliman Abdullah, Adil Albusaidi

Abstract:

Knowledge has been considered an important asset in private and public organizations. In the library sector, it is utilized to run different technical services and administrative operations. Accordingly, the International Federation of Library Associations (IFLA) established a "Knowledge Management" section in December 2003 to provide professionals with a deep understanding of the KM concept, implemented through different programs, workshops, and activities. This study aims to identify the impact of the knowledge-sharing factors (technology, collaboration, and management support) on improving decision-making at Sultan Qaboos University (SQU) Libraries. The study used a quantitative method with a questionnaire instrument to measure the impact of technology, collaboration, and management support on the knowledge sharing that leads to improved decision-making. The study population comprises the SQU libraries (the Main Library, the Medical Library, the College of Economics and Political Science Library, and the Art Library). The results showed that management support, collaboration, and technology use have a positive impact on the knowledge-sharing process, and that knowledge sharing positively affects the decision-making process.

Keywords: knowledge sharing, decision-making, information technology, management support, collaboration, Sultan Qaboos University

Procedia PDF Downloads 50
51257 Decision Making in Medicine and Treatment Strategies

Authors: Kamran Yazdanbakhsh, Somayeh Mahmoudi

Abstract:

Three reasons argue for the use of decision theory in medicine: 1. The growth and complexity of medical knowledge make it difficult to process treatment information effectively without resorting to sophisticated analytical methods, especially when it comes to detecting errors and identifying treatment opportunities in large databases. 2. There is wide geographic variability in medical practice. In a context where medical costs are borne, at least in part, by the patient, these variations raise doubts about the relevance of the choices made by physicians. The differences are generally attributed to differing estimates of the probability of treatment success and differing assessments of the outcomes of success or failure. Without explicit decision criteria, it is difficult to identify precisely the sources of these variations in treatment. 3. Beyond the principle of informed consent, patients need to be involved in decision-making; for this, the decision process should be explained and broken down. A decision problem is to select the best option from a set of choices. The difficulty is what is meant by "best option", that is, what criteria should guide the choice; the purpose of decision theory is to answer this question. The systematic use of decision models allows us to better understand differences in medical practice and facilitates the search for consensus. In this regard, there are three types of situations: certain situations, risky situations, and uncertain situations. 1. In certain situations, the consequence of each decision is certain. 2. In risky situations, every decision can have several consequences, and the probability of each consequence is known. 3. In uncertain situations, each decision can have several consequences, but the probabilities are not known. Our aim in this article is to show how decision theory can usefully be mobilized to meet the needs of physicians.
Decision theory can make decisions more transparent, first by systematically clarifying the data of the problem, and second by stating the few basic principles that should guide the choice. Once the problem is clarified, decision theory provides operational tools to represent the available information and determine patient preferences, and thus to assist the patient and doctor in their choices.
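For risky situations, the standard operational tool is expected utility: each option's probability-weighted utility is computed, and the option with the highest value is preferred. A minimal worked example with invented numbers (the treatments, probabilities, and utilities below are purely hypothetical):

```python
def expected_utility(outcomes):
    """Expected utility of an option given (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Hypothetical comparison: surgery cures with probability 0.9 (utility 1.0)
# but fails with probability 0.1 (utility 0.0); medication reliably yields
# a partial outcome (utility 0.7).
surgery = expected_utility([(0.9, 1.0), (0.1, 0.0)])
medication = expected_utility([(1.0, 0.7)])
```

Under these invented numbers surgery is preferred; the point of the framework is that the probabilities and utilities become explicit quantities that physician and patient can examine and debate.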

Keywords: decision making, medicine, treatment strategies, patient

Procedia PDF Downloads 564
51256 On Tarski’s Type Theorems for L-Fuzzy Isotone and L-Fuzzy Relatively Isotone Maps on L-Complete Propelattices

Authors: František Včelař, Zuzana Pátíková

Abstract:

Recently, a new type of very general relational structure, the so-called (L-)complete propelattice, was introduced. These structures significantly generalize complete lattices and completely lattice L-ordered sets because they do not assume the technically very strong property of transitivity. For these structures, the main part of the original Tarski fixed point theorem also holds for (L-fuzzy) isotone maps, i.e., the part concerning the existence of fixed points and the structure of their set. In this paper, fundamental properties of (L-)complete propelattices are recalled, and the so-called L-fuzzy relatively isotone maps are introduced. For these maps, it is proved that they also have fixed points in L-complete propelattices, even though the set of fixed points need not have the expected analogous structure of a complete propelattice.
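In the classical (transitive) setting that this work generalizes, the fixed point guaranteed by Tarski's theorem for an isotone map on a finite complete lattice can be reached by iterating from the bottom element. A small sketch on the powerset lattice (illustrative of the classical theorem only; the paper's propelattice setting drops transitivity and is not captured by this example):

```python
def least_fixed_point(f, bottom, max_iter=1000):
    """Kleene-style iteration: apply the isotone map f starting from the
    bottom element until the value no longer changes."""
    x = bottom
    for _ in range(max_iter):
        y = f(x)
        if y == x:
            return x
        x = y
    raise RuntimeError("no fixed point reached within max_iter")

# An isotone map on the powerset of {1, 2}, ordered by inclusion:
# always add 1, and add 2 once 1 is present.
f = lambda s: s | {1} | ({2} if 1 in s else set())
```

Isotonicity (s ⊆ t implies f(s) ⊆ f(t)) is what guarantees the iteration climbs monotonically to a fixed point, the same ingredient the L-fuzzy generalizations rely on.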

Keywords: fixed point, L-complete propelattice, L-fuzzy (relatively) isotone map, residuated lattice, transitivity

Procedia PDF Downloads 258
51255 Ulnar Parametacarpal Flap for Coverage of Fifth Finger Defects: Propeller Flap Concept

Authors: Ahmed M. Gad, Ahmed S. Hweidi

Abstract:

Background: Defects of the little finger and adjacent areas are not uncommon; they can be traumatic, post-burn, or the result of contracture release. Different options can be used for resurfacing these defects, including skin grafts and local or regional flaps. The ulnar parametacarpal flap, described by Bakhach in 1995 and based on the distal division of the dorsal branch of the ulnar artery, is considered a good option. In this work, we applied the propeller flap concept for better mobilization and insetting of the ulnar parametacarpal flap. Methods: The study included 15 cases: 4 female and 11 male patients. Ten of the patients had severe post-burn contractures of the little finger, and five had post-traumatic little finger defects. Contractures were released, and the resulting soft-tissue defects were reconstructed with a propeller ulnar parametacarpal artery flap. The flap is based on two main perforators communicating with the palmar system; it was raised on one of them, depending on the extent of the defect, and rotated 180 degrees after judicious dissection of the perforator. Results: Thirteen flaps survived completely; one case developed partial skin loss, which healed with dressing, and another flap was completely lost and covered later by a full-thickness skin graft. Conclusion: The ulnar parametacarpal flap is a reliable option for resurfacing the little finger as well as adjacent areas. Applying the propeller flap concept, based on either the proximal or the distal communicating branch, makes the rotation and insetting of the flap easier.

Keywords: little finger defects, propeller flap, regional hand defects, ulnar parametacarpal flap

Procedia PDF Downloads 163
51254 Determination of Verapamil Hydrochloride in the Tablet and Injection Solution by the Verapamil-Sensitive Electrode and Possibilities of Application in Pharmaceutical Analysis

Authors: Faisal A. Salih, V. V. Egorov

Abstract:

Verapamil is a drug used in medicine as a calcium channel blocker for arrhythmia, angina, and hypertension. In this study, a verapamil-selective electrode was prepared with a membrane of the following composition: PVC (32.8 wt %), o-NPhOE (66.6 wt %), and KTPClPB (0.6 wt %, approximately 0.01 M). An inner solution containing 1 x 10⁻³ M verapamil hydrochloride was introduced, and the electrodes were conditioned overnight in a 1 x 10⁻³ M verapamil hydrochloride solution in 1 x 10⁻³ M orthophosphoric acid. These studies demonstrated that o-NPhOE and KTPClPB are the best plasticizer and ion exchanger, and that both direct potentiometry and potentiometric titration can be used for the determination of verapamil hydrochloride in tablets and injection solutions. Normalized weights of verapamil per tablet of 80.4±0.2, 80.7±0.2, and 81.0±0.4 mg were determined by direct potentiometry and potentiometric titration; weights of verapamil per average tablet weight of 80.4±0.2 and 80.7±0.2 mg were determined for the same set of tablets by the two methods, respectively. The masses of verapamil in the solutions for injection, determined by direct potentiometry for two ampoules from one set, were 5.00±0.015 and 5.004±0.006 mg. In all cases, good reproducibility and excellent correspondence with the declared quantities were observed.
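Direct potentiometry rests on inverting a Nernstian calibration E = E0 + S·log10(c). A sketch of that inversion (the E0 value and the theoretical monovalent-cation slope of 59.16 mV/decade at 25 °C used below are illustrative; in practice both come from calibrating the electrode):

```python
def concentration_from_emf(emf_mV, e0_mV, slope_mV_per_decade=59.16):
    """Invert the Nernstian calibration E = E0 + S*log10(c) to estimate
    the analyte concentration from a measured electrode potential."""
    return 10 ** ((emf_mV - e0_mV) / slope_mV_per_decade)
```

For example, a potential 3 slope-decades below E0 corresponds to a 10⁻³ M solution, matching the conditioning concentration used in the study.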

Keywords: verapamil, potentiometry, ion-selective electrode, lipophilic physiologically active amines

Procedia PDF Downloads 70
51253 Seismic Data Analysis of Intensity, Orientation and Distribution of Fractures in Basement Rocks for Reservoir Characterization

Authors: Mohit Kumar

Abstract:

Natural fractures fall into two broad categories, joints and faults, on the basis of shear movement in the deposited strata. Natural fractures are always strongly related to extensional or non-extensional tectonics, sometimes expressed as microcracks. Geological evidence suggests that both large- and small-scale fractures help in analyzing seismic anisotropy, which contributes essentially to characterizing the petrophysical behavior associated with directional fluid migration. One may ask why basement study is needed at all, since the basement has historically been treated as non-productive and geoscientists had no interest in exploring basement rocks. Basement rock is subjected to high pressure and temperature and tends to be highly fractured because of the tectonic stresses applied to the formation, along with other geological factors such as depositional trend, internal stress of the rock body, rock rheology, and pore fluid and capillary pressure. Sometimes carbonate rocks also play the role of basement, with an igneous body, e.g., basalt, deposited over them; fluid then migrates from the carbonate to the igneous rock due to buoyancy forces and the adequate permeability generated by fracturing. So, in order to analyze the complete petroleum system, fluid migration characterization (FMC) through the fractured media is necessary, including fracture intensity, orientation, and distribution in both the basement rock and the country rock. A good understanding of fractures can thus lead to a correct wellbore trajectory that passes through the potential permeable zone generated under intensified pressure-temperature and tectonic stress conditions.
This paper deals with the analysis of these fracture properties, i.e., intensity, orientation, and distribution, in basement rock. Large-scale fractures can be interpreted on a seismic section; however, small-scale fractures are ambiguous to interpret because fractures in basement rock lie below the seismic wavelength and hence give erroneous results when identified directly. Seismic attribute techniques also help us delineate seismic fractures and subtle changes in the fracture zone, which can be inferred from azimuthal anisotropy in velocity and amplitude and from spectral decomposition. Seismic azimuthal anisotropy derives fracture intensity and orientation from compressional-wave and converted-wave data, based on the variation of amplitude or velocity with azimuth. Still, detailed analysis of a fractured basement requires full isotropic and anisotropic analysis of the fracture matrix and the surrounding rock matrix in order to characterize the spatial variability of basement fractures that supports the migration of fluid from the basement into the overlying rock.
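The amplitude-versus-azimuth analysis mentioned above is commonly modeled as A(phi) = A0 + B·cos(2·(phi - phi0)), where B tracks fracture intensity and phi0 the fracture orientation. A sketch of recovering these parameters from evenly sampled azimuths (an illustrative Fourier-coefficient fit under that evenly-sampled assumption, not the paper's workflow):

```python
import math

def fit_azimuthal_anisotropy(azimuths_deg, amplitudes):
    """Fit A(phi) = A0 + B*cos(2*(phi - phi0)) to amplitude-versus-azimuth
    data, assuming azimuths evenly sample [0, 180); B relates to fracture
    intensity and phi0 to fracture orientation."""
    n = len(azimuths_deg)
    # linearized form: A = A0 + C*cos(2*phi) + S*sin(2*phi)
    a0 = sum(amplitudes) / n
    c = 2.0 / n * sum(a * math.cos(2 * math.radians(p))
                      for p, a in zip(azimuths_deg, amplitudes))
    s = 2.0 / n * sum(a * math.sin(2 * math.radians(p))
                      for p, a in zip(azimuths_deg, amplitudes))
    b = math.hypot(c, s)                              # anisotropy magnitude
    phi0 = math.degrees(0.5 * math.atan2(s, c)) % 180.0  # symmetry axis
    return a0, b, phi0
```

The cos(2phi) periodicity reflects the 180-degree symmetry of a vertically fractured medium, which is why the fitted phi0 is only defined modulo 180 degrees.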

Keywords: basement rock, natural fracture, reservoir characterization, seismic attribute

Procedia PDF Downloads 175