Search results for: decision tree model
19673 A Multi-criteria Decision Method For The Recruitment Of Academic Personnel Based On The Analytical Hierarchy Process And The Delphi Method In A Neutrosophic Environment (Full Text)
Authors: Antonios Paraskevas, Michael Madas
Abstract:
For a university to maintain its international competitiveness in education, it is essential to recruit high-quality academic staff, as they constitute its most valuable asset. This selection plays a significant role in achieving strategic objectives, particularly by emphasizing a firm commitment to an exceptional student experience and to innovative, high-quality teaching and learning practices. In this vein, the appropriate selection of academic staff is a very important factor in the competitiveness, efficiency and reputation of an academic institute. Within this framework, our work presents a comprehensive methodological concept that emphasizes the multi-criteria nature of the problem and how decision makers can utilize our approach in order to arrive at an appropriate judgment. The conceptual framework introduced in this paper is built upon a hybrid neutrosophic method based on the Neutrosophic Analytical Hierarchy Process (N-AHP), which uses the theory of neutrosophic sets and is considered suitable given the significant degree of ambiguity and indeterminacy observed in the decision-making process. To this end, our framework extends the N-AHP by incorporating the Neutrosophic Delphi Method (N-DM). By applying the N-DM, we can take into consideration the importance of each decision-maker and their preferences per evaluation criterion. To the best of our knowledge, the proposed model is the first to apply the Neutrosophic Delphi Method to the selection of academic staff. As a case study, we applied our method to a real problem of academic personnel selection, with the main goal of enhancing the algorithm proposed in previous scholars' work, thereby addressing the inherent ineffectiveness that becomes apparent in traditional multi-criteria decision-making methods when dealing with such situations.
As a further result, we show that our method demonstrates greater applicability and reliability when compared to other decision models.
Keywords: analytical hierarchy process, delphi method, multi-criteria decision making method, neutrosophic set theory, personnel recruitment
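A core step shared by classical AHP and its neutrosophic extension is deriving criterion weights from a pairwise comparison matrix. The following is a minimal, non-neutrosophic sketch of that step using the geometric-mean (logarithmic least squares) approximation; the matrix values and the three criteria are hypothetical and not taken from the paper.

```python
import math

def ahp_weights(M):
    """Approximate AHP priority weights from a pairwise comparison
    matrix M via the geometric mean of each row, normalized to sum to 1."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]  # row geometric means
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical 3-criterion comparison for candidate evaluation:
# teaching vs. research vs. service (Saaty's 1-9 scale).
M = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w = ahp_weights(M)  # weights in the same order as the criteria
```

In a full (neutrosophic) treatment, the crisp matrix entries would be replaced by neutrosophic judgments aggregated across the Delphi panel before this weighting step.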
Procedia PDF Downloads 201
19672 Supply Chain Design: Criteria Considered in Decision Making Process
Authors: Lenka Krsnakova, Petr Jirsak
Abstract:
Prior research on facility location in supply chains has mostly focused on the improvement of mathematical models, since supply chain design has long been an area of operational research that underscores mainly quantitative criteria. Qualitative criteria are still highly neglected within supply chain design research. Due to changing market conditions, facility location in the supply chain has become a multi-criteria decision-making problem rather than a single-criterion decision. Thus, both qualitative and quantitative criteria have to be included in the decision-making process. The aim of this study is to emphasize the importance of qualitative criteria as key parameters of relevant mathematical models. We examine which criteria are taken into consideration when Czech companies decide on their facility locations. A literature review of the criteria used in the facility location decision-making process creates a theoretical background for the study. The data collection was conducted through a questionnaire survey. The questionnaire was sent to manufacturing and business companies of all sizes (small, medium and large enterprises) with representation in the Czech Republic within the following sectors: automotive, toys, clothing, electronics and pharmaceuticals. A comparison is made between the criteria that prevail in the current research and those considered important by companies in the Czech Republic. Despite the number of articles focused on supply chain design, only a minority of them consider qualitative criteria, and they rarely treat supply chain design as a multi-criteria decision-making problem. Preliminary results of the questionnaire survey indicate that companies in the Czech Republic see qualitative criteria and their impact on facility location decisions as crucial.
Qualitative criteria such as company strategy, quality of the working environment or future development expectations are confirmed to be considered by Czech companies. This study confirms that qualitative criteria can significantly influence whether a particular location is or is not the right place for a logistics facility. The research has two major limitations. First, researchers who focus on improving mathematical models mostly do not mention the criteria that enter the model. Second, the Czech supply chain managers selected important criteria from a group of 18 available criteria and assigned them importance weights; this does not necessarily mean that these criteria were taken into consideration when the last facility location was chosen, but rather reflects how they perceive them today. Since the study confirmed the necessity of future research on how qualitative criteria influence the facility location decision-making process, the authors have already started in-depth interviews with participating companies to reveal how the inclusion of qualitative criteria in that process influences company performance.
Keywords: criteria influencing facility location, Czech Republic, facility location decision-making, qualitative criteria
Procedia PDF Downloads 327
19671 Effectiveness of ISSR Technique in Revealing Genetic Diversity of Phaseolus vulgaris L. Representing Various Parts of the World
Authors: Mohamed El-Shikh
Abstract:
Phaseolus vulgaris L. is the world's second most important bean after soybean, used for human food and animal feed. It has generally been linked to reduced risk of cardiovascular disease, diabetes mellitus, obesity, cancer and diseases of the digestive tract. The effectiveness of ISSR in revealing the genetic diversity among 60 common bean accessions representing various germplasms around the world was investigated. In general, the studied Phaseolus vulgaris accessions were divided into two major groups. All of the South American accessions were separated into the second major group; these accessions may have distinct genetic features that set them apart from the rest of the accessions clustered in the first major group. The Asian and European accessions (1-20) seem to be genetically very similar (99%) to each other, as they clustered in the same sub-group. The American and African varieties showed similarities as well and clustered in the same sub-tree group. In contrast, the Asian and American accessions No. 22 and 23 showed a high level of genetic similarity, although these were isolated from different regions. The phylogenetic tree showed that all the Asian accessions (along with Australian accessions No. 59 and 60) were similar, except the Indian and Yemeni accessions No. 9 and 20. Only the Netherlands accession No. 3 was different from the rest of the European accessions. The Moroccan accession No. 52 was genetically different from the rest of the African accessions. The Canadian accession No. 44 seems to be different from the other North American accessions, including those from Guatemala, Mexico and the USA.
Keywords: phylogenetic tree, Phaseolus vulgaris, ISSR technique, genetics
Procedia PDF Downloads 409
19670 Reading Knowledge Development and Its Phases with Generation Z
Authors: Onur Özdemir, M. Erhan Orhan
Abstract:
Knowledge development (KD) is just one of the important phases of knowledge management (KM). KD is the phase in which intelligence is used to see the big picture. In order to understand whether information is important or not, we have to use the intelligence cycle, which includes four main steps: aiming, collecting data, processing and utilizing. KD also needs these steps. To make a precise decision, the decision maker has to be aware of his subordinates' ideas. If the decision maker ignores the ideas of his subordinates or of other participants in the organization, it is not possible for him to reach the target. KD is a way of using wisdom to assemble the puzzle. If the decision maker does not bring the puzzle pieces together, he cannot see the big picture, and this shows its effects on the battlefield. In order to understand the battlefield, the decision maker has to use the intelligence cycle. To convert information into knowledge, KD is the main means of the intelligence cycle. On the other hand, "Generation Z", born after the millennium, are real game changers. They have different attitudes from their elders, and their understanding of life is different: freedom and independence have different meanings to them than to others. Decision makers have to consider these factors and rethink their decisions accordingly. This article tries to explain the relation between KD and Generation Z. KD is the main method of target managing, but if leaders neglect their people, the world will see many more movements like the Arab Spring and other insurgencies.
Keywords: knowledge development, knowledge management, generation Z, intelligence cycle
Procedia PDF Downloads 518
19669 Design and Application of a Model Eliciting Activity with Civil Engineering Students on Binomial Distribution to Solve a Decision Problem Based on Samples Data Involving Aspects of Randomness and Proportionality
Authors: Martha E. Aguiar-Barrera, Humberto Gutierrez-Pulido, Veronica Vargas-Alejo
Abstract:
Identifying and modeling random phenomena is a fundamental cognitive process for understanding and transforming reality. Recognizing situations governed by chance and giving them a scientific interpretation, without being carried away by beliefs or intuitions, is basic training for citizens. Hence the importance of generating teaching-learning processes, supported by technology, that pay attention to model creation rather than only the execution of mathematical calculations. In order to develop students' knowledge of basic probability distributions and decision making, this work reports on a model eliciting activity (MEA). The intention was to apply the Models and Modeling Perspective to design an activity related to civil engineering that would be understandable for students while involving them in its solution. Furthermore, the activity should pose a decision-making challenge based on sample data and consider the use of the computer. The activity was designed following the six design principles for MEAs proposed by Lesh and collaborators: model construction, reality, self-evaluation, model documentation, shareable and reusable, and prototype. The application and refinement of the activity were carried out during three school cycles in the Probability and Statistics class for civil engineering students at the University of Guadalajara. The analysis of the way in which the students sought to solve the activity was made using audio and video recordings, as well as the students' individual and team reports. The information obtained was categorized according to the activity phase (individual or team) and the category of analysis (sample, linearity, probability, distributions, mechanization, and decision-making).
With the results obtained through the MEA, four obstacles to understanding and applying the binomial distribution were identified: first, the students' resistance to moving from the linear to the probabilistic model; second, the difficulty of visualizing (inferring) the behavior of the population through the sample data; third, viewing the sample as an isolated event and not as part of a random process that must be viewed in the context of a probability distribution; and fourth, the difficulty of decision-making with the support of probabilistic calculations. These obstacles have also been identified in the literature on the teaching of probability and statistics. Recognizing these conceptions as obstacles to understanding probability distributions, and that they do not change after a single intervention, allows for the modification of the interventions and of the MEA, so that the students themselves may identify erroneous solutions when carrying out the MEA. The MEA also proved to be democratic, since several students who had had little participation and low grades in the first units improved their participation. Regarding the use of the computer, the RStudio software was useful in several tasks, for example in plotting the probability distributions and exploring different sample sizes. In conclusion, with the models created to solve the MEA, the civil engineering students improved their probabilistic knowledge and their understanding of fundamental concepts such as sample, population, and probability distribution.
Keywords: linear model, models and modeling, probability, randomness, sample
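The kind of sample-based binomial decision the MEA revolves around can be sketched in a few lines. The scenario below (acceptance sampling of concrete test cylinders, with an assumed 5% true failure rate) is an illustrative example of the authors' setting, not the actual activity used in the course.

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def tail_prob(k, n, p):
    """P(X >= k): probability of seeing k or more 'failures' in a
    sample of n items when the true failure rate is p."""
    return sum(binom_pmf(i, n, p) for i in range(k, n + 1))

# Hypothetical decision rule: reject a concrete batch if 3 or more of
# 20 test cylinders fail. How often does this happen when the batch is
# actually acceptable (true failure rate 5%)?
p_reject = tail_prob(3, 20, 0.05)
```

Seeing that the sample count is one draw from a whole distribution, rather than an isolated event, is exactly the third obstacle the study reports.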
Procedia PDF Downloads 119
19668 The Use of Remotely Sensed Data to Model Habitat Selections of Pileated Woodpeckers (Dryocopus pileatus) in Fragmented Landscapes
Authors: Ruijia Hu, Susanna T.Y. Tong
Abstract:
Light detection and ranging (LiDAR) and four-channel red, green, blue, and near-infrared (RGBI) remotely sensed imagery allow an accurate quantification and contiguous measurement of vegetation characteristics and forest structures. This information facilitates the generation of habitat structure variables for forest species distribution modelling. However, applications of remote sensing data, especially the combination of structural and spectral information, to support evidence-based decisions in forest management and conservation practices at the local scale are not widely adopted. In this study, we examined the habitat requirements of the pileated woodpecker (Dryocopus pileatus) (PW) in Hamilton County, Ohio, using ecologically relevant forest structural and vegetation characteristics derived from LiDAR and RGBI data. We hypothesized that the habitat of the PW is shaped by vegetation characteristics that are directly associated with the availability of food, hiding and nesting resources, the spatial arrangement of habitat patches within the home range, and proximity to water sources. We used 186 PW presence or absence locations to model presence and absence with generalized additive models (GAMs) at two scales, representing foraging range and home range size, respectively. The results confirm the PW's preference for tall and large mature stands with structural complexity, typical of late-successional or old-growth forests. Moreover, the crown size of dead trees shows a positive relationship with PW occurrence, indicating the importance of declining living trees or early-stage dead trees within the PW home range. These locations are preferred by the PW for nest cavity excavation, as it attempts to balance ease of excavation with tree security. In addition, we found that the PW can adjust its travel distance to the nearest water resource, suggesting that habitat fragmentation can have certain impacts on the species.
Based on our findings, we recommend that forest managers use different priorities to manage nesting, roosting, and feeding habitats. In particular, when devising forest management and hazard tree removal plans, one needs to consider retaining enough cavity trees within high-quality PW habitat. By mapping PW habitat suitability for the study area, we highlight the importance of riparian corridors in helping the PW adjust to the fragmented urban landscape. Indeed, habitat improvement for the PW in the study area could be achieved by conserving riparian corridors and promoting riparian forest succession along the major rivers of Hamilton County.
Keywords: deadwood detection, generalized additive model, individual tree crown delineation, LiDAR, pileated woodpecker, RGBI aerial imagery, species distribution models
Procedia PDF Downloads 53
19667 Normalized Laplacian Eigenvalues of Graphs
Authors: Shaowei Sun
Abstract:
Let G be a graph with vertex set V(G) = {v_1, v_2, ..., v_n} and edge set E(G). For any vertex v in V(G), let d_v denote the degree of v. The normalized Laplacian matrix of G is the matrix whose non-diagonal (i,j)-th entry is -1/√(d_i d_j) when vertex i is adjacent to vertex j and 0 when they are not adjacent, and whose diagonal (i,i)-th entry is 1 whenever d_i ≠ 0. In this paper, we discuss some bounds on the largest and the second smallest normalized Laplacian eigenvalues of trees and graphs. In particular, we obtain some new bounds on the second smallest normalized Laplacian eigenvalue of a tree T in terms of graph parameters. Moreover, we use Sage to formulate some conjectures on the second largest and the third smallest normalized Laplacian eigenvalues of graphs.
Keywords: graph, normalized Laplacian eigenvalues, normalized Laplacian matrix, tree
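The definition above can be made concrete with a small worked example. The sketch below builds the normalized Laplacian of the path P3 (a tree, whose spectrum is known to be {0, 1, 2}) and checks the standard fact that the vector (√d_1, ..., √d_n) is an eigenvector for eigenvalue 0; the graph and checks are illustrative, not taken from the paper.

```python
import math

def normalized_laplacian(adj):
    """Normalized Laplacian from a 0/1 adjacency matrix: diagonal entries
    are 1 (for non-isolated vertices), off-diagonal entries are
    -1/sqrt(d_i * d_j) when i and j are adjacent, and 0 otherwise."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j and deg[i] > 0:
                L[i][j] = 1.0
            elif adj[i][j]:
                L[i][j] = -1.0 / math.sqrt(deg[i] * deg[j])
    return L

# Path P3: degrees are (1, 2, 1); its spectrum is {0, 1, 2}.
P3 = [[0, 1, 0],
      [1, 0, 1],
      [0, 1, 0]]
L = normalized_laplacian(P3)

# (sqrt(d_1), ..., sqrt(d_n)) is always an eigenvector for eigenvalue 0:
v = [1.0, math.sqrt(2), 1.0]
residual = [sum(L[i][j] * v[j] for j in range(3)) for i in range(3)]
```

Note also that the trace of the normalized Laplacian of a graph with no isolated vertices equals n, so the eigenvalues always sum to n.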
Procedia PDF Downloads 328
19666 A Reinforcement Learning Based Method for Heating, Ventilation, and Air Conditioning Demand Response Optimization Considering Few-Shot Personalized Thermal Comfort
Authors: Xiaohua Zou, Yongxin Su
Abstract:
The reasonable operation of heating, ventilation, and air conditioning (HVAC) is of great significance in improving the security, stability, and economy of power system operation. However, the uncertainty of the operating environment, the variation of thermal comfort across users, and the need for rapid decision-making pose challenges for HVAC demand response (DR) optimization. In this regard, this paper proposes a reinforcement learning-based method for HVAC demand response optimization considering few-shot personalized thermal comfort (PTC). First, an HVAC DR optimization framework based on a few-shot PTC model and deep reinforcement learning (DRL) is designed, in which the output of the few-shot PTC model serves as the input of the DRL agent. Then, a few-shot PTC model that distinguishes between awake and asleep states is established, which has excellent engineering usability. Next, based on the soft actor-critic algorithm, an HVAC DR optimization algorithm considering the user's PTC is designed to deal with uncertainty and make decisions rapidly. Experimental results show that the proposed method can efficiently obtain the user's PTC temperature, reduce energy cost while ensuring the user's PTC, and achieve rapid decision-making under uncertainty.
Keywords: HVAC, few-shot personalized thermal comfort, deep reinforcement learning, demand response
Procedia PDF Downloads 87
19665 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis
Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen
Abstract:
The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicability in medical science. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of the deep learning model. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets, with CNN, LSV-CNN, and SDG-CNN designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four datasets; the best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves disease classification accuracy by a clear margin.
Keywords: convolutional neural network, electronic medical record, feature representation, lexical semantics, semantic decision
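The first step described above, concatenating a term's LSV with its word2vec embedding, is a simple vector operation. The sketch below illustrates it with made-up dimensions and values (a 4-d word embedding and a 3-d one-hot semantic vector); the real model's dimensions and LSV construction are not specified here.

```python
def enrich_embedding(word_vec, lsv):
    """Concatenate a word2vec embedding with the term's Lexical Semantic
    Vector (LSV) to form the enriched representation fed to the CNN.
    All values below are illustrative placeholders."""
    return word_vec + lsv

# Hypothetical 4-d word2vec vector and 3-d LSV for a medical term:
w2v = [0.12, -0.48, 0.33, 0.05]
lsv = [1.0, 0.0, 0.0]  # e.g. a one-hot semantic category such as "symptom"
features = enrich_embedding(w2v, lsv)
```

Each token row of the CNN's input matrix would be such a concatenated vector, so the convolution filters see both distributional and lexical-semantic information.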
Procedia PDF Downloads 126
19664 Free Will and Compatibilism in Decision Theory: A Solution to Newcomb’s Paradox
Authors: Sally Heyeon Hwang
Abstract:
Within decision theory, there are normative principles that dictate how one should act in addition to empirical theories of actual behavior. As a normative guide to one’s actual behavior, evidential or causal decision-theoretic equations allow one to identify outcomes with maximal utility values. The choice that each person makes, however, will, of course, differ according to varying assignments of weight and probability values. Regarding these different choices, it remains a subject of considerable philosophical controversy whether individual subjects have the capacity to exercise free will with respect to the assignment of probabilities, or whether instead the assignment is in some way constrained. A version of this question is given a precise form in Richard Jeffrey’s assumption that free will is necessary for Newcomb’s paradox to count as a decision problem. This paper will argue, against Jeffrey, that decision theory does not require the assumption of libertarian freedom. One of the hallmarks of decision-making is its application across a wide variety of contexts; the implications of a background assumption of free will is similarly varied. One constant across the contexts of decision is that there are always at least two levels of choice for a given agent, depending on the degree of prior constraint. Within the context of Newcomb’s problem, when the predictor is attempting to guess the choice the agent will make, he or she is analyzing the determined aspects of the agent such as past characteristics, experiences, and knowledge. On the other hand, as David Lewis’ backtracking argument concerning the relationship between past and present events brings to light, there are similarly varied ways in which the past can actually be dependent on the present. One implication of this argument is that even in deterministic settings, an agent can have more free will than it may seem. 
This paper will thus argue against the view that a stable background assumption of free will or determinism is necessary in decision theory, arguing instead for a compatibilist decision theory that yields a novel treatment of Newcomb’s problem.
Keywords: decision theory, compatibilism, free will, Newcomb’s problem
Procedia PDF Downloads 322
19663 RAPDAC: Role Centric Attribute Based Policy Driven Access Control Model
Authors: Jamil Ahmed
Abstract:
Access control models aim to decide whether a user should be denied or granted access to the user's requested activity. Various access control models have been established and proposed; the most prominent of these include role-based, attribute-based, and policy-based access control models, as well as the role-centric attribute-based access control model. In this paper, a novel access control model called the "Role-centric Attribute-based Policy-Driven Access Control (RAPDAC) model" is presented. RAPDAC incorporates the concept of "policy" into the role-centric attribute-based access control model. It leverages this concept by precisely combining the evaluation of conditions, attributes, permissions and roles in order to authorize access. This approach allows capturing the access control policy of a real-time application in a well-defined manner. The RAPDAC model allows making access decisions at a much finer granularity, as illustrated by the case study of a real-time library information system.
Keywords: authorization, access control model, role based access control, attribute based access control
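A decision that jointly evaluates roles, permissions, and attribute conditions can be sketched as follows. The policy structure, field names, and the library-system rule below are illustrative assumptions, not RAPDAC's actual specification.

```python
def authorize(user, request, policies):
    """Grant access only if some policy matches the user's role and the
    requested permission, AND every attribute condition in that policy
    evaluates to true. A simplified sketch of combining role, permission,
    and attribute checks in one decision."""
    for pol in policies:
        if (pol["role"] in user["roles"]
                and request["permission"] in pol["permissions"]
                and all(cond(user, request) for cond in pol["conditions"])):
            return True
    return False

# Hypothetical library-system policy: librarians may update catalogue
# records, but only from an on-site terminal.
policies = [{
    "role": "librarian",
    "permissions": {"update_record"},
    "conditions": [lambda u, r: r["location"] == "on_site"],
}]
u = {"roles": {"librarian"}}
ok = authorize(u, {"permission": "update_record", "location": "on_site"}, policies)
denied = authorize(u, {"permission": "update_record", "location": "remote"}, policies)
```

The condition list is what gives the finer granularity the abstract mentions: two users with the same role can receive different decisions depending on request attributes.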
Procedia PDF Downloads 161
19662 Integrating Time-Series and High-Spatial Remote Sensing Data Based on Multilevel Decision Fusion
Authors: Xudong Guan, Ainong Li, Gaohuan Liu, Chong Huang, Wei Zhao
Abstract:
Due to the low spatial resolution of MODIS data, the accuracy of extracting small-area patches in landscapes with a high degree of fragmentation is greatly limited. To this end, this study combines Landsat data, with its higher spatial resolution, and MODIS data, with its higher temporal resolution, for decision-level fusion. Considering the importance of the land heterogeneity factor in the fusion process, it is incorporated as a weighting factor that linearly weights the Landsat classification result and the MODIS classification result. Three levels were used to complete the data fusion process: the pixel level of the MODIS data, the pixel level of the Landsat data, and the object level that connects these two. The multilevel decision fusion scheme was tested at two sites in the lower Mekong basin. A comparison test showed that the classification accuracy was improved, in terms of overall accuracy, over the single-data-source classification results. The method was also compared with the two-level combination results and with a weighted-sum decision rule-based approach. The decision fusion scheme is extensible to other multi-resolution data decision fusion applications.
Keywords: image classification, decision fusion, multi-temporal, remote sensing
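The linear weighting of the two classification results can be sketched in a few lines. The class scores, class names, and the particular form of the heterogeneity weight below are illustrative assumptions; the paper's actual weighting scheme may differ.

```python
def fuse(p_landsat, p_modis, w_het):
    """Decision-level fusion: linearly weight per-class scores from the
    Landsat and MODIS classifications. Here w_het in [0, 1] acts as a
    heterogeneity weight that favours the higher-spatial-resolution
    Landsat result in fragmented landscapes (simplified sketch)."""
    return {c: w_het * p_landsat[c] + (1 - w_het) * p_modis[c]
            for c in p_landsat}

def classify(p_landsat, p_modis, w_het):
    fused = fuse(p_landsat, p_modis, w_het)
    return max(fused, key=fused.get)

# Hypothetical per-class scores for one object in the lower Mekong basin:
landsat = {"forest": 0.7, "cropland": 0.3}
modis = {"forest": 0.4, "cropland": 0.6}
label = classify(landsat, modis, w_het=0.8)  # fragmented area: trust Landsat
```

In a homogeneous area, w_het would be lowered so that the temporally richer MODIS result dominates instead.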
Procedia PDF Downloads 125
19661 Efficacy of Deep Learning for Below-Canopy Reconstruction of Satellite and Aerial Sensing Point Clouds through Fractal Tree Symmetry
Authors: Dhanuj M. Gandikota
Abstract:
Sensor-derived three-dimensional (3D) point clouds of trees are invaluable in remote sensing analysis for the accurate measurement of key structural metrics, bio-inventory values, spatial planning and visualization, and ecological modeling. Machine learning (ML) holds the potential to address the restrictive tradeoffs in cost, spatial coverage, resolution, and information gain that exist among current point cloud sensing methods. Terrestrial laser scanning (TLS) remains the highest-fidelity source of both canopy and below-canopy structural features, but its usage is limited in both coverage and cost, requiring manual deployment to map out large forested areas. While aerial laser scanning (ALS) remains a reliable avenue of active LiDAR remote sensing, it is also cost-restrictive in its deployment methods. Space-borne photogrammetry from high-resolution satellite constellations is an avenue of passive remote sensing with promising viability for the accurate construction of vegetation 3D point clouds; it provides both the lowest comparative cost and the largest spatial coverage across remote sensing methods. However, both space-borne photogrammetry and ALS have technical limitations in capturing valuable below-canopy point cloud data. Looking to minimize these tradeoffs, we explored a class of powerful ML algorithms called deep learning (DL) that shows promise in recent research on 3D point cloud reconstruction and interpolation. Our research details the efficacy of applying these DL techniques to reconstruct accurate below-canopy point clouds from space-borne and aerial remote sensing, through learned patterns of tree species' fractal symmetry properties and the supplementation of locally sourced bio-inventory metrics. From our dataset, consisting of tree point clouds obtained from TLS, we deconstructed the point clouds of each tree into those that would be obtained through ALS and through satellite photogrammetry at varying resolutions.
We fed this ALS/satellite point cloud dataset, along with the simulated local bio-inventory metrics, into the DL point cloud reconstruction architectures to generate the full 3D tree point clouds (the ground truth being the full TLS tree point clouds containing the below-canopy information). Point cloud reconstruction accuracy was validated both through the measurement of error against the original TLS point clouds and through the error in extracting key structural metrics, such as crown base height, diameter above root crown, and leaf/wood volume. The results of this research additionally demonstrate the supplemental performance gain from using minimal locally sourced bio-inventory metric information as an input to ML systems in order to reach specified accuracy thresholds of tree point cloud reconstruction. This research provides insight into methods for the rapid, cost-effective, and accurate construction of below-canopy tree 3D point clouds, as well as the potential of ML and DL to learn complex, unmodeled patterns of fractal tree growth symmetry.
Keywords: deep learning, machine learning, satellite, photogrammetry, aerial laser scanning, terrestrial laser scanning, point cloud, fractal symmetry
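One common way to score a reconstructed cloud against a TLS reference is the symmetric Chamfer distance (mean nearest-neighbour distance in both directions). The sketch below is a generic illustration with toy coordinates; the paper does not state that this particular metric was the one used.

```python
import math

def chamfer(A, B):
    """Symmetric Chamfer distance between two 3D point clouds: the mean
    nearest-neighbour distance from each cloud to the other, summed.
    O(|A| * |B|) brute force; fine for small illustrative clouds."""
    def one_way(X, Y):
        return sum(min(math.dist(p, q) for q in Y) for p in X) / len(X)
    return one_way(A, B) + one_way(B, A)

# Toy example: a "reconstructed" cloud shifted 0.1 m vertically from
# the TLS reference.
tls = [(0, 0, 0), (1, 0, 0), (0, 1, 2)]
recon = [(0, 0, 0.1), (1, 0, 0.1), (0, 1, 2.1)]
err = chamfer(tls, recon)
```

Structural-metric errors (crown base height, diameter above root crown, volume) complement such point-wise distances, since a cloud can be geometrically close yet still distort a derived metric.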
Procedia PDF Downloads 104
19660 The Effect of Career Decision Self Efficacy on Coping with Career Indecision among Young Adults
Authors: Yuliya Lipshits-Braziler
Abstract:
For many young adults, career decision making is a difficult and complex process that may lead to indecision. Indecision is frequently associated with great psychological distress and low levels of well-being. One important resource for dealing with indecision is career decision self-efficacy (CDSE), which refers to people's beliefs about their ability to successfully accomplish the tasks involved in career choice. Drawing on Social Cognitive Theory, it has been hypothesized that CDSE correlates with (a) people's likelihood of engaging in or avoiding career decision-making tasks, (b) the amount of effort put into the decision-making process, (c) people's persistence in decision-making efforts when faced with difficulties, and (d) their eventual success in arriving at career decisions. Based on these assumptions, the present study examines the associations between CDSE and 14 strategies for coping with career indecision among young adults. Using structural equation modeling (SEM), the results showed that CDSE is positively associated with the use of productive coping strategies, such as information-seeking, problem-solving, positive thinking, and self-regulation. In addition, CDSE was negatively associated with nonproductive coping strategies, such as avoidance, isolation, ruminative thinking, and blaming others. Contrary to our expectations, CDSE was not significantly correlated with instrumental help-seeking, while it was negatively correlated with emotional help-seeking. The results of this study can be used to facilitate the development of interventions aimed at reinforcing young adults' career decision-making self-efficacy, which may provide them with a basis for overcoming career indecision more effectively.
Keywords: career decision self-efficacy, career indecision, coping strategies, career counseling
Procedia PDF Downloads 258
19659 Using Genetic Algorithm to Organize Sustainable Urban Landscape in Historical Part of City
Authors: Shahab Mirzaean Mahabadi, Elham Ebrahimi
Abstract:
The urban development process in historical urban contexts has predominantly followed two main approaches: the first is the preservation and conservation of the urban fabric and its value, and the second is urban renewal and redevelopment, the latter generally supported by political and economic aspirations. These two approaches are evidently in conflict. The authors go through the history of urban planning in order to review the historical development of the two approaches. In this article, the various values inherent in the historical fabric of a city are illustrated, with an emphasis on cultural identity and activity. We then seek an optimized plan that simultaneously maximizes economic development and minimizes change in historical-cultural sites. In the proposed model, according to the decision maker's intention and the variety of functions, the selected zone is divided into a number of components. For each component, different alternatives can be assigned, namely renovation, refurbishment, demolition, or change of function. The decision variable in this model is the choice of an alternative for each component, and a set of decisions over all components constitutes a plan. A plan developed in this way can be evaluated from the decision maker's point of view; that is, the interactions between the selected alternatives form a foundation for assessing the urban context in designing a historical-cultural landscape. A genetic algorithm (GA) approach is used to search for the optimal future land use within the historical-cultural landscape for a sustainable, high-growth city.
Keywords: urban sustainability, green city, regeneration, genetic algorithm
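The encoding described above (one alternative per component, a full assignment forming a plan) maps directly onto a simple GA. The sketch below uses invented per-alternative scores and a fitness that trades economic gain against heritage loss; the authors' actual objective function and parameters are not given in the abstract.

```python
import random

ALTS = ["renovate", "refurbish", "demolish", "change_function"]
# Hypothetical per-alternative scores: (economic gain, cultural-value loss)
SCORE = {"renovate": (2, 1), "refurbish": (3, 2),
         "demolish": (5, 9), "change_function": (4, 4)}

def fitness(plan, alpha=1.0):
    """Maximize economic development minus alpha times heritage loss."""
    gain = sum(SCORE[a][0] for a in plan)
    loss = sum(SCORE[a][1] for a in plan)
    return gain - alpha * loss

def evolve(n_components=10, pop_size=30, gens=60, seed=1):
    """Elitist GA: keep the best half, refill via one-point crossover
    with a 10% per-child mutation rate."""
    rng = random.Random(seed)
    pop = [[rng.choice(ALTS) for _ in range(n_components)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_components)   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                 # mutation
                child[rng.randrange(n_components)] = rng.choice(ALTS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()  # best plan found: one alternative per zone component
```

Raising alpha models a decision maker who weights heritage preservation more heavily, shifting the optimum away from demolition and change of function.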
Procedia PDF Downloads 69
19658 Cytotoxic Effect of Neem Seed Extract (Azadirachta indica) in Comparison with Artificial Insecticide Novastar on Haemocytes (THC and DHC) of Musca domestica
Authors: Muhammad Zaheer Awan, Adnan Qadir, Zeeshan Anjum
Abstract:
The housefly, Musca domestica Linnaeus, is ubiquitous and hazardous to humans and livestock in many respects. Musca domestica carries some 100 different pathogens, causing diseases such as typhoid, salmonellosis, bacillary dysentery, tuberculosis, and anthrax, as well as parasitic worms; flies in rural areas usually carry more pathogens. Houseflies feed on liquid or semi-liquid substances, besides solid materials that are softened by saliva. Neem, botanically known as Azadirachta indica, belongs to the family Meliaceae and is a tree indigenous to Pakistan, long revered by Pakistanis and Kashmiris for its medicinal properties. The present study showed that neem seed extract has toxic potential affecting the Total Haemocyte Count (THC) and Differential Haemocyte Count (DHC) of the housefly's blood cells. A significant variation in haemolymph density was observed immediately after application and at 30 and 60 minutes post treatment, in terms of THC and DHC, in comparison with novastar. The study strongly recommends the use of neem seed extract as an insecticide in place of artificial insecticides.
Keywords: neem, Azadirachta indica, Musca domestica, differential haemocyte count (DHC), total haemocyte count (THC), novastar
Procedia PDF Downloads 205
19657 Performance Comparison of Situation-Aware Models for Activating Robot Vacuum Cleaner in a Smart Home
Authors: Seongcheol Kwon, Jeongmin Kim, Kwang Ryel Ryu
Abstract:
We assume an IoT-based smart-home environment in which the on-off status of each electrical appliance, including the room lights, can be recognized in real time by monitoring and analyzing smart meter data. At any moment in such an environment, we can recognize what the household or user is doing by referring to the status data of the appliances. In this paper, we focus on a smart-home service that activates a robot vacuum cleaner at the right time by recognizing the user situation, which requires a situation-aware model that can distinguish situations that allow vacuum cleaning (Yes) from those that do not (No). As candidate models we train a few classifiers, such as naïve Bayes, decision tree, and logistic regression, that map the appliance-status data to the Yes and No situations. Our training and test data are obtained from simulations of user behaviors, in which a sequence of user situations such as cooking, eating, and dish washing is generated, with the status of the relevant appliances changed in accordance with the situation changes. During the simulation, both the situation transitions and the resulting appliance status are determined stochastically. To compare the performance of the aforementioned classifiers, we obtain their learning curves for different types of users through simulations. Our empirical study reveals that naïve Bayes achieves slightly better classification accuracy than the other compared classifiers.
Keywords: situation-awareness, smart home, IoT, machine learning, classifier
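One of the candidate models named in the abstract, naïve Bayes over binary appliance-status vectors, fits in a short stdlib sketch. The appliance list and the labelled training situations below are invented for illustration; the paper's data come from behavior simulations.

```python
import math
from collections import defaultdict

# Minimal Bernoulli naive Bayes over binary appliance-status vectors.
# Appliances and training examples are hypothetical.
APPLIANCES = ["room_light", "stove", "tv", "dishwasher"]

# (status vector, cleaning allowed?) -- e.g. stove on implies cooking -> "No"
TRAIN = [
    ([0, 1, 0, 0], "No"),   # cooking
    ([1, 1, 0, 0], "No"),   # cooking, lights on
    ([1, 0, 1, 0], "No"),   # watching TV
    ([0, 0, 0, 1], "Yes"),  # only dishwasher running
    ([0, 0, 0, 0], "Yes"),  # everything off
    ([1, 0, 0, 1], "Yes"),
]

def fit(train):
    counts = defaultdict(lambda: [1] * len(APPLIANCES))  # Laplace smoothing
    totals = defaultdict(lambda: 2)
    for x, y in train:
        totals[y] += 1
        for i, v in enumerate(x):
            counts[y][i] += v
    priors = {y: totals[y] / sum(totals.values()) for y in totals}
    probs = {y: [c / totals[y] for c in counts[y]] for y in counts}
    return priors, probs

def predict(model, x):
    priors, probs = model
    def log_post(y):
        lp = math.log(priors[y])
        for i, v in enumerate(x):
            p = probs[y][i]
            lp += math.log(p if v else 1 - p)
        return lp
    return max(priors, key=log_post)

model = fit(TRAIN)
print(predict(model, [0, 1, 1, 0]))  # stove and TV on
```

With the stove and TV on, the model predicts "No" (do not clean); with everything off it predicts "Yes", mirroring the Yes/No distinction the abstract describes.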
Procedia PDF Downloads 422
19656 Moderation Role of Effects of Forms of Upward versus Downward Counterfactual Reasoning on Gambling Cognition and Decision of Nigerians
Authors: Larry O. Awo, George N. Duru
Abstract:
There are growing public and mental health concerns over the availability of gambling platforms and shops in Nigeria and the high level of youth involvement in gambling. Early theorizing maintained that gambling involvement is driven by the quest for resource gains. However, evidence shows that the economic model of gambling tends to explain the involvement of the gambling business owners (sport lottery operators: SLOs), as most gamblers lose more than they win. This loss, according to the law of effect, ought to discourage decisions to gamble; yet the quest to recover losses has often initiated and prolonged gambling sessions. It therefore became pertinent to investigate mental contemplations, such as upward versus downward counterfactual reasoning about what "would, should, or could" have been, and the feeling of illusion of control (IOC) over gambling outcomes, as risk or protective factors in gambling decisions. The present study sought to understand the differential contributions and conditional effects of upward versus downward counterfactual reasoning as pathways through which the association between IOC and the gambling decisions of Nigerian youths (N = 120, mean age = 18.05, SD = 3.81) could be explained. The study adopted a randomized group design, and data were obtained by means of a stimulus material (the Gambling Episode; GE) and self-report measures of IOC and gambling decision. A one-way analysis of variance (ANOVA) showed that participants in the upward counterfactual reasoning group (M = 22.08) differed significantly from those in the downward counterfactual reasoning group (M = 17.33) on the decision to gamble [F(1,112) = 23, p < .01].
Moderation analysis with the Hayes PROCESS macro showed that (1) IOC and upward counterfactual reasoning were positively associated with the decision to gamble (B = 14.21, t = 6.10, p < .01 and B = 7.22, t = 2.07, p < .01), (2) upward counterfactual reasoning did not moderate the association between IOC and gambling decision (p > .05), and (3) downward counterfactual reasoning negatively moderated the association between IOC and gambling decision (B = 07, t = 2.18, p < .05), such that the association was strong at a low level of downward counterfactual reasoning but waned at high levels. The implication of these findings is that IOC and upward counterfactual reasoning are risk factors that promote gambling behavior, while downward counterfactual reasoning protects individuals from gambling activities. It is thus concluded that downward counterfactual reasoning strategies should be included in gambling therapy and treatment packages, as they could diminish feelings of IOC, negative feelings about missed positive outcomes, and the urge to gamble.
Keywords: counterfactual reasoning, gambling cognition, gambling decision, Nigeria, youths
Procedia PDF Downloads 109
19655 A Multi-Objective Gate Assignment Model Based on Airport Terminal Configuration
Authors: Seyedmirsajad Mokhtarimousavi, Danial Talebi, Hamidreza Asgari
Abstract:
Assigning aircraft activities to appropriate gates is one of the most challenging issues in airport authorities’ multiple-criteria decision making. The potential financial loss due to imbalances of demand and supply in congested airports, higher occupation rates of gates, and existing restrictions on expanding facilities provide further evidence of the need for optimal supply allocation. Passenger walking distance, towing movements, and extra fuel consumption (from waiting longer to taxi when taxi conflicts happen in the apron area) are the major traditional components of gate assignment problem (GAP) models. In particular, the total cost associated with the gate assignment problem depends heavily on the airport terminal layout. The study herein presents a well-elaborated literature review on the topic, focusing on major concerns and applicable variables and objectives, and proposes a three-objective mathematical model for the gate assignment problem. The model has been tested under different concourse layouts in order to check its performance in different scenarios. Results revealed that the terminal layout pattern is a significant parameter and that the proposed model is capable of dealing with the key constraints and objectives, which supports its practical usability for future decision-making tools. Potential solution techniques are also suggested for future work.
Keywords: airport management, terminal layout, gate assignment problem, mathematical modeling
Procedia PDF Downloads 231
19654 Comparative Analysis of Classification Methods in Determining Non-Active Student Characteristics in Indonesia Open University
Authors: Dewi Juliah Ratnaningsih, Imas Sukaesih Sitanggang
Abstract:
Classification is one of the data mining techniques that aim to discover, from training data, a model that assigns records to the appropriate category or class. Data mining classification methods can be applied in education, for example, to characterize non-active students at Indonesia Open University. This paper presents a comparison of three classification methods: Naïve Bayes, Bagging, and C4.5. The criteria used to evaluate the three methods are stratified cross-validation, the confusion matrix, the area under the ROC curve (AUC), recall, precision, and F-measure. The data are from non-active Indonesia Open University students in the registration periods 2004.1 to 2012.2. The target analysis divides non-active students into 3 groups: C1, C2, and C3. A total of 4173 students were analyzed. Results of the study show: (1) the Bagging method gave a higher classification accuracy than Naïve Bayes and C4.5; (2) the Bagging classification accuracy rate is 82.99%, while those of Naïve Bayes and C4.5 are 80.04% and 82.74%, respectively; (3) the tree resulting from the Bagging method has a large number of nodes, which makes decision making quite difficult; (4) the characteristics of non-active Indonesia Open University students are therefore classified with the C4.5 algorithm; (5) based on the C4.5 algorithm, there are 5 interesting rules that describe the characteristics of non-active Indonesia Open University students.
Keywords: comparative analysis, data mining, classification, Bagging, Naïve Bayes, C4.5, non-active students, Indonesia Open University
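The evaluation criteria named in the abstract (confusion matrix, recall, precision, F-measure) reduce to a few lines of arithmetic. The 3-class confusion matrix below uses made-up counts for the C1-C3 groups, purely to show the computation.

```python
# Per-class precision, recall, and F-measure from a 3-class confusion
# matrix, as used to compare Naive Bayes, Bagging, and C4.5 in the study.
# The matrix counts here are illustrative, not the study's results.
LABELS = ["C1", "C2", "C3"]
# rows = actual class, columns = predicted class
CONFUSION = [
    [50,  5,  5],
    [ 4, 60,  6],
    [ 6,  4, 40],
]

def per_class_metrics(cm):
    metrics = {}
    n = len(cm)
    for i, label in enumerate(LABELS):
        tp = cm[i][i]
        fn = sum(cm[i]) - tp                       # actual i, predicted other
        fp = sum(cm[r][i] for r in range(n)) - tp  # predicted i, actually other
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        metrics[label] = (precision, recall, f1)
    return metrics

def accuracy(cm):
    return sum(cm[i][i] for i in range(len(cm))) / sum(map(sum, cm))

m = per_class_metrics(CONFUSION)
print(m["C1"], round(accuracy(CONFUSION), 4))
```

Overall accuracy here is the diagonal sum over the total (150/180); the per-class figures are what the paper's stratified cross-validation would average across folds.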
Procedia PDF Downloads 316
19653 Investigation of Extreme Gradient Boosting Model Prediction of Soil Strain-Shear Modulus
Authors: Ehsan Mehryaar, Reza Bushehri
Abstract:
One of the principal parameters defining the dynamic response of clay soil is the strain-shear modulus relation. Predicting the strain and, subsequently, the shear modulus reduction of the soil is essential for performance analysis of structures exposed to earthquake and dynamic loadings. Many soil properties affect a soil’s dynamic behavior. In order to capture those effects, in this study a database of 1193 data points, consisting of maximum shear modulus, strain, moisture content, initial void ratio, plastic limit, liquid limit, and initial confining pressure from dynamic laboratory testing of 21 clays, is collected for predicting the shear modulus vs. strain curve of soil. A model based on the extreme gradient boosting technique is proposed. A Tree-structured Parzen Estimator hyper-parameter tuning algorithm is utilized simultaneously to find the best hyper-parameters for the model. The performance of the model is compared to existing empirical equations using the coefficient of correlation and root mean square error.
Keywords: XGBoost, hyper-parameter tuning, soil shear modulus, dynamic response
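The study tunes XGBoost with a Tree-structured Parzen Estimator (typically via a library such as hyperopt or optuna). Those libraries are not reproduced here; the stdlib sketch below shows only the shape of the tuning loop, with plain random search standing in for TPE and a synthetic validation-error function standing in for model training. Search-space bounds and the objective are assumptions.

```python
import random

# Random-search stand-in for TPE hyper-parameter tuning of a boosting model.
# The search space and the bowl-shaped "validation error" are invented.
SEARCH_SPACE = {
    "learning_rate": (0.01, 0.3),
    "max_depth": (2, 10),
    "subsample": (0.5, 1.0),
}

def validation_error(params):
    """Stand-in for 'train XGBoost, return validation RMSE'.
    Synthetic optimum at lr=0.1, depth=6, subsample=0.8."""
    return ((params["learning_rate"] - 0.1) ** 2
            + 0.01 * (params["max_depth"] - 6) ** 2
            + (params["subsample"] - 0.8) ** 2)

def random_search(n_trials=200, seed=0):
    rng = random.Random(seed)
    best_params, best_err = None, float("inf")
    for _ in range(n_trials):
        params = {
            "learning_rate": rng.uniform(*SEARCH_SPACE["learning_rate"]),
            "max_depth": rng.randint(*SEARCH_SPACE["max_depth"]),
            "subsample": rng.uniform(*SEARCH_SPACE["subsample"]),
        }
        err = validation_error(params)
        if err < best_err:
            best_params, best_err = params, err
    return best_params, best_err

params, err = random_search()
print(params, err)
```

TPE improves on this loop by fitting density models to the good and bad trials seen so far and sampling candidates where the ratio favors good trials; the interface (propose parameters, evaluate, keep the best) is the same.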
Procedia PDF Downloads 203
19652 Sustainable Wood Harvesting from Juniperus procera Trees Managed under a Participatory Forest Management Scheme in Ethiopia
Authors: Mindaye Teshome, Evaldo Muñoz Braz, Carlos M. M. Eleto Torres, Patricia Mattos
Abstract:
Sustainable forest management planning requires up-to-date information on the structure, standing volume, biomass, and growth rate of trees in a given forest. This kind of information is lacking for many forests in Ethiopia. The objective of this study was to quantify the population structure, diameter growth rate, and standing volume of wood of Juniperus procera trees in the Chilimo forest. A total of 163 sample plots were set up in the forest to collect the relevant vegetation data. Growth ring measurements were conducted on stem disc samples collected from 12 J. procera trees, and diameter and height measurements were recorded for a total of 1399 individual trees with dbh ≥ 2 cm. The growth rate, maximum current and mean annual increments, minimum logging diameter (MLD), and cutting cycle were estimated, and alternative cutting cycles were established. Using these data, the harvestable volume of wood was projected by combining four minimum logging diameters with five cutting cycles, following the stand table projection method. The results show that J. procera trees have an average density of 183 stems ha⁻¹, a total basal area of 12.1 m² ha⁻¹, and a standing volume of 98.9 m³ ha⁻¹. The mean annual diameter growth ranges between 0.50 and 0.65 cm year⁻¹, with an overall mean of 0.59 cm year⁻¹. The population of J. procera trees follows a reverse J-shaped diameter distribution pattern. The maximum current annual increment in volume (CAI) occurred at around 49 years, when trees reached 30 cm in diameter, and the maximum mean annual increment in volume (MAI) at around 91 years, with a diameter of 50 cm. The simulation analysis revealed that a 40 cm MLD and a 15-year cutting cycle are the best combination, showing the largest harvestable volume of wood, the largest volume increments, and a 35% recovery of the initially harvested volume.
It is concluded that the forest is well stocked and holds a large harvestable volume of wood from J. procera trees, which will enable the country to partly meet national wood demand through domestic production. The use of the current population structure together with diameter growth data from tree ring analysis enables accurate prediction of the harvestable volume of wood. The developed model gives an indication of the productivity of the J. procera population and enables policymakers to develop specific management criteria for wood harvesting.
Keywords: logging, growth model, cutting cycle, minimum logging diameter
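The core of the stand-table projection can be illustrated in a toy form: advance a reverse-J diameter distribution by the mean ring-measured increment (0.59 cm yr⁻¹) over one 15-year cutting cycle and count the stems that pass the 40 cm minimum logging diameter. The class midpoints and stem counts below are invented, not the Chilimo inventory data, and a real projection would also model recruitment and mortality.

```python
# Toy stand-table projection using the study's mean increment, cycle, and MLD.
# Diameter classes and stems/ha are illustrative only.
GROWTH_CM_PER_YEAR = 0.59
CUTTING_CYCLE_YEARS = 15
MLD_CM = 40

# (diameter-class midpoint in cm, stems per ha) -- reverse-J shape
stand_table = [(10, 80), (20, 45), (30, 30), (35, 22), (45, 12)]

def project(stand, years):
    """Shift every diameter class by the mean annual increment."""
    growth = GROWTH_CM_PER_YEAR * years
    return [(d + growth, n) for d, n in stand]

def harvestable_stems(stand, mld=MLD_CM):
    """Stems at or above the minimum logging diameter."""
    return sum(n for d, n in stand if d >= mld)

before = harvestable_stems(stand_table)
after = harvestable_stems(project(stand_table, CUTTING_CYCLE_YEARS))
print(before, after)
```

Over the 15-year cycle each class grows 8.85 cm, so the 35 cm class crosses the 40 cm threshold and the harvestable pool grows, which is the recruitment effect the projection method quantifies.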
Procedia PDF Downloads 89
19651 Building Biodiversity Conservation Plans Robust to Human Land Use Uncertainty
Authors: Yingxiao Ye, Christopher Doehring, Angelos Georghiou, Hugh Robinson, Phebe Vayanos
Abstract:
Human development is a threat to biodiversity, and conservation organizations (COs) purchase land to protect areas for biodiversity preservation. However, COs have limited budgets and thus face hard prioritization decisions that are confounded by uncertainty in future human land use. This research proposes a data-driven sequential planning model to help COs choose land parcels that minimize the uncertain human impact on biodiversity. The proposed model is robust to uncertain development, and the sequential decision-making process is adaptive, allowing land purchase decisions to adapt to human land use as it unfolds. A cellular automata model is leveraged to simulate land use development based on climate data, land characteristics, and the development threat index from the NASA Socioeconomic Data and Applications Center; this simulation is used to model uncertainty in the problem. The research leverages state-of-the-art techniques from the robust optimization literature to propose a computationally tractable reformulation of the model, which can be solved routinely by off-the-shelf solvers like Gurobi or CPLEX. Numerical results based on real data on the jaguar in Central and South America show that the proposed method reduces conservation loss by 19.46% on average compared with standard approaches such as MARXAN used in practice for biodiversity conservation. Our method may better guide the decision process in land acquisition and thereby allow conservation organizations to maximize the impact of limited resources.
Keywords: data-driven robust optimization, biodiversity conservation, uncertainty simulation, adaptive sequential planning
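The robust idea behind the model, choosing parcels under a budget so that the *worst-case* loss across development scenarios is minimized, can be shown on a toy instance. Parcel costs and per-scenario losses below are invented; the paper's actual model is a tractable robust-optimization reformulation solved with Gurobi/CPLEX, not the brute-force enumeration used here.

```python
from itertools import combinations

# Min-max parcel selection: minimize the worst-case biodiversity loss over
# simulated development scenarios, subject to a purchase budget.
# All numbers are hypothetical.
PARCELS = ["A", "B", "C", "D", "E"]
COST = {"A": 3, "B": 2, "C": 4, "D": 1, "E": 2}
# loss each parcel suffers per development scenario if left unprotected
LOSS = {
    "A": [5, 1, 2],
    "B": [2, 4, 1],
    "C": [1, 2, 6],
    "D": [2, 2, 2],
    "E": [3, 3, 1],
}
BUDGET = 6
N_SCENARIOS = 3

def worst_case_loss(protected):
    losses = []
    for s in range(N_SCENARIOS):
        losses.append(sum(LOSS[p][s] for p in PARCELS if p not in protected))
    return max(losses)  # adversarial (worst) scenario

def robust_selection():
    best, best_loss = set(), worst_case_loss(set())
    for k in range(1, len(PARCELS) + 1):
        for combo in combinations(PARCELS, k):
            if sum(COST[p] for p in combo) <= BUDGET:
                loss = worst_case_loss(set(combo))
                if loss < best_loss:
                    best, best_loss = set(combo), loss
    return best, best_loss

sel, loss = robust_selection()
print(sorted(sel), loss)
```

Note that the robust choice hedges across scenarios rather than optimizing for the expected one; that is the distinction from deterministic prioritization tools such as MARXAN that the abstract draws.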
Procedia PDF Downloads 211
19650 Ecotourism Development in Ikogosi Warmspring, Nigeria: Implications on Its Floristic Composition and Structure
Authors: Oluwatobi Emmanuel Olaniyi, Babafemi George Ogunjemite
Abstract:
The high rate of infrastructural development at Ikogosi warm spring, aimed at harnessing its great ecotourism potential, calls for serious concern, as more forest areas are being opened up for public access and the landscape is modified. On this note, we investigated the implications of ecotourism development for the floristic composition and forest structure of Ikogosi. The study aimed to identify the past and present status of infrastructural development and to assess and compare the floristic composition and structure of the built-up/recreational areas and the undisturbed forested areas, in order to infer the impact of ecotourism development on the site. We conducted stakeholder interviews and field observations to identify the past and present status of infrastructural development, respectively. A total of ten quadrats were employed in the vegetation assessment to characterize the woody tree species composition, diameter at breast height, and height, yielding mean indices characterizing each part of the site. These indices were compared using t-test analysis. A total of 49 woody tree species distributed across 21 families were identified in the built-up/recreational areas, while 67 woody tree species belonging to 25 families were recorded in the undeveloped forested areas. Although the latter had a higher mean diameter at breast height, the difference was not significant (t = -0.74, p = 0.46). On the contrary, the built-up areas had a higher mean tree height than the undeveloped areas, but the difference was not statistically significant (t = 1.04, p = 0.30). Despite this, the slight reduction in richness and diversity of woody tree species in the built-up/recreational areas implies a need to mitigate the negative effects of infrastructural development on the warm spring's vegetation.
Keywords: ecosystem services, forest structure, vegetation assessment, warm-spring
Procedia PDF Downloads 511
19649 Personality as a Determinant of Career Decision-Making Difficulties in a Higher Educational Institution in Ghana
Authors: Gladys Maame Akua Setordzie
Abstract:
Deciding on one’s future career is said to have both beneficial and detrimental effects on one’s later mental health and social and economic standing, making it an important developmental problem for young people. In this light, the study’s overarching goal was to assess how different personality traits act as determinants of the career decision-making difficulties experienced by university students in Ghana. Specifically, with a view to shaping the future of individualized career counselling support, the study investigated whether the “Big Five” personality traits influenced the difficulties students at the University of Ghana encounter while making career decisions. A cross-sectional survey design with a stratified random sampling technique was used to sample 494 undergraduate students from the University of Ghana, who completed the Big Five Questionnaire and the Career Decision-making Difficulties Questionnaire. Hierarchical multiple regression analyses indicated that neuroticism, conscientiousness, and openness accounted for a significant proportion of the variance in career decision-making difficulties. This study provides empirical evidence for the idea that neuroticism is not necessarily a negative factor in career decision-making, as has been suggested in previous studies, but rather can help students perform better in career decision-making. These results suggest that personality traits play a significant role in the career decision-making process of University of Ghana students. A better understanding of how different personal and interpersonal factors affect career indecision in students could therefore help career counsellors develop more focused vocational and career guidance interventions.
Keywords: career decision-making difficulties, dysfunctional career beliefs, personality traits, young people
Procedia PDF Downloads 103
19648 Infodemic Detection on Social Media with a Multi-Dimensional Deep Learning Framework
Authors: Raymond Xu, Cindy Jingru Wang
Abstract:
Social media has become a globally connected and influential platform. Social media data, such as tweets, can help predict the spread of pandemics and provide individuals and healthcare providers with early warnings. Public psychological reactions and opinions on the progression of dominant topics on Twitter can be efficiently monitored by AI models. However, statistics show that as the coronavirus spreads, so does an infodemic of misinformation, driven by pandemic-related factors such as unemployment and lockdowns. Social media algorithms are often biased toward outrage, promoting content that people react to emotionally and are likely to engage with; this can influence users’ attitudes and cause confusion. Social media is therefore a double-edged sword, and combating fake news and biased content has become an essential task. This research analyzes the variety of methods used for fake news detection, covering random forest, logistic regression, support vector machines, decision tree, naïve Bayes, BoW, TF-IDF, LDA, CNN, RNN, LSTM, DeepFake, and hierarchical attention networks, and analyzes the performance of each. Based on these models’ achievements and limitations, a multi-dimensional AI framework is proposed to achieve higher accuracy in infodemic detection, especially of pandemic-related news. The model is trained on contextual content, images, and news metadata.
Keywords: artificial intelligence, fake news detection, infodemic detection, image recognition, sentiment analysis
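One of the feature schemes surveyed in the abstract, TF-IDF, fits in a few lines of stdlib Python. The three toy "tweets" are invented; a real pipeline would feed these weights into a downstream classifier such as logistic regression or an SVM.

```python
import math
from collections import Counter

# Plain TF-IDF: term frequency within a document, scaled down by how many
# documents contain the term. Toy corpus for illustration only.
DOCS = [
    "vaccine rollout begins in the city",
    "miracle cure claims spread on social media",
    "city officials deny miracle cure claims",
]

def tf_idf(docs):
    tokenized = [d.split() for d in docs]
    n = len(tokenized)
    # document frequency: in how many docs does each term appear
    df = Counter(t for doc in tokenized for t in set(doc))
    weights = []
    for doc in tokenized:
        tf = Counter(doc)
        weights.append({t: (tf[t] / len(doc)) * math.log(n / df[t])
                        for t in tf})
    return weights

w = tf_idf(DOCS)
print(round(w[1]["miracle"], 4))
```

Terms shared across documents ("city", "miracle") are down-weighted relative to terms unique to one document ("vaccine"), which is what makes the representation useful for separating topical content.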
Procedia PDF Downloads 259
19647 Supply Chain Decarbonisation – A Cost-Based Decision Support Model in Slow Steaming Maritime Operations
Authors: Eugene Y. C. Wong, Henry Y. K. Lau, Mardjuki Raman
Abstract:
CO2 emissions from maritime transport operations represent a substantial part of total greenhouse gas emissions, even as vessels are designed with ever better energy efficiency. Minimizing CO2 emissions in maritime operations therefore plays an important role in supply chain decarbonisation. This paper reviews slow steaming initiatives aimed at reducing carbon emissions and investigates the relationships among slow steaming cost reduction, carbon emission reduction, and shipment delay. A scenario-based, cost-driven decision support model is developed to facilitate the selection of the optimal slow steaming option, considering the cost of bunker fuel consumption, available speed, carbon emissions, and shipment delay. The incorporation of the social cost of cargo is reviewed and suggested. Additional measures concerning the effect of vessel sizes, routing, and type of fuel on decarbonisation are discussed.
Keywords: slow steaming, carbon emission, maritime logistics, sustainability, green supply chain
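The speed trade-off behind such a decision model can be sketched numerically: daily bunker consumption is commonly approximated as cubic in speed, so slowing down cuts fuel (and CO2) per voyage but lengthens the voyage and adds delay cost. All numbers below (baseline consumption, prices, delay penalty) are illustrative assumptions, not figures from the paper.

```python
# Grid-search the voyage speed that minimizes fuel cost plus delay cost,
# under the common cube-law approximation for daily fuel consumption.
# All parameter values are hypothetical.
DISTANCE_NM = 8000
BASE_SPEED_KN = 24.0          # design speed
BASE_FUEL_TPD = 150.0         # tonnes/day at design speed
FUEL_PRICE = 600.0            # $/tonne
DELAY_COST_PER_DAY = 25000.0  # inventory + social cost of delayed cargo

def voyage_cost(speed_kn):
    days = DISTANCE_NM / (speed_kn * 24)
    fuel_per_day = BASE_FUEL_TPD * (speed_kn / BASE_SPEED_KN) ** 3  # cube law
    fuel_cost = fuel_per_day * days * FUEL_PRICE
    delay_cost = days * DELAY_COST_PER_DAY
    return fuel_cost + delay_cost

def best_speed(lo=12.0, hi=24.0, step=0.1):
    speeds = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    return min(speeds, key=voyage_cost)

v = best_speed()
print(round(v, 1), round(voyage_cost(v)))
```

Because fuel cost grows roughly with the square of speed per voyage while delay cost shrinks with it, the optimum sits well below design speed here; raising the delay penalty (e.g. a higher social cost of cargo) pushes the optimal speed back up, which is exactly the sensitivity the decision support model explores.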
Procedia PDF Downloads 458
19646 Effect of Peg-6000-induced Drought Stress on the Germination of Moringa Stenopetala Seeds
Authors: Khater Nadia, Garah Kenza
Abstract:
Moringa stenopetala is a rapidly growing, underappreciated tree regarded as a "miracle tree" for its food, feed, and medicinal benefits, and it appears to be a versatile and promising species for use under changing conditions. To evaluate the effect of water stress on the germination of M. stenopetala seeds, three concentrations of PEG-6000 (4, 8, and 12 per cent) along with a control were tested in a factorial experiment based on a completely randomized design with five replications. The results revealed that water potential significantly reduced the germination rate (82.5%) and the average germination time. Germination speed in T3 (93%), germination kinetics in T2 (39), germination index in T2 (102), and germination vigor index in T2 (91.25) increased with the osmotic potential of the PEG solution. These findings can improve the chances of successful germination of M. stenopetala seeds under water stress conditions.
Keywords: Moringa stenopetala, PEG, water stress, rate
Procedia PDF Downloads 13
19645 Variation of Carbon Isotope Ratio (δ13C) and Leaf-Productivity Traits in Aquilaria Species (Thymelaeceae)
Authors: Arlene López-Sampson, Tony Page, Betsy Jackes
Abstract:
The genus Aquilaria produces a highly valuable fragrant oleoresin known as agarwood. Agarwood forms in a few trees in the wild as a response to injury or pathogen attack, and the resin is used in the perfume and incense industries and in medicine. Cultivation of Aquilaria species as a sustainable source of the resin is now a common strategy. Physiological traits are frequently used as a proxy for crop and tree productivity. Aquilaria species growing in Queensland, Australia were studied to investigate relationships between leaf-productivity traits and tree growth. Specifically, 28 trees, representing 12 plus trees and 16 trees from yield plots, were selected for carbon isotope analysis (δ13C) and the monitoring of six leaf attributes. Trees were grouped into four diametric classes (diameter at 150 mm above ground level), ensuring that the variability in growth of the whole population was sampled. A model averaging technique based on Akaike's information criterion (AIC) was applied to identify whether leaf traits could assist in predicting diameter. Carbon isotope values were correlated with height classes and leaf traits to determine any relationship. On average, four leaves per shoot were recorded, and a shoot produces approximately one new leaf per week. The rate of leaf expansion was estimated at 1.45 mm day⁻¹. There were no statistical differences between diametric classes in leaf expansion rate or number of new leaves per week (p > 0.05). δ13C values in leaves of Aquilaria species ranged from -25.5 ‰ to -31 ‰, with an average of -28.4 ‰ (± 1.5 ‰). Only 39% of the variability in height can be explained by leaf δ13C. Leaf δ13C and nitrogen content values were positively correlated; this relationship implies that leaves with higher photosynthetic capacities also had lower intercellular carbon dioxide concentrations (ci/ca) and less depleted 13C values. Most of the predictor variables have a weak correlation with diameter (D).
However, analysis of the 95% confidence set of best-ranked regression models indicated that the predictors most likely to explain growth in Aquilaria species are petiole length (PeLen), δ13C (true13C), δ15N (true15N), leaf area (LA), specific leaf area (SLA), and the number of new leaves produced per week (NL.week). The model constructed with PeLen, true13C, true15N, LA, SLA, and NL.week could explain 45% (R² = 0.4573) of the variability in D. The leaf traits studied give a better understanding of the leaf attributes that could assist in the selection of high-productivity Aquilaria trees.
Keywords: 13C, petiole length, specific leaf area, tree growth
Procedia PDF Downloads 512
19644 Factors Influencing University Student's Acceptance of New Technology
Authors: Fatma Khadra
Abstract:
The objective of this research is to identify the acceptance of new technology in a sample of 150 participants from Qatar University. Based on the Technology Acceptance Model (TAM), we used Davis's (1989) scale, which contains two item scales, for Perceived Usefulness and Perceived Ease of Use. The TAM represents an important theoretical contribution toward understanding how users come to accept and use technology. The model suggests that when people are presented with a new technology, a number of variables influence their decision about how and when they will use it. The results showed that participants more readily accept technology they perceive as flexible, clear, experience-enhancing, enjoyable, easy to use, and useful. The results also showed that younger participants accept new technology more readily than others.
Keywords: new technology, perceived usefulness, perceived ease of use, technology acceptance model
Procedia PDF Downloads 322