Search results for: multidimensional hierarchical graph neuron
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1371

1071 Application of the Tripartite Model to the Link between Non-Suicidal Self-Injury and Suicidal Risk

Authors: Ashley Wei-Ting Wang, Wen-Yau Hsu

Abstract:

Objectives: The current study applies and expands the Tripartite Model to elaborate the link between non-suicidal self-injury (NSSI) and suicidal behavior. We propose a structural model of NSSI and suicidal risk in which negative affect (NA) predicts both anxiety and depression, positive affect (PA) predicts depression only, anxiety is linked to NSSI, and depression is linked to suicidal risk. Method: Four hundred and eighty-seven undergraduates participated. Data were collected by administering self-report questionnaires. We performed hierarchical regression and structural equation modeling to test the proposed structural model. Results: The results largely support the proposed structural model, with one exception: anxiety was strongly associated with NSSI and to a lesser extent with suicidal risk. Conclusions: We conclude that the co-occurrence of NSSI and suicidal risk is due to NA and anxiety, and that suicidal risk can be differentiated by depression. Further theoretical and practical implications are discussed.

Keywords: non-suicidal self-injury, suicidal risk, anxiety, depression, the tripartite model, hierarchical relationship

Procedia PDF Downloads 444
1070 Computerized Adaptive Testing for Ipsative Tests with Multidimensional Pairwise-Comparison Items

Authors: Wen-Chung Wang, Xue-Lan Qiu

Abstract:

Ipsative tests have been widely used in vocational and career counseling (e.g., the Jackson Vocational Interest Survey). Pairwise-comparison items are a typical item format of ipsative tests. When the two statements in a pairwise-comparison item measure two different constructs, the item is referred to as a multidimensional pairwise-comparison (MPC) item. A typical MPC item would be: Which activity do you prefer? (A) playing with young children, or (B) working with tools and machines. These two statements target the constructs of social interest and investigative interest, respectively. Recently, new item response theory (IRT) models for ipsative tests with MPC items have been developed. Among them, the Rasch ipsative model (RIM) deserves special attention because it has good measurement properties: the log-odds of preferring statement A to statement B are defined as a competition between two parts, the sum of the person's latent trait measured by statement A and statement A's utility, versus the sum of the person's latent trait measured by statement B and statement B's utility. The RIM has been extended to polytomous responses, such as preferring statement A strongly, preferring statement A, preferring statement B, and preferring statement B strongly. To promote these new initiatives, in this study we developed computerized adaptive testing algorithms for MPC items and evaluated their performance using simulations and two real tests. Both the RIM and its polytomous extension are multidimensional, which calls for multidimensional computerized adaptive testing (MCAT). A particular issue in MCAT for MPC items is within-person statement exposure (WPSE); that is, a respondent may keep seeing the same statement (e.g., my life is empty) many times, which is certainly annoying. In this study, we implemented two methods to control the WPSE rate.
In the first control method, items were frozen when their statements had been administered more than a prespecified number of times. In the second control method, a random component was added to control the contribution of the information at different stages of MCAT. The second control method was found to outperform the first in our simulation studies. In addition, we investigated four item selection methods: (a) random selection (as a baseline), (b) maximum Fisher information without WPSE control, (c) maximum Fisher information with the first control method, and (d) maximum Fisher information with the second control method. These four methods were applied to two real tests: one was a work survey with dichotomous MPC items, and the other was a career interests survey with polytomous MPC items. There were three dependent variables: the bias and root mean square error across person measures, and measurement efficiency, defined as the number of items needed to achieve the same degree of test reliability. Both applications indicated that the proposed MCAT algorithms were successful, there was no loss in measurement efficiency when the control methods were implemented, and among the four methods, the last performed best.
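As a rough illustration of the RIM response function and the information criterion behind the selection methods above, here is a minimal sketch; the trait and utility values are illustrative placeholders, not the authors' calibrated parameters:

```python
import math

def rim_prob_prefer_a(theta_a, util_a, theta_b, util_b):
    """RIM choice probability: the log-odds of preferring statement A
    to statement B are a competition between two sums,
    (theta_a + util_a) versus (theta_b + util_b)."""
    logit = (theta_a + util_a) - (theta_b + util_b)
    return 1.0 / (1.0 + math.exp(-logit))

def item_information(p):
    """Fisher information of a dichotomous item at choice probability p,
    the quantity maximized by the information-based selection methods."""
    return p * (1.0 - p)

# Equal sums -> indifference (p = 0.5), which is also where a
# dichotomous item's information peaks.
p = rim_prob_prefer_a(theta_a=0.5, util_a=0.0, theta_b=0.0, util_b=0.5)
```

An adaptive algorithm would evaluate `item_information` for every eligible item at the current trait estimates and administer the most informative one, subject to the WPSE controls described above.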

Keywords: computerized adaptive testing, ipsative tests, item response theory, pairwise comparison

Procedia PDF Downloads 229
1069 DeepNIC: A Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs

Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.

Abstract:

Introduction: Deep Learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Would DL then become the universal tool for data classification? Current solutions consist in repositioning the variables in a 2D matrix using their correlation proximity, which yields a single image whose pixels are the variables. We implement a technology, DeepNIC, that instead produces an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary and atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision tree, it is possible to perform an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in 3 dimensions: performance, complexity, and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 hyperparameters used in the Neurops. By varying these 2 hyperparameters, we obtain a 2D matrix of probabilities for each NIC. We can combine these 10 NICs with the functions AND, OR, and XOR. The total number of combinations is greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels.
The intensity of each pixel is proportional to the probability of the associated NIC, and its color depends on the associated NIC. This image contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the GSE22513 public data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison across several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.
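The probability-to-pixel step can be sketched as follows; the NIC function, hyperparameter grids, and values here are illustrative placeholders, not the actual Nguyen Information Criteria:

```python
def nic_to_grey(p):
    """Map a NIC probability in [0, 1] to an 8-bit grey level,
    with pixel intensity proportional to the probability."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("a NIC must be a probability")
    return round(p * 255)

def nic_image(nic_fn, alphas, betas):
    """Build a grey-level matrix by evaluating one NIC over a grid of
    the two hyperparameters (alphas x betas), one pixel per pair."""
    return [[nic_to_grey(nic_fn(a, b)) for b in betas] for a in alphas]

# Hypothetical NIC: just the product of the two hyperparameters.
img = nic_image(lambda a, b: a * b, [0.0, 0.5, 1.0], [0.0, 1.0])
```

In the real method, the per-variable image is assembled from the 10 NICs and their AND/OR/XOR combinations, giving the large 1166x1167-pixel images described above.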

Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification

Procedia PDF Downloads 83
1068 Investigating Physician-Induced Demand among Mental Patients in East Azerbaijan, Iran: A Multilevel Approach of Hierarchical Linear Modeling

Authors: Hossein Panahi, Firouz Fallahi, Sima Nasibparast

Abstract:

Background & Aim: Unnecessary growth in the health expenditures of developing countries in recent decades, together with the importance of physicians' behavior in the health market, has made the theory of physician-induced demand (PID) one of the most important issues in health economics. Therefore, the main objective of this study is to investigate the hypothesis of induced demand among mental patients who receive services from either psychologists or psychiatrists in East Azerbaijan province. Methods: Using data from questionnaires in 2020 and employing the theoretical model of Jaegher and Jegers (2000) and hierarchical linear modeling (HLM), this study examines the PID hypothesis for selected psychologists and psychiatrists. The sample of the study, after removing questionnaires with missing data, consists of 45 psychologists and 203 of their patients, as well as 30 psychiatrists and 160 of their patients. Results: The results show that, although psychiatrists are 'profit-oriented physicians', there is no evidence of them inducing unnecessary demand (PID), and the difference between the behavior of employer and employee doctors is due to differences in practice style. With regard to psychologists, however, the results indicate that they are 'profit-oriented' and that there is a PID effect in this sector. Conclusion: According to the results, it is suggested that in order to reduce competition and eliminate the PID effect, the admission of students to the field of psychology should be reduced, patient information on mental illness should be increased, and government monitoring and control over the national health system should be increased.

Keywords: physician-induced demand, national health system, hierarchical linear modeling methods, multilevel model

Procedia PDF Downloads 116
1067 Hierarchical Control Structure to Control the Power Distribution System Components in Building Systems

Authors: Hamed Sarbazy, Zohre Gholipour Haftkhani, Ali Safari, Pejman Hosseiniun

Abstract:

Scientific and industrial progress over the past two decades has made power electronics an enabling technology for energy distribution systems in various industries and in building management systems. Grading and standardization of power electronics modules, and their use in a distributed control system, is a strategy for overcoming the limitations of such systems. The purpose of this paper is to investigate strategies for the scheduling and control structure of standard power electronics modules. The paper introduces classical control methods and discusses their disadvantages, and then explains hierarchical control as a mechanism for the distributed control structure of the classified modules. The different levels of control and the communication between these levels are fully introduced. Standardization of the distribution system control software is also discussed. Finally, as an example, the control structure is presented for a DC distribution system.

Keywords: application management, hardware management, power electronics, building blocks

Procedia PDF Downloads 492
1066 Research on Dynamic Practical Byzantine Fault Tolerance Consensus Algorithm

Authors: Cao Xiaopeng, Shi Linkai

Abstract:

The Practical Byzantine Fault Tolerance (PBFT) algorithm does not support adding nodes dynamically, which limits its practical application. In order to add nodes dynamically, a Dynamic Practical Byzantine Fault Tolerance algorithm (DPBFT) is proposed. First, a new node sends request information to the other nodes in the network, and the nodes in the network verify its identity and request. Then the nodes in the network connect back to the new node and send it the block information of the current network, and the new node updates its information. Finally, the new node participates in the next round of consensus, the view is changed, and the master node is selected. This paper abstracts the decisions of the nodes into an undirected connected graph, and the final consistency of the graph is used to prove that the proposed algorithm can adapt to the network dynamically. Compared with the PBFT algorithm, DPBFT has better fault tolerance and lower network bandwidth requirements.
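The graph abstraction used in the consistency argument can be illustrated with a minimal connectivity check; the node names and edges below are hypothetical, and the real protocol's decision logic is of course richer than this sketch:

```python
from collections import deque

def is_connected(nodes, edges):
    """Check that the undirected graph of node decisions forms a single
    connected component, i.e. every node is reachable from every other.
    In the DPBFT argument, final consistency corresponds to this graph
    being connected after the new node has joined."""
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen = {next(iter(nodes))}
    queue = deque(seen)
    while queue:                       # breadth-first search
        u = queue.popleft()
        for v in adj[u] - seen:
            seen.add(v)
            queue.append(v)
    return seen == set(nodes)

# New node 'n4' joins by connecting back to an existing replica.
ok = is_connected({"n1", "n2", "n3", "n4"},
                  [("n1", "n2"), ("n2", "n3"), ("n3", "n4")])
```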

Keywords: practical byzantine, fault tolerance, blockchain, consensus algorithm, consistency analysis

Procedia PDF Downloads 107
1065 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings

Authors: Jude K. Safo

Abstract:

Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text, and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we observe in Large Language Models. Notable attempts such as TransE, TransR, and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited, in scope, to next node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC, and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures, and we demonstrate the performance this model achieves on the WN18 benchmark. The model does not rely on Large Language Models (LLM), though the applications are certainly relevant there as well.
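For context on the embedding baselines mentioned above, TransE scores a (head, relation, tail) triple by how well the relation vector translates the head embedding onto the tail embedding. A minimal sketch with toy two-dimensional embeddings (real systems train hundreds of dimensions on WN18/FB15K):

```python
def transe_score(head, relation, tail):
    """TransE plausibility score: the negative L2 distance ||h + r - t||.
    A triple is plausible when the relation vector (approximately)
    translates the head embedding onto the tail embedding; 0 is the
    best possible score."""
    dist_sq = sum((h + r - t) ** 2 for h, r, t in zip(head, relation, tail))
    return -dist_sq ** 0.5

# A perfect translation scores 0; mismatched triples score lower.
h, r, t = [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]
best = transe_score(h, r, t)
```

Link prediction then ranks all candidate tails by this score, which is the "next node/link prediction" limitation the abstract refers to.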

Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics

Procedia PDF Downloads 47
1064 Handling Complexity of a Complex System Design: Paradigm, Formalism and Transformations

Authors: Hycham Aboutaleb, Bruno Monsuez

Abstract:

Current systems' complexity has reached a degree that requires addressing conception and design issues while taking into account environmental, operational, social, legal, and financial aspects. Therefore, one of the main challenges is the way complex systems are specified and designed. The exponentially growing effort, cost, and time investment of complex systems in the modeling phase emphasize the need for a paradigm, a framework, and an environment to handle model complexity. For that, it is necessary to understand the expectations of the human user of the model and his limits. This paper presents a generic framework for designing complex systems, highlights the requirements a system model needs to fulfill to meet human user expectations, and suggests a graph-based formalism for modeling complex systems. Finally, a set of transformations is defined to handle the model complexity.

Keywords: higraph-based, formalism, system engineering paradigm, modeling requirements, graph-based transformations

Procedia PDF Downloads 374
1063 Ranking the Elements of Relationship Market Orientation in Banks (Case Study: Saderat Bank of Iran)

Authors: Sahar Jami, Iman Valizadeh

Abstract:

Today, banks should not only seek new customers but also consider the maintenance and retention of existing ones, establishing stable relationships with them. In this regard, relationship marketing seeks to create, maintain, and promote relationships between customers and other stakeholders so that all involved parties benefit, which is possible only through interactive transactions and the fulfillment of promises. Given the importance of relationship marketing in banks, creating the conditions for relationship marketing is of high importance. Therefore, the present study aims at exploring the conditions for relationship marketing in Saderat Bank of Iran, and at prioritizing its variables using hierarchical analysis (AHP). A questionnaire was designed in this research for the paired comparison of relationship marketing elements. After distributing this questionnaire among the statistical population, 20 experts of Saderat Bank, data analysis was performed with the Expert Choice software.

Keywords: relationship marketing, relationship market orientation, Saderat Bank of Iran, hierarchical analysis

Procedia PDF Downloads 387
1062 An Exploratory Sequential Design: A Mixed Methods Model for the Statistics Learning Assessment with a Bayesian Network Representation

Authors: Zhidong Zhang

Abstract:

This study established a mixed methods model for assessing statistics learning with Bayesian network models. There are three variants of exploratory sequential designs. One of the designs has three linked steps: qualitative data collection and analysis; quantitative measure, instrument, or intervention development; and quantitative data collection and analysis. The study used a scoring model of analysis of variance (ANOVA) as the content domain. The research study examines students' learning in both semantic and performance aspects at a fine-grained level. The ANOVA score model, y = α + βx1 + γx2 + ε, served as a cognitive task to collect data during the student learning process. When the learning processes were decomposed into multiple steps in both semantic and performance aspects, a hierarchical Bayesian network was established. This is a theory-driven process. The hierarchical structure was obtained through qualitative cognitive analysis. The data from students' learning of the ANOVA score model were used as evidence for the hierarchical Bayesian network model via the evidential variables. Finally, the assessment results of students' learning of the ANOVA score model were reported. Briefly, this was a mixed methods research design applied to statistics learning assessment. Mixed methods designs open more possibilities for researchers to establish advanced quantitative models initially grounded in a theory-driven qualitative mode.
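The evidentiary step, in which an observed student response updates belief in an unobserved skill node, can be sketched as a single Bayesian update; the probabilities below are illustrative placeholders, not the study's calibrated network parameters:

```python
def posterior_mastery(prior, p_correct_if_mastery, p_correct_if_none,
                      observed_correct):
    """One-step Bayesian update for a binary skill node given one
    evidential variable (a student's response), as in a hierarchical
    Bayesian network where observed performance updates the belief
    that the student has mastered a step of the ANOVA score model."""
    if observed_correct:
        like_m, like_n = p_correct_if_mastery, p_correct_if_none
    else:
        like_m, like_n = 1 - p_correct_if_mastery, 1 - p_correct_if_none
    joint_m = prior * like_m          # P(mastery, evidence)
    joint_n = (1 - prior) * like_n    # P(no mastery, evidence)
    return joint_m / (joint_m + joint_n)

# A correct response raises belief in mastery from 0.5 to about 0.82.
p = posterior_mastery(0.5, 0.9, 0.2, observed_correct=True)
```

A full hierarchical network chains many such updates through the semantic and performance layers; dedicated libraries would normally handle the propagation.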

Keywords: exploratory sequential design, ANOVA score model, Bayesian network model, mixed methods research design, cognitive analysis

Procedia PDF Downloads 137
1061 Integrating High-Performance Transport Modes into Transport Networks: A Multidimensional Impact Analysis

Authors: Sarah Pfoser, Lisa-Maria Putz, Thomas Berger

Abstract:

In the EU, the transport sector accounts for roughly one fourth of total greenhouse gas emissions; in fact, it is one of the main contributors of greenhouse gas emissions. Climate protection targets aim to reduce the negative effects of greenhouse gas emissions (e.g., climate change, global warming) worldwide. Achieving a modal shift to foster environmentally friendly modes of transport such as rail and inland waterways is an important strategy for fulfilling the climate protection targets. The present paper goes beyond these conventional transport modes and reflects upon currently emerging high-performance transport modes that have the potential to complement future transport systems in an efficient way. It will be defined which properties describe high-performance transport modes, which types of technology are included, and what their potential is to contribute to a sustainable future transport network. The first step of this paper is to compile state-of-the-art information about high-performance transport modes to find out which technologies are currently emerging. A multidimensional impact analysis will be conducted afterwards to evaluate which of the technologies is most promising. This analysis will be performed from a spatial, social, economic, and environmental perspective. Frequently used instruments such as cost-benefit analysis and SWOT analysis will be applied for the multidimensional assessment. The estimations for the analysis will be derived from desktop research and discussions in an interdisciplinary team of researchers. For the purpose of this work, high-performance transport modes are characterized as transport modes with very fast and very high-throughput connections that could act as an efficient extension to the existing transport network. The recently proposed hyperloop system represents a potential high-performance transport mode which might be an innovative supplement to current transport networks.
The idea of hyperloops is that persons and freight are shipped in a tube at more than airline speed. Another innovative technology is the use of drones for freight transport. Amazon is already testing drones for its parcel shipments, aiming for delivery times of 30 minutes. Drones can therefore be considered high-performance transport modes as well. The Trans-European Transport Networks programme (TEN-T) addresses the expansion of transport grids in Europe and also includes high-speed rail connections to better connect important European cities. These services should increase the competitiveness of rail and are intended to replace aviation, which is known to be a polluting transport mode. In this sense, the integration of high-performance transport modes as described above facilitates the objectives of the TEN-T programme. The results of the multidimensional impact analysis will reveal the potential future effects of integrating high-performance modes into transport networks. Building on that, a recommendation can be given on the following (research) steps necessary to ensure the most efficient implementation and integration processes.

Keywords: drones, future transport networks, high performance transport modes, hyperloops, impact analysis

Procedia PDF Downloads 305
1060 Hierarchical Piecewise Linear Representation of Time Series Data

Authors: Vineetha Bettaiah, Heggere S. Ranganath

Abstract:

This paper presents a Hierarchical Piecewise Linear Approximation (HPLA) for the representation of time series data, in which the time series is treated as a curve in the time-amplitude image space. The curve is partitioned into segments by choosing perceptually important points as break points. Each segment between adjacent break points is recursively partitioned into two segments at the best point or midpoint until the error between the approximating line and the original curve becomes less than a pre-specified threshold. The HPLA representation achieves dimensionality reduction while preserving prominent local features and the general shape of the time series. The representation permits coarse-to-fine processing at different levels of detail, allows flexible definition of similarity based on mathematical measures or general time series shape, and supports time series data mining operations including query by content, clustering, and classification based on whole or subsequence similarity.
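The recursive split can be sketched as follows, splitting each segment at the point of maximum vertical deviation from the approximating line until the error threshold is met; the exact "best point" criterion and the perceptually-important-point selection in HPLA may differ from this simplification:

```python
def max_error_point(points, i, j):
    """Index and size of the largest vertical deviation from the line
    joining points[i] and points[j] (a candidate split point)."""
    (x1, y1), (x2, y2) = points[i], points[j]
    best_k, best_err = None, -1.0
    for k in range(i + 1, j):
        x, y = points[k]
        y_line = y1 + (y2 - y1) * (x - x1) / (x2 - x1)  # line value at x
        err = abs(y - y_line)
        if err > best_err:
            best_k, best_err = k, err
    return best_k, best_err

def hpla_breakpoints(points, i, j, threshold, out):
    """Recursively split segment [i, j] at its worst-fitting point until
    the maximum error falls below the threshold; break point indices
    are collected in `out`, yielding a hierarchy of approximations."""
    if j - i < 2:
        return
    k, err = max_error_point(points, i, j)
    if err <= threshold:
        return
    out.append(k)
    hpla_breakpoints(points, i, k, threshold, out)
    hpla_breakpoints(points, k, j, threshold, out)

pts = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]  # a zig-zag toy series
bps = []
hpla_breakpoints(pts, 0, len(pts) - 1, 0.1, bps)
```

Keeping only the break points (plus the endpoints) gives the dimensionality-reduced representation; recursion depth gives the coarse-to-fine levels of detail.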

Keywords: data mining, dimensionality reduction, piecewise linear representation, time series representation

Procedia PDF Downloads 250
1059 Application of a Multidimensional Model for Evaluating Organisational Performance in Moroccan Sport Clubs

Authors: Zineb Jibraili, Said Ouhadi, Jorge Arana

Abstract:

Introduction: Organizational performance is recognized by some theorists as a one-dimensional concept, and by others as multidimensional. This concept, which is already difficult to apply in traditional companies, is even harder to identify, measure, and manage in voluntary organizations, essentially because of the complexity of that form of organization: sport clubs are characterized by multiple goals and multiple constituencies. Indeed, the new culture of professionalization and modernization around organizational performance has created new pressures from the state, sponsors, members, and other stakeholders, which have required these sport organizations to become more performance oriented, or to build their capacity to better manage their organizational performance. The evaluation of performance can be made by evaluating the input (e.g., available resources), throughput (e.g., processing of the input), and output (e.g., goals achieved) of the organization. In non-profit organizations (NPOs), questions of performance have become increasingly important in the world of practice. To our knowledge, most studies have used the same methods to evaluate performance in non-profit sport organizations (NPSOs), but no recent study has proposed a club-specific model. Based on a review of the studies that specifically addressed the organizational performance (and effectiveness) of NPSOs at the operational level, the present paper aims to provide a multidimensional framework to understand, analyse, and measure the organizational performance of sport clubs. This paper combines all the dimensions found in the literature and selects those best suited to the model that we develop for the case of Moroccan sport clubs. Method: We apply our unified model of evaluating organizational performance, which takes into account the limitations found in the literature.
For this purpose, we use a qualitative study on a sample of Moroccan sport clubs (football, basketball, handball, and volleyball). The sample comprises data from clubs participating in the first division of the professional football league over the period from 2011 to 2016. Each club had to meet specific criteria in order to be included in the sample: 1. Each club must have full financial data published in its annual financial statements, audited by an independent chartered accountant. 2. Each club must have sufficient data regarding its sport and financial performance. 3. Each club must have participated at least once in the first division of the professional football league. Results: The study showed that the dimensions that constitute the model exist in the field, with some small modifications, and that the correlations between the different dimensions are positive. Discussion: The aim of this study was to test, for the Moroccan case, the unified model that emerged from earlier and narrower approaches. Using the input-throughput-output model as a sketch of efficiency, it was possible to identify and define five dimensions of organizational effectiveness applied to this field of study.

Keywords: organisational performance, multidimensional model, evaluation of organizational performance, sport clubs

Procedia PDF Downloads 288
1058 Network Connectivity Knowledge Graph Using D-Wave Quantum Hybrid Solvers

Authors: Nivedha Rajaram

Abstract:

Hybrid quantum solvers have recently been given prime focus in industrial problem-solving applications. D-Wave quantum computers are one such paragon of systems built on the quantum annealing mechanism. The Discrete Quadratic Model (DQM) is a hybrid quantum computing model class supplied by the D-Wave Ocean SDK, a real-time software platform for hybrid quantum solvers. These hybrid quantum computing models can be employed to solve classic problems. One such problem that we consider in this paper is finding a network connectivity knowledge hub in a huge network of systems. Using this quantum solver, we try to find the prime system hub, which acts as the supreme connection point for the set of connected computers in a large network. This paper establishes an innovative approach to generating a connectivity system hub plot for a set of systems using D-Wave Ocean SDK hybrid quantum solvers.
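As a classical stand-in for the hub-finding objective (the paper itself encodes the problem as a Discrete Quadratic Model for the hybrid solver, which is not reproduced here), the hub can be read off as the system with the most direct connections; the edge list below is hypothetical:

```python
def connectivity_hub(edges):
    """Classical sketch of the hub search: pick the system with the
    maximum degree, i.e. the most direct connections in the network.
    The paper instead formulates this objective as a Discrete
    Quadratic Model solved by D-Wave's hybrid solver."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    return max(degree, key=degree.get)

# Hypothetical network: system "a" touches three of the four links.
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c")]
hub = connectivity_hub(edges)
```

The quantum formulation becomes attractive when the objective is richer than plain degree (e.g., weighted or multi-hop connectivity) and the network is very large.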

Keywords: quantum computing, hybrid quantum solver, D-Wave annealing, network knowledge graph

Procedia PDF Downloads 92
1057 Applying Hybrid Graph Drawing and Clustering Methods on Stock Investment Analysis

Authors: Mouataz Zreika, Maria Estela Varua

Abstract:

Stock investment decisions are often made based on current events in the global economy and the analysis of historical data. Visual representation could assist investors in gaining a deeper understanding of and better insight into stock market trends more efficiently. The trend analysis is based on long-term data collection. The study adopts a hybrid method that combines a clustering algorithm and a force-directed algorithm to overcome the scalability problem of visualizing large data sets. This method exposes the potential relationships between stocks and determines the degree of strength and connectivity of each link, providing investors with another view of the stock relationships for reference. Information derived from the visualization will also help them make an informed decision. The results of the experiments show that the proposed method is able to produce aesthetically pleasing visualizations that provide clearer views of connectivity and edge weights.
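One common way to build such a stock graph before layout and clustering is to link stocks whose historical returns correlate strongly, with the correlation as the edge weight; the return series below are toy data, and the paper's exact edge definition may differ:

```python
def corr(xs, ys):
    """Pearson correlation of two equal-length return series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def stock_graph(returns, threshold=0.5):
    """Weighted edges between stocks whose return correlation exceeds
    the threshold; the weight captures the degree of strength of the
    link. A force-directed layout of this graph would then pull
    strongly connected stocks close together."""
    names = sorted(returns)
    edges = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            w = corr(returns[a], returns[b])
            if w > threshold:
                edges.append((a, b, w))
    return edges

data = {"AAA": [1, 2, 3, 4], "BBB": [2, 4, 6, 8], "CCC": [4, 3, 2, 1]}
edges = stock_graph(data)  # only the AAA-BBB pair is strongly correlated
```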

Keywords: clustering, force-directed, graph drawing, stock investment analysis

Procedia PDF Downloads 279
1056 Patients’ Trust in Health Care Systems

Authors: Dilara Usta, Fatos Korkmaz

Abstract:

Background: Individuals who utilise health services maintain relationships with health professionals, insurers, and institutions. The nature of these relationships requires service receivers to have trust in the service providers, because maintaining health services without reciprocal trust is very difficult. Therefore, individual evaluations of trust within the scope of health services have become increasingly important. Objective: To investigate patients' trust in the health-care system and the relevant socio-demographic characteristics. Methods: This research was conducted using a descriptive design and included 493 literate patients aged 18-65 years who were hospitalised for a minimum of two days at public university and training & research hospitals in Ankara, Turkey. Patients' trust in health-care professionals, insurers, and institutions was investigated. Data were collected using a demographic questionnaire and the Multidimensional Trust in Health-Care Systems Scale between September 2015 and April 2016. Results: The participants' mean age was 47.7±13.1 years; 70% had a moderate income, 69% had a prior hospitalisation, and 63.5% of the patients were satisfied with the health-care services. The mean Multidimensional Trust in Health-Care Systems Scale score for the sample was 61.5±8.3; the provider subscale had a mean of 38.1±5.0, the insurers subscale had a mean of 12.9±3.7, and the institutions subscale had a mean of 10.6±1.9. Conclusion: Patients' level of trust in the health-care system was above average, and the trust level of patients with higher educational and socio-economic levels was lower than that of the other patients. Health-care professionals should raise awareness about the significance of trust in the health-care system.

Keywords: delivery of health care, health care system, nursing, patients, trust

Procedia PDF Downloads 340
1055 Re-Analyzing Energy-Conscious Design

Authors: Svetlana Pushkar, Oleg Verbitsky

Abstract:

An energy-conscious design for a classroom in a hot-humid climate is reanalyzed. The hypothesis of this study is that the use of photovoltaic (PV) electricity generation in a building's operational energy consumption will lead to a re-analysis of the energy-conscious design. Therefore, the objective of this study is to reanalyze the energy-conscious design by evaluating the environmental impact of operational energy with PV electricity generation. Using the hierarchical design structure of Eco-indicator 99, the alternatives for the energy-conscious variables are statistically evaluated by applying a two-stage nested (hierarchical) ANOVA. The recommendations for the preferred solutions for the glazing type, wall insulation, roof insulation, window size, roof mass, and window shading design alternatives changed (for example, the glazing recommendation changed from low-emissivity, green, and double-glazed windows to low-emissivity glazing only), whereas those for the lighting control system and infiltration did not. Such analysis of operational energy can be described as environment-conscious analysis.
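A two-stage nested ANOVA partitions the total variation into a between-groups stage, a within-group (nested) stage, and error. A minimal balanced-design sketch with toy data (not the study's Eco-indicator 99 scores):

```python
def nested_anova(data):
    """Sums of squares for a balanced two-stage nested (hierarchical)
    ANOVA. `data[a][b]` is the list of replicate observations for
    level b of factor B nested within level a of factor A."""
    all_obs = [y for a_cells in data for cell in a_cells for y in cell]
    grand = sum(all_obs) / len(all_obs)
    n = len(data[0][0])            # replicates per cell
    b = len(data[0])               # B levels nested in each A level
    ss_a = ss_ba = ss_e = 0.0
    for a_cells in data:
        a_obs = [y for cell in a_cells for y in cell]
        a_mean = sum(a_obs) / len(a_obs)
        ss_a += b * n * (a_mean - grand) ** 2        # between A levels
        for cell in a_cells:
            cell_mean = sum(cell) / n
            ss_ba += n * (cell_mean - a_mean) ** 2   # B within A
            ss_e += sum((y - cell_mean) ** 2 for y in cell)  # error
    return ss_a, ss_ba, ss_e

# 2 A levels, 2 nested B levels each, 2 replicates per cell.
ss_a, ss_ba, ss_e = nested_anova([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
```

The F-ratios then follow from the mean squares: F_A = MS_A / MS_B(A) tests the top-level factor against the nested one, and F_B(A) = MS_B(A) / MS_E tests the nested factor against error.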

Keywords: ANOVA, Eco-Indicator 99, energy-conscious design, hot–humid climate, photovoltaic

Procedia PDF Downloads 164
1054 Prioritization Assessment of Housing Development Risk Factors: A Fuzzy Hierarchical Process-Based Approach

Authors: Yusuf Garba Baba

Abstract:

The construction industry and housing subsector are fraught with risks that have the potential of negatively impacting on the achievement of project objectives. The success or otherwise of most construction projects depends to a large extent on how well these risks have been managed. The recent paradigm shift by the subsector to the use of a formal risk management approach, in contrast to the hitherto-used rules of thumb, means that risks must not only be identified but also properly assessed and responded to in a systematic manner. The study focused on identifying risks associated with housing development projects and on prioritisation assessment of the identified risks in order to provide a basis for informed decisions. The study used a three-step identification framework: review of literature for similar projects, expert consultation, and a questionnaire-based survey to identify potential risk factors. The Delphi survey method was employed in carrying out the relative prioritisation assessment of the risk factors using computer-based Analytical Hierarchical Process (AHP) software. The results show that 19 out of the 50 risks significantly impact housing development projects. The study concludes that although a significant number of risk factors have been identified as relevant to and impacting housing construction projects, the economic risk group and, in particular, ‘changes in demand for houses’ is prioritised by most developers as posing a threat to the achievement of their housing development objectives. Unless these risks are carefully managed, their effects will continue to impede success in these projects. The study recommends the adoption and use of the combination of a multi-technique identification framework and AHP prioritisation assessment methodology as a suitable model for the assessment of risks in housing development projects.
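A hedged sketch of the AHP priority step underlying the assessment above: a pairwise comparison matrix on Saaty's 1-9 scale is reduced to a priority vector and checked for consistency. The three risk factors and all judgement values are hypothetical, not taken from the study.

```python
import numpy as np

# Hypothetical pairwise comparisons for three risk factors
# (rows/cols: demand change, cost overrun, design change).
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
])
weights = (A / A.sum(axis=0)).mean(axis=1)   # normalised-column average
lam = (A @ weights / weights).mean()         # principal eigenvalue estimate
ci = (lam - len(A)) / (len(A) - 1)           # consistency index
cr = ci / 0.58                               # 0.58 = Saaty's random index, n=3
print(np.round(weights, 3), round(cr, 3))
```

A consistency ratio below 0.1 is conventionally taken to mean the judgements are coherent enough to trust the resulting priorities.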

Keywords: risk management, risk identification, risk analysis, analytic hierarchical process

Procedia PDF Downloads 91
1053 A Correlative Study of Heating Values of Saw Dust and Rice Husks in the Thermal Generation of Electricity

Authors: Muhammad Danladi, Muhammad Bura Garba, Muhammad Yahaya, Dahiru Muhammad

Abstract:

Biomass is one of the primary sources of energy supply, contributing about 78% of Nigeria's supply. In this work, a comparative analysis of the heating values of sawdust and rice husks in the thermal generation of electricity was carried out. In the study, different masses of biomass were used and the corresponding electromotive force (e.m.f.) in millivolts was obtained. A graph of e.m.f. was plotted against the mass of each biomass and a gradient was obtained. Bar graphs were plotted to represent the values of e.m.f. and the masses of the biomass. Also, a graph of e.m.f. against the heating values of sawdust and rice husks was plotted, and in each case, as the e.m.f. increases, the heating value also increases. The result shows that sawdust, with a gradient of 0.033 mV/g and an intercept of 3.5, had the highest gradient, followed by rice husks with a gradient of 0.026 mV/g and an intercept of 2.6. It is, therefore, concluded that sawdust is the more efficient of the two types of biomass in the thermal generation of electricity.
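The gradient and intercept quoted above come from fitting a straight line to mass/e.m.f. pairs. A minimal least-squares sketch follows; the five mass and e.m.f. readings are hypothetical stand-ins for the measured data.

```python
# Hypothetical mass (g) and e.m.f. (mV) readings for one biomass sample.
masses = [10, 20, 30, 40, 50]
emf = [3.9, 4.2, 4.5, 4.8, 5.1]

n = len(masses)
mean_m = sum(masses) / n
mean_e = sum(emf) / n
# ordinary least squares: slope = Sxy / Sxx, intercept from the means
gradient = sum((m - mean_m) * (e - mean_e) for m, e in zip(masses, emf)) \
           / sum((m - mean_m) ** 2 for m in masses)
intercept = mean_e - gradient * mean_m
print(round(gradient, 3), round(intercept, 2))   # mV/g and mV
```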

Keywords: biomass, electricity, thermal, generation

Procedia PDF Downloads 65
1052 A Formal Property Verification for Aspect-Oriented Programs in Software Development

Authors: Moustapha Bande, Hakima Ould-Slimane, Hanifa Boucheneb

Abstract:

Software development for complex systems requires efficient and automatic tools that can be used to verify the satisfiability of critical properties such as security ones. With the emergence of Aspect-Oriented Programming (AOP), considerable work has been done in order to better modularize the separation of concerns in software design and implementation. The goal is to prevent cross-cutting concerns from being scattered across the multiple modules of the program and tangled with other modules. One of the key challenges in aspect-oriented programs is to ensure that all the pieces put together at weaving time satisfy the overall system requirements. Our paper focuses on this problem and proposes a formal verification approach for a given property of the woven program. The approach is based on the control flow graph (CFG) of the woven program and the use of a satisfiability modulo theories (SMT) solver to check whether each property (represented by one aspect) is satisfied once the weaving is done.
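To make the CFG-based idea concrete, here is a toy sketch: a woven program's control flow is a small graph, and a safety property ("the logging advice runs on every path to exit") is checked over all paths. The CFG, node names, and property are hypothetical; a real implementation would encode path constraints for an SMT solver rather than enumerate paths.

```python
# Hypothetical woven-program CFG; "log" is aspect advice woven on one branch.
cfg = {
    "entry": ["check"],
    "check": ["log", "work"],
    "log":   ["work"],
    "work":  ["exit"],
    "exit":  [],
}

def paths(node, acc=()):
    """Yield every entry-to-exit path of this (acyclic) CFG."""
    acc = acc + (node,)
    if not cfg[node]:
        yield acc
    for nxt in cfg[node]:
        yield from paths(nxt, acc)

# The property fails on any path that never reaches the woven advice.
violating = [p for p in paths("entry") if "log" not in p]
print(len(violating), violating)
```

A single violating path is a counterexample showing the weaving does not enforce the property on all branches, which is exactly the kind of answer an SMT query would return.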

Keywords: aspect-oriented programming, control flow graph, property verification, satisfiability modulo theories

Procedia PDF Downloads 150
1051 Time Series Analysis on the Production of Fruit Juice: A Case Study of National Horticultural Research Institute (NIHORT) Ibadan, Oyo State

Authors: Abiodun Ayodele Sanyaolu

Abstract:

The research was carried out to investigate the time series analysis of the quarterly production of fruit juice at the National Horticultural Research Institute, Ibadan, from 2010 to 2018. The documentary method of data collection was used, and the methods of least squares and moving averages were used in the analysis. From the calculation and the graph, it was clear that there were increasing, decreasing, and uniform movements in both the graph of the original data and the tabulated quarterly values of the original data. Time series analysis was used to detect the trend in fruit juice production, which appears favourable over the period; the models used to forecast are the additive and multiplicative models. Since it was observed that the production of fruit juice is usually high in January of every year, it is strongly advised that the National Horticultural Research Institute should make more provision for fruit juice storage outside this period of the year.
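The two methods named above can be sketched in a few lines: a centred 4-quarter moving average to smooth seasonality, and a least-squares line for the trend. The quarterly production figures are hypothetical illustrations, not NIHORT data.

```python
# Hypothetical quarterly production (litres) over two years.
q = [120, 90, 80, 110, 130, 95, 85, 118]

# Centred moving average: average two adjacent 4-quarter means so the
# smoothed value lines up with an actual quarter.
cma = [(sum(q[i:i + 4]) / 4 + sum(q[i + 1:i + 5]) / 4) / 2
       for i in range(len(q) - 4)]

# Least-squares trend line y = a + b*t over t = 0..7.
n = len(q)
t = list(range(n))
b = (n * sum(ti * yi for ti, yi in zip(t, q)) - sum(t) * sum(q)) \
    / (n * sum(ti ** 2 for ti in t) - sum(t) ** 2)
a = (sum(q) - b * sum(t)) / n
print([round(v, 2) for v in cma], round(b, 2))
```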

Keywords: fruit juice, least square, multiplicative models, time series

Procedia PDF Downloads 119
1050 Agglomerative Hierarchical Clustering Based on Morphometric Parameters of the Populations of Labeo rohita

Authors: Fayyaz Rasool, Naureen Aziz Qureshi, Shakeela Parveen

Abstract:

Labeo rohita populations from five geographical locations in the hatchery and riverine system of Punjab, Pakistan, were studied for clustering on the basis of similarities and differences in morphometric parameters within the species. Agglomerative Hierarchical Clustering (AHC) was done using the Pearson correlation coefficient and the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) as the agglomeration method in XLSTAT 2012 version 1.02. A dendrogram built from the morphometrics of the representative samples of each site divided the populations of Labeo rohita into five major clusters or classes. The variance decomposition for the optimal classification remained 19.24% for within-class variation and 80.76% for between-class differences. The representative central objects of each class, the distances between the class centroids, and the distances between the central objects of the classes were generated by the analysis. A measurable distinction between the classes of the populations of Labeo rohita was indicated in this study, which reflects the impacts of the changing environment and other possible factors influencing the level of variation among populations of the same species.
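A minimal pure-Python sketch of agglomerative clustering with UPGMA (unweighted average) linkage, the method named above. The five populations and their two standardised morphometric values are hypothetical; the loop repeatedly merges the pair of clusters with the smallest average pairwise distance.

```python
import math

# Hypothetical standardised morphometric vectors for five populations.
pops = {
    "P1": (1.0, 2.0), "P2": (1.1, 2.1),
    "P3": (4.0, 5.0), "P4": (4.2, 5.1), "P5": (7.0, 1.0),
}

def avg_dist(c1, c2):
    # UPGMA: unweighted average of all pairwise member distances
    return sum(math.dist(pops[a], pops[b]) for a in c1 for b in c2) \
           / (len(c1) * len(c2))

clusters = [{k} for k in pops]
merges = []
while len(clusters) > 1:
    i, j = min(((i, j) for i in range(len(clusters))
                for j in range(i + 1, len(clusters))),
               key=lambda ij: avg_dist(clusters[ij[0]], clusters[ij[1]]))
    merges.append((sorted(clusters[i]), sorted(clusters[j])))
    clusters[i] |= clusters[j]
    del clusters[j]
print(merges)
```

The merge order is the dendrogram read bottom-up: nearby populations (here the two hatchery-like points, then the two river-like points) join first.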

Keywords: AHC, Labeo rohita, hatchery, riverine, morphometric

Procedia PDF Downloads 423
1049 Hierarchical Cluster Analysis of Raw Milk Samples Obtained from Organic and Conventional Dairy Farming in Autonomous Province of Vojvodina, Serbia

Authors: Lidija Jevrić, Denis Kučević, Sanja Podunavac-Kuzmanović, Strahinja Kovačević, Milica Karadžić

Abstract:

In the present study, Hierarchical Cluster Analysis (HCA) was applied in order to determine the differences between milk samples originating from a conventional dairy farm (CF) and an organic dairy farm (OF) in AP Vojvodina, Republic of Serbia. The clustering was based on the average values of saturated fatty acid (SFA) content and unsaturated fatty acid (UFA) content obtained for each season; the HCA therefore included the annual SFA and UFA content values. The clustering procedure was carried out on the basis of Euclidean distances and the single-linkage algorithm. The obtained dendrograms indicated that the clustering of UFA in OF was much more uniform compared to the clustering of UFA in CF. In OF, spring stands out from the other seasons. The same can be noticed for CF, where winter is separated from the other seasons. The results could be expected because the fatty acid composition is greatly influenced by the season and the nutrition of dairy cows during the year.
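The same setup (Euclidean distances, single linkage) can be reproduced with SciPy's hierarchical-clustering routines. The four seasonal (SFA, UFA) averages below are hypothetical illustrative values for one farm, not the study's measurements; the sketch only shows how one season separates from the rest.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

seasons = ["winter", "spring", "summer", "autumn"]
X = np.array([
    [70.1, 26.0],   # winter  (hypothetical SFA, UFA in g/100 g fat)
    [64.0, 31.5],   # spring
    [66.2, 29.8],   # summer
    [66.8, 29.1],   # autumn
])
Z = linkage(X, method="single", metric="euclidean")   # single-linkage HCA
labels = fcluster(Z, t=2, criterion="maxclust")       # cut into two clusters
print(dict(zip(seasons, labels)))
```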

Keywords: chemometrics, clustering, food engineering, milk quality

Procedia PDF Downloads 254
1048 Hierarchical Queue-Based Task Scheduling with CloudSim

Authors: Wanqing You, Kai Qian, Ying Qian

Abstract:

The concepts of Cloud Computing provide users with infrastructure, platform, and software as a service, which makes those services more accessible to people via the Internet. To better analyse the performance of Cloud Computing provisioning policies and resource-allocation strategies, a toolkit named CloudSim was proposed. With CloudSim, the Cloud Computing environment can be easily constructed by modelling and simulating cloud computing components, such as datacenters, hosts, and virtual machines. A good scheduling strategy is key to achieving load balancing among different machines and to improving the utilization of basic resources. The existing scheduling algorithms may work well in some presumptive cases on a single machine; however, they are unable to make the best decision for the unforeseen future. In a real-world scenario, there would be numerous tasks and several virtual machines working in parallel. Based on the concept of multiple queues, this paper presents a new scheduling algorithm to schedule tasks with CloudSim by taking into account several parameters: the machines’ capacity, the priority of tasks, and the history log.
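A toy sketch of the queue-based idea: tasks are ordered by priority, each VM tracks when it becomes free, and each task is dispatched to the earliest-free VM. The task names, priorities, lengths (in millions of instructions), and VM capacities (MIPS) are hypothetical, and this is a simplification of what a CloudSim broker would do.

```python
import heapq
from collections import namedtuple

Task = namedtuple("Task", "name priority length")
# Hypothetical workload; lower priority number = more urgent (served first).
tasks = sorted([Task("t1", 2, 400), Task("t2", 1, 900),
                Task("t3", 1, 300), Task("t4", 3, 200)],
               key=lambda t: t.priority)

vms = [("vm0", 100), ("vm1", 200)]                  # (name, MIPS capacity)
ready = [(0.0, name, mips) for name, mips in vms]   # (busy-until, name, mips)
heapq.heapify(ready)

schedule = []
for task in tasks:
    busy, name, mips = heapq.heappop(ready)         # earliest-free VM
    finish = busy + task.length / mips              # seconds to complete
    schedule.append((task.name, name, round(finish, 1)))
    heapq.heappush(ready, (finish, name, mips))
print(schedule)
```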

Keywords: hierarchical queue, load balancing, CloudSim, information technology

Procedia PDF Downloads 397
1047 Unlocking the Future of Grocery Shopping: Graph Neural Network-Based Cold Start Item Recommendations with Reverse Next Item Period Recommendation (RNPR)

Authors: Tesfaye Fenta Boka, Niu Zhendong

Abstract:

Recommender systems play a crucial role in connecting individuals with the items they require, as is particularly evident in the rapid growth of online grocery shopping platforms. These systems predominantly rely on user-centered recommendations, where items are suggested based on individual preferences, garnering considerable attention and adoption. However, our focus lies on the item-centered recommendation task within the grocery shopping context. In the reverse next item period recommendation (RNPR) task, we are presented with a specific item and challenged to identify potential users who are likely to consume it in the upcoming period. Despite the ever-expanding inventory of products on online grocery platforms, the cold start item problem persists, posing a substantial hurdle in delivering personalized and accurate recommendations for new or niche grocery items. To address this challenge, we propose a Graph Neural Network (GNN)-based approach. By capitalizing on the inherent relationships among grocery items and leveraging users' historical interactions, our model aims to provide reliable and context-aware recommendations for cold-start items. This integration of GNN technology holds the promise of enhancing recommendation accuracy and catering to users' individual preferences. This research contributes to the advancement of personalized recommendations in the online grocery shopping domain. By harnessing the potential of GNNs and exploring item-centered recommendation strategies, we aim to improve the overall shopping experience and satisfaction of users on these platforms.
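The cold-start mechanism described above can be sketched very simply: a new item has no interaction history, so its representation is aggregated from its graph neighbours (one mean-aggregation step, in the spirit of a single GraphSAGE-style layer), and users are then ranked for that item, which is the RNPR direction. All item names, edges, and embeddings below are hypothetical random vectors, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical learned embeddings for warm items and users.
item_emb = {"milk": rng.normal(size=4), "oat_milk": rng.normal(size=4),
            "soy_milk": rng.normal(size=4)}
user_emb = {"u1": rng.normal(size=4), "u2": rng.normal(size=4)}
# Graph edges for the cold item (e.g. same-category co-purchase links).
neighbours = {"almond_milk": ["milk", "oat_milk", "soy_milk"]}

# One mean-aggregation step gives the cold item a usable embedding.
cold = np.mean([item_emb[n] for n in neighbours["almond_milk"]], axis=0)

# Item-centered (RNPR) scoring: rank users for this given item.
scores = {u: float(e @ cold) for u, e in user_emb.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)
```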

Keywords: recommender systems, cold start item recommendations, online grocery shopping platforms, graph neural networks

Procedia PDF Downloads 60
1046 Linguistic Codes: Food as a Class Indicator

Authors: Elena Valeryevna Pozhidaeva

Abstract:

This linguistic case study is based on the interaction between social position and foodways. In every culture there is a hierarchical social system with means to express and identify a person's social status. Food serves as a class indicator. The British, being a verbal nation, use words as a preferred medium for signalling and recognising social status. The linguistic analysis reflects a symbolic hierarchy determined by social groups in the UK. The linguistic class indicators of the British hierarchical system are detectable directly, in speech acts. They are articulated in every aspect of life, from food preferences and what one chooses to call them to the names of the meals. The linguistic class indicators can also be detected indirectly, through symbolic meaning or via the choice of mealtime, its class (e.g., the classes of tea or marmalade), the place to buy food (the class of the supermarket), and the place to consume it (places for eating out and the frequency of such practices). Under analysis in this study are not only food items and their names but also such categories as cutlery as a class indicator and the act of eating together as a socially significant practice and a class indicator. Current social changes and economic developments are considered, along with their influence on the appearance and transformation of class indicators.

Keywords: linguistic, class, social indicator, English, food class

Procedia PDF Downloads 374
1045 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis

Authors: Petr Gurný

Abstract:

One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. In this paper, the possibility of determining a financial institution’s PD using credit-scoring models is discussed. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression, and probit regression) from a sample of almost three hundred US commercial banks. These models are then compared and verified on a control sample with a view to choosing the best one. The second part of the paper applies the chosen model to a portfolio of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. For this reason, the second task in this paper is to estimate the probability distribution of the future PD for the Czech banks. To this end, the values of particular indicators are sampled randomly and the PD distribution is estimated, under the assumption that the indicators are distributed according to a multidimensional subordinated Lévy model (the Variance Gamma and Normal Inverse Gaussian models, in particular). Although the obtained results show that all the banks are relatively healthy, there is still a high chance that 'a financial crisis' will occur, at least in terms of probability. This is indicated by the estimation of various quantiles in the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.
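The logit scoring step mentioned above reduces to a logistic link applied to a linear combination of indicators. The sketch below is a hedged illustration: the coefficient names, their values, and the bank's indicator values are hypothetical, not the paper's estimates.

```python
import math

# Hypothetical fitted logit coefficients and one bank's indicators (%).
coef = {"intercept": -4.0, "capital_ratio": -0.25, "npl_ratio": 0.35}
bank = {"capital_ratio": 16.0, "npl_ratio": 4.2}

# linear predictor, then the logistic link gives the probability of default
z = coef["intercept"] + sum(coef[k] * v for k, v in bank.items())
pd = 1.0 / (1.0 + math.exp(-z))
print(round(pd, 4))
```

In the paper's second part, this mapping would be evaluated on indicator values drawn from the subordinated Lévy model, turning the PD point estimate into a distribution.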

Keywords: credit-scoring models, multidimensional subordinated Lévy model, probability of default

Procedia PDF Downloads 430
1044 Nanoarchitectures Cu2S Functions as Effective Surface-Enhanced Raman Scattering Substrates for Molecular Detection Application

Authors: Yu-Kuei Hsu, Ying-Chu Chen, Yan-Gu Lin

Abstract:

The hierarchical Cu2S nanostructured film is successfully fabricated via an electroplated ZnO nanorod array as a template and a subsequent chemical-solution process for the growth of Cu2S, for application in surface-enhanced Raman scattering (SERS) detection. The as-grown Cu2S nanostructures were thermally treated at 150-300 °C under a nitrogen atmosphere to improve crystal quality, which unexpectedly induced Cu nanoparticles on the surface of the Cu2S. The structure and composition of the thermally treated Cu2S nanostructures were carefully analyzed by SEM, XRD, XPS, and XAS. Using 4-aminothiophenol (4-ATP) as the probe molecule, SERS experiments showed that the thermally treated Cu2S nanostructures exhibit excellent detection performance and could be used as active and cost-effective SERS substrates for ultrasensitive detection. Additionally, these novel hierarchical SERS substrates show good reproducibility and a linear dependence between analyte concentrations and intensities, revealing the advantage of this method for easy scale-up production.

Keywords: cuprous sulfide, copper, nanostructures, surface-enhanced raman scattering

Procedia PDF Downloads 385
1043 Estimating the Probability of Winning the Best Actor/Actress Award Conditional on the Best Picture Nomination with Bayesian Hierarchical Models

Authors: Svetlana K. Eden

Abstract:

Movies and TV shows have long become part of modern culture. We all have our preferred genre, story, actors, and actresses. However, can we objectively discern good acting from the bad? As laymen, we are probably not objective, but what about the Oscar academy members? Are their votes based on objective measures? Oscar academy members are probably also biased due to many factors, including their professional affiliations or advertisement exposure. Heavily advertised films bring more publicity to their cast and are likely to have bigger budgets. Because a bigger budget may also help earn a Best Picture (BP) nomination, we hypothesize that best actor/actress (BA) nominees from BP-nominated movies would have higher chances of winning the award than those BA nominees from non-BP-nominated films. To test this hypothesis, three Bayesian hierarchical models are proposed, and their performance is evaluated. The results from all three models largely support our hypothesis. Depending on the proportion of BP nominations among BA nominees, the odds ratios (estimated over expected) of winning the BA award conditional on BP nomination vary from 2.8 [0.8-7.0] to 4.3 [2.0, 15.8] for actors and from 1.5 [0.0, 12.2] to 5.4 [2.7, 14.2] for actresses.
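The quantity being estimated above is, at its core, a conditional odds ratio. The sketch below computes the point estimate from a 2x2 table of hypothetical nominee counts (not the study's data); the Bayesian hierarchical models add partial pooling and the credible intervals quoted in the abstract on top of this.

```python
# Hypothetical counts of Best Actor nominees, split by whether their film
# also received a Best Picture (BP) nomination.
won_bp, lost_bp = 40, 60   # nominees with a BP-nominated film
won_no, lost_no = 20, 80   # nominees without one

odds_bp = won_bp / lost_bp          # odds of winning given BP nomination
odds_no = won_no / lost_no          # odds of winning without it
odds_ratio = odds_bp / odds_no
print(round(odds_ratio, 2))
```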

Keywords: Oscar, best picture, best actor/actress, bias

Procedia PDF Downloads 199
1042 Synthesis, Characterization, and Catalytic Application of Modified Hierarchical Zeolites

Authors: A. Feliczak Guzik, I. Nowak

Abstract:

Zeolites, classified as microporous materials, are a large group of crystalline aluminosilicate materials commonly used in the chemical industry. These materials are characterized by a large specific surface area, high adsorption capacity, and hydrothermal and thermal stability. However, the micropores present in them impose strong mass-transfer limitations, resulting in low catalytic performance. Consequently, mesoporous (hierarchical) zeolites have attracted considerable attention from researchers. These materials possess additional porosity in the mesopore size region (2-50 nm according to IUPAC). Mesoporous zeolites, based on commercial MFI-type zeolites modified with silver, were synthesized as follows: 0.5 g of zeolite was dispersed in a mixture containing CTABr (template), water, ethanol, and ammonia under ultrasound for 30 min at 65 °C. The silicon source, tetraethyl orthosilicate, was then added and the mixture stirred for 4 h. After this time, silver(I) nitrate was added. In a further step, the whole mixture was filtered and washed with a water:ethanol mixture. The template was removed by calcination at 550 °C for 5 h. All the materials obtained were characterized by the following techniques: X-ray diffraction (XRD), transmission electron microscopy (TEM), scanning electron microscopy (SEM), nitrogen adsorption/desorption isotherms, and FTIR spectroscopy. X-ray diffraction and low-temperature nitrogen adsorption/desorption isotherms revealed additional secondary porosity. Moreover, the structure of the commercial zeolite was preserved during most of the material syntheses. The aforementioned materials were used in the epoxidation of cyclohexene using conventional heating and microwave-radiation heating. The composition of the reaction mixture was analyzed every hour by gas chromatography. As a result, a cyclohexene conversion of about 60% and high selectivity to the desired reaction products, i.e., 1,2-epoxycyclohexane and 1,2-cyclohexanediol, were obtained.

Keywords: catalytic application, characterization, epoxidation, hierarchical zeolites, synthesis

Procedia PDF Downloads 67